Sample records for object-relational database management

  1. The Network Configuration of an Object Relational Database Management System

    NASA Technical Reports Server (NTRS)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) require developers to have knowledge of the UNIX operating system as well as of all the features of the Oracle Server. The server is an object-relational database management system (DBMS). With distributed processing, work is split between the database server and the client application programs: the DBMS handles all the responsibilities of the server, while the workstations running the database application concentrate on the interpretation and display of data.

  2. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  3. NETMARK: A Schema-less Extension for Relational Databases for Managing Semi-structured Data Dynamically

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

    An object-relational database management system is an integrated, hybrid approach that combines the best practices of the relational model, with its SQL queries, and of the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
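
    The abstract does not spell out NETMARK's internal schema, but the schema-less idea it describes (shredding arbitrary XML/HTML into node-level records so that keywords can be matched against both "context" and "content") can be sketched in a few lines. The following is a minimal, hypothetical illustration in Python with SQLite; the table and column names are invented, and the real system used Oracle 8i with physical-address data types.

      import sqlite3
      import xml.etree.ElementTree as ET

      # Hypothetical node-decomposition schema: one row per XML node.
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE node (doc TEXT, path TEXT, content TEXT)")

      def load(doc_id, xml_text):
          """Shred a document into (doc, path, content) rows."""
          def walk(elem, prefix):
              path = prefix + "/" + elem.tag
              text = (elem.text or "").strip()
              if text:
                  conn.execute("INSERT INTO node VALUES (?, ?, ?)",
                               (doc_id, path, text))
              for child in elem:
                  walk(child, path)
          walk(ET.fromstring(xml_text), "")

      load("report-1", "<report><title>Tile tests</title>"
                       "<body>Thermal stress data</body></report>")

      # Keyword search over both context (node path) and content (node text):
      hits = conn.execute(
          "SELECT doc, path, content FROM node "
          "WHERE path LIKE ? OR content LIKE ?",
          ("%title%", "%stress%")).fetchall()
      print(hits)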

  4. Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Managements

    NASA Technical Reports Server (NTRS)

    Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri

    2006-01-01

    NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model, utilizing Structured Query Language (SQL), with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards, such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.

  5. An Extensible "SCHEMA-LESS" Database Framework for Managing High-Throughput Semi-Structured Documents

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

    An object-relational database management system is an integrated, hybrid approach that combines the best practices of the relational model, with its SQL queries, and of the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.

  6. An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)

    2002-01-01

    An object-relational database management system is an integrated, hybrid approach that combines the best practices of the relational model, with its SQL queries, and of the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.

  7. A survey of commercial object-oriented database management systems

    NASA Technical Reports Server (NTRS)

    Atkins, John

    1992-01-01

    The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 1970s, E. F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the relational model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and the performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than the relational model provided. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The object-oriented model was the result.

  8. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    ERIC Educational Resources Information Center

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  9. Improving data management and dissemination in web based information systems by semantic enrichment of descriptive data aspects

    NASA Astrophysics Data System (ADS)

    Gebhardt, Steffen; Wehrmann, Thilo; Klinger, Verena; Schettler, Ingo; Huth, Juliane; Künzer, Claudia; Dech, Stefan

    2010-10-01

    The German-Vietnamese water-related information system for the Mekong Delta (WISDOM) project supports business processes in Integrated Water Resources Management in Vietnam. Multiple disciplines bring together earth-observation and ground-based themes, such as environmental monitoring, water management, demographics, economy, information technology, and infrastructural systems. This paper introduces the components of the web-based WISDOM system, including the data, logic and presentation tiers. It focuses on the data models upon which the database management system is built, including techniques for tagging or linking metadata with the stored information. The model uses ordered groupings of spatial, thematic and temporal reference objects to semantically tag datasets and enable fast data retrieval, such as finding all data in a specific administrative unit belonging to a specific theme. A spatial extension of the PostgreSQL database is employed. This object-relational database was chosen over a purely relational database so that spatial objects can be tagged to tabular data, improving the retrieval of census and observational data at regional, provincial, and local levels. While the spatial database is less suited to processing raster data, a work-around built into WISDOM permits efficient management of both raster and vector data. The data model also incorporates styling aspects of the spatial datasets through styled layer descriptions (SLD) and web mapping service (WMS) layer specifications, allowing retrieval of rendered maps. Metadata elements of the spatial data are based on the ISO 19115 standard. XML-structured information for the SLD and metadata is stored in an XML database. The data models and the data management system are robust for managing the large quantity of spatial objects, sensor observations, and census and document data. The operational WISDOM information system prototype contains modules for data management, automatic data integration, and web services for data retrieval, analysis, and distribution. The graphical user interfaces facilitate metadata cataloguing, data warehousing, web sensor data analysis and thematic mapping.
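
    As a concrete illustration of the tagging scheme described above, the sketch below links datasets to spatial, thematic and temporal reference objects through a join table, so that a query such as "all data in a specific administrative unit belonging to a specific theme" becomes two indexable joins. It is a minimal Python/SQLite approximation; the table layout and example labels are assumptions, and WISDOM itself runs on PostgreSQL with a spatial extension.

      import sqlite3

      # Illustrative schema: datasets tagged with typed reference objects.
      db = sqlite3.connect(":memory:")
      db.executescript("""
      CREATE TABLE dataset    (id INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE ref_object (id INTEGER PRIMARY KEY, kind TEXT, label TEXT);
      CREATE TABLE tag        (dataset_id INTEGER, ref_id INTEGER);
      """)
      db.executemany("INSERT INTO dataset VALUES (?, ?)",
                     [(1, "census_2009"), (2, "water_quality_2009")])
      db.executemany("INSERT INTO ref_object VALUES (?, ?, ?)",
                     [(10, "spatial", "Can Tho province"),
                      (20, "thematic", "demographics")])
      db.executemany("INSERT INTO tag VALUES (?, ?)", [(1, 10), (1, 20), (2, 10)])

      # "All data in a given administrative unit belonging to a given theme":
      q = """
      SELECT d.name FROM dataset d
      JOIN tag ts ON ts.dataset_id = d.id
      JOIN ref_object s ON s.id = ts.ref_id AND s.kind = 'spatial'  AND s.label = ?
      JOIN tag tt ON tt.dataset_id = d.id
      JOIN ref_object t ON t.id = tt.ref_id AND t.kind = 'thematic' AND t.label = ?
      """
      print(db.execute(q, ("Can Tho province", "demographics")).fetchall())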

  10. A case study for a digital seabed database: Bohai Sea engineering geology database

    NASA Astrophysics Data System (ADS)

    Tianyun, Su; Shikui, Zhai; Baohua, Liu; Ruicai, Liang; Yanpeng, Zheng; Yong, Wang

    2006-07-01

    This paper discusses the design of the ORACLE-based Bohai Sea engineering geology database structure, covering requirements analysis, conceptual structure analysis, logical structure analysis, physical structure analysis and security design. In the study, we used the object-oriented Unified Modeling Language (UML) to model the conceptual structure of the database and used the powerful data management functions that the object-relational database ORACLE provides to organize and manage the storage space and improve its security. By these means, the database can deliver rapid and highly effective performance in data storage, maintenance and query, satisfying the application requirements of the Bohai Sea Oilfield Paradigm Area Information System.

  11. Object-oriented structures supporting remote sensing databases

    NASA Technical Reports Server (NTRS)

    Wichmann, Keith; Cromp, Robert F.

    1995-01-01

    Object-oriented databases show promise for modeling the complex interrelationships pervasive in scientific domains. To examine the utility of this approach, we have developed an Intelligent Information Fusion System based on this technology and applied it to the problem of managing an active repository of remotely sensed satellite scenes. The design and implementation of the system are compared and contrasted with conventional relational database techniques, followed by a presentation of the underlying object-oriented data structures used to enable fast indexing into the data holdings.

  12. Accessing and distributing EMBL data using CORBA (common object request broker architecture).

    PubMed

    Wang, L; Rodriguez-Tomé, P; Redaschi, N; McNeil, P; Robinson, A; Lijnzaad, P

    2000-01-01

    The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence™, an object/relational tool. Techniques of loader development and 'live object caching' with persistent objects yield a smart live-object cache in which objects are created on demand and managed by an evictor-pattern mechanism. The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their own applications by building clients to our CORBA servers, which can be integrated into existing systems.
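
    The "live object cache" managed by an evictor pattern, as described above, can be summarized independently of CORBA: objects are built on demand from the backing store, served from the cache while in use, and evicted in least-recently-used order when the cache is full. The sketch below is a minimal Python rendering of that pattern under stated assumptions; it does not reproduce the EBI servers, and the loader standing in for the Persistence/Oracle mapping is hypothetical.

      from collections import OrderedDict

      class EvictorCache:
          """Bounded 'live object' cache: objects are created on demand from
          the backing store; the least-recently-used entry is evicted when
          the cache is full (a sketch of the evictor pattern, not EBI code)."""
          def __init__(self, loader, capacity=4):
              self._loader = loader        # callable: key -> freshly built object
              self._cap = capacity
              self._live = OrderedDict()   # key -> object, in LRU order

          def get(self, key):
              if key in self._live:
                  self._live.move_to_end(key)          # mark as recently used
              else:
                  self._live[key] = self._loader(key)  # create on demand
                  if len(self._live) > self._cap:
                      self._live.popitem(last=False)   # evict the LRU entry
              return self._live[key]

      # Hypothetical loader standing in for an object/relational Oracle query.
      cache = EvictorCache(loader=lambda acc: {"accession": acc, "sequence": "..."})
      entry = cache.get("X56734")   # built once, then served from the cache
      print(entry["accession"])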

  13. Accessing and distributing EMBL data using CORBA (common object request broker architecture)

    PubMed Central

    Wang, Lichun; Rodriguez-Tomé, Patricia; Redaschi, Nicole; McNeil, Phil; Robinson, Alan; Lijnzaad, Philip

    2000-01-01

    Background: The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. Results: A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence™, an object/relational tool. Techniques of loader development and 'live object caching' with persistent objects yield a smart live-object cache in which objects are created on demand and managed by an evictor-pattern mechanism. Conclusions: The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their own applications by building clients to our CORBA servers, which can be integrated into existing systems. PMID:11178259

  14. Data, knowledge and method bases in chemical sciences. Part IV. Current status in databases.

    PubMed

    Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Rao, Gollapalli Nagesvara; Ramam, Veluri Anantha; Rao, Sattiraju Veera Venkata Satyanarayana

    2002-01-01

    Computer-readable databases have become an integral part of chemical research, from the planning of data acquisition to the interpretation of the information generated. The databases available today are numerical, spectral and bibliographic. Data representation by different schemes (relational, hierarchical and object-based) is demonstrated. A quality index (QI) throws light on the quality of the data. The objectives, prospects and impact of database activity on expert systems are discussed. The number and size of corporate databases available on international networks have grown beyond manageability, leading to databases about their contents. Subsets of corporate or small databases have been developed by groups of chemists. The features and role of knowledge-based or intelligent databases are described.

  15. Context indexing of digital cardiac ultrasound records in PACS

    NASA Astrophysics Data System (ADS)

    Lobodzinski, S. Suave; Meszaros, Georg N.

    1998-07-01

    Recent wide adoption of the DICOM 3.0 standard by ultrasound equipment vendors created a need for practical clinical implementations of cardiac imaging study visualization, management and archiving. DICOM 3.0 defines only a logical and physical format for exchanging image data (still images, video, patient and study demographics). All DICOM-compliant imaging studies must presently be archived on a 650 MB recordable compact disc. This is a severe limitation for ultrasound applications, where studies 3 to 10 minutes long are common practice. In addition, DICOM digital echocardiography objects require physiological signal indexing, content segmentation and characterization. Since DICOM 3.0 is an interchange standard only, it does not define how to store composite video objects in a database. The goal of this research was therefore to address the issues of efficient storage, retrieval and management of DICOM-compliant cardiac video studies in a distributed PACS environment. Our Web-based implementation has the advantage of accommodating both DICOM-defined entity-relation modules (equipment data, patient data, video format, etc.) in standard relational database tables and digital indexed video with its attributes in an object-relational database. The object-relational data model facilitates content indexing of full-motion cardiac imaging studies through bi-directional hyperlink generation that ties searchable video attributes and related objects to individual video frames in the temporal domain. Benefits realized from the use of bi-directionally hyperlinked data models in an object-relational database include: (1) real-time video indexing during image acquisition, (2) random access and frame-accurate instant playback of previously recorded full-motion imaging data, and (3) time savings from faster and more accurate access to data through multiple navigation mechanisms such as multidimensional queries on an index, queries on a hyperlink attribute, free search and browsing.

  16. Database Technology Activities and Assessment for Defense Modeling and Simulation Office (DMSO) (August 1991-November 1992). A Documented Briefing

    DTIC Science & Technology

    1994-01-01

    ...databases and identifying new data entities, data elements, and relationships. ... Standard data naming conventions, schema, and definition processes ... management system. The use of such a tool could offer: (1) structured support for representation of objects and their relationships to each other (and ... their relationships to related multimedia objects, such as an engineering drawing of the tank object or a satellite image that contains the installation).

  17. An Object-Relational Ifc Storage Model Based on Oracle Database

    NASA Astrophysics Data System (ADS)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, levels of collaboration across professions attract more attention in the architecture, engineering and construction (AEC) industry. To accommodate this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. First, we establish the mapping rules between the data types in the IFC specification and the Oracle database. Second, we design the IFC database according to the relationships among IFC entities. Third, we parse the IFC file and extract the IFC data. Finally, we store the IFC data in the corresponding tables of the IFC database. In our experiments, three different building models are selected to demonstrate the effectiveness of the storage model. A comparison of the experimental statistics shows that IFC data remain lossless during data exchange.

  18. A case Study of Applying Object-Relational Persistence in Astronomy Data Archiving

    NASA Astrophysics Data System (ADS)

    Yao, S. S.; Hiriart, R.; Barg, I.; Warner, P.; Gasson, D.

    2005-12-01

    The NOAO Science Archive (NSA) team is developing a comprehensive domain model to capture the science data in the archive. Java and an object model derived from the domain model well address the application layer of the archive system. However, since an RDBMS is the best-proven technology for data management, the challenge is the paradigm mismatch between the object and relational models. Transparent object-relational mapping (ORM) persistence is a successful solution to this challenge. In the data modeling and persistence implementation of NSA, we use Hibernate, a well-accepted ORM tool, to bridge the object model in the business tier and the relational model in the database tier. Thus, the database is isolated from the Java application. The application queries objects directly, using a DBMS-independent, object-oriented query API, which frees application developers from low-level JDBC and SQL so that they can focus on the domain logic. We present the detailed design of the NSA R3 (Release 3) data model and its object-relational persistence, including mapping, retrieval and caching. Persistence-layer optimization and performance tuning are analyzed. The system is being built on J2EE, so the integration of Hibernate into the EJB container and the transaction management are also explored.
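
    The paper's stack is Hibernate on Java; as a hedged analogue, the sketch below shows the same transparent ORM idea in Python using SQLAlchemy, which likewise maps classes to tables and translates object-level queries into DBMS-specific SQL, isolating the database from the application. The Exposure entity and its columns are invented for illustration and do not come from the NSA R3 data model.

      from sqlalchemy import create_engine, Column, Integer, String
      from sqlalchemy.orm import declarative_base, sessionmaker

      Base = declarative_base()

      class Exposure(Base):
          """Hypothetical archive entity; the real NSA domain model differs."""
          __tablename__ = "exposure"
          id = Column(Integer, primary_key=True)
          telescope = Column(String)
          filter_name = Column(String)

      engine = create_engine("sqlite:///:memory:")  # stands in for the archive RDBMS
      Base.metadata.create_all(engine)
      session = sessionmaker(bind=engine)()

      session.add(Exposure(telescope="KPNO 4m", filter_name="r"))
      session.commit()

      # Query on objects, not tables: the mapper emits the DBMS-specific SQL.
      for exp in session.query(Exposure).filter(Exposure.filter_name == "r"):
          print(exp.id, exp.telescope)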

  19. The representation of manipulable solid objects in a relational database

    NASA Technical Reports Server (NTRS)

    Bahler, D.

    1984-01-01

    This project is concerned with the interface between database management and solid geometric modeling. The desirability of integrating computer-aided design, manufacture, testing, and management into a coherent system is by now well recognized. One proposed configuration for such a system uses a relational database management system as the central focus; the various other functions are linked through their use of a common data representation in the data manager, rather than communicating pairwise. The goal was to integrate a geometric modeling capability with a generic relational data management system in such a way that well-formed questions can be posed and answered about the performance of the system as a whole. One necessary feature of any such system is simplification for purposes of analysis; this, together with system performance considerations, meant that a paramount goal was unity and simplicity of the data structures used.

  20. Dynamic tables: an architecture for managing evolving, heterogeneous biomedical data in relational database management systems.

    PubMed

    Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis

    2007-01-01

    Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
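
    For readers unfamiliar with the vertical schemas mentioned above, the sketch below shows an entity-attribute-value (EAV) table, the prior approach the authors benchmark against: sparse, evolving clinical attributes need no schema change, but reassembling a conventional row requires one conditional aggregate (or self-join) per requested attribute. This is a minimal Python/SQLite illustration with invented clinical attributes, not the paper's column-store engine.

      import sqlite3

      # EAV layout: one row per recorded fact, so sparse, evolving attributes
      # need no ALTER TABLE.
      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE eav (entity INTEGER, attribute TEXT, value TEXT)")
      db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
          (1, "diagnosis", "hypertension"),
          (1, "sbp_mmHg", "142"),
          (2, "diagnosis", "asthma"),   # patient 2 has no blood-pressure fact
      ])

      # Reassembling a row ("pivot") is where EAV pays its price: one
      # MAX/CASE aggregate per attribute requested.
      q = """
      SELECT entity,
             MAX(CASE WHEN attribute = 'diagnosis' THEN value END) AS diagnosis,
             MAX(CASE WHEN attribute = 'sbp_mmHg'  THEN value END) AS sbp_mmHg
      FROM eav GROUP BY entity
      """
      for row in db.execute(q):
          print(row)   # (1, 'hypertension', '142'), (2, 'asthma', None)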

  1. Cardiological database management system as a mediator to clinical decision support.

    PubMed

    Pappas, C; Mavromatis, A; Maglaveras, N; Tsikotis, A; Pangalos, G; Ambrosiadou, V

    1996-03-01

    An object-oriented medical database management system is presented for a typical cardiology center, facilitating epidemiological trials. Object-oriented analysis and design were used for the system design, offering advantages for the integrity and extendibility of medical information systems. The system was developed using object-oriented design and programming methodology, the C++ language and the Borland Paradox relational database management system in an MS Windows NT environment. Particular attention was paid to system compatibility, portability, ease of use, and a suitable design of the patient record, so as to support the decisions of medical personnel in cardiovascular centers. The system was designed to accept complex, heterogeneous, distributed data in various formats and from different kinds of examinations, such as Holter, Doppler and electrocardiography.

  2. Design and implementation of a biomedical image database (BDIM).

    PubMed

    Aubry, F; Badaoui, S; Kaplan, H; Di Paola, R

    1988-01-01

    We developed a biomedical image database (BDIM) that proposes a standardized representation of value arrays, such as images and curves, and of their associated parameters, independently of their acquisition mode, to ease their transmission and processing. It includes three kinds of user-oriented interactions. The network concept was kept as a constraint so that the BDIM could be incorporated into a distributed structure, and we maintained compatibility with the ACR/NEMA communication protocol. The management of arrays and of their associated parameters involves two distinct object bases, linked together via a gateway. The first manages arrays according to their storage mode: long-term storage on optionally on-line mass-storage devices and, for consultation, partial copies of long-term-stored arrays on hard disk. The second manages the associated parameters and the gateway by means of the relational DBMS ORACLE. Parameters are grouped into relations, some of which agree with the groups defined by the ACR/NEMA. The other relations describe objects resulting from the processing of initial objects. These new objects are not described by the ACR/NEMA, but they can be inserted as shadow groups of the ACR/NEMA description. The relations describing the storage, together with their pathnames, constitute the gateway. ORACLE distributed tools and the two-level storage technique will allow the integration of the BDIM into a distributed structure. A query and array-retrieval module (for single arrays or sequences) has access to the relations via a layer that includes a dictionary managed by ORACLE. This dictionary translates ACR/NEMA objects into objects that can be handled by the DBMS. (ABSTRACT TRUNCATED AT 250 WORDS)

  3. An object-oriented approach to the management of meteorological and hydrological data

    NASA Technical Reports Server (NTRS)

    Graves, S. J.; Williams, S. F.; Criswell, E. A.

    1990-01-01

    An interface to several meteorological and hydrological databases has been developed that enables researchers to efficiently access and interrelate data through a customized menu system. By extending a relational database system with object-oriented concepts, each user or group of users may have different 'views' of the data, allowing access to data in customized ways without altering the organization of the database. Applications to COHMEX and WetNet, two earth science projects within NASA Marshall Space Flight Center's Earth Science and Applications Division, are described.

  4. Object-orientated DBMS techniques for time-oriented medical record.

    PubMed

    Pinciroli, F; Combi, C; Pozzi, G

    1992-01-01

    In implementing time-orientated medical record (TOMR) management systems, the relational model has played a big role. Many applications have been developed to extend query and data manipulation languages to the temporal aspects of information. Our experience in developing a TOMR revealed some deficiencies of the relational model, such as: (a) abstract data type definition; (b) a unified view of data at the programming level; (c) management of temporal data; (d) management of signals and images. We identified some initial topics to be addressed by an object-orientated approach to database design. This paper describes the first steps in designing and implementing a TOMR with an object-orientated DBMS.

  5. Utilizing LIDAR data to analyze access management criteria in Utah.

    DOT National Transportation Integrated Search

    2017-05-01

    The primary objective of this research was to increase understanding of the safety impacts across the state related to access management. This was accomplished by using the Light Detection and Ranging (LiDAR) database to evaluate driveway spacing and...

  6. An Overview of the Object Protocol Model (OPM) and the OPM Data Management Tools.

    ERIC Educational Resources Information Center

    Chen, I-Min A.; Markowitz, Victor M.

    1995-01-01

    Discussion of database management tools for scientific information focuses on the Object Protocol Model (OPM) and data management tools based on OPM. Topics include the need for new constructs for modeling scientific experiments, modeling object structures and experiments in OPM, queries and updates, and developing scientific database applications…

  7. A Data Management System for International Space Station Simulation Tools

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.

  8. Implementation of a Computerized Maintenance Management System

    NASA Technical Reports Server (NTRS)

    Shen, Yong-Hong; Askari, Bruce

    1994-01-01

    A primer Computerized Maintenance Management System (CMMS) has been established for the NASA Ames pressure component certification program. The CMMS takes full advantage of the latest computer technology and an SQL relational database to perform periodic services for vital pressure components. The Ames certification program is briefly described, and aspects of the CMMS implementation are discussed as they relate to the certification objectives.

  9. Data structures and organisation: Special problems in scientific applications

    NASA Astrophysics Data System (ADS)

    Read, Brian J.

    1989-12-01

    In this paper we discuss and offer answers to the following questions: What, really, are the benefits of databases in physics? Are scientific databases essentially different from conventional ones? What are the drawbacks of a commercial database management system for use with scientific data? Do they outweigh the advantages? Do database systems have adequate graphics facilities, or is a separate graphics package necessary? SQL as a standard language has deficiencies, but what are they for scientific data in particular? Indeed, is the relational model appropriate anyway? Or should we turn to object-oriented databases?

  10. Extending the data dictionary for data/knowledge management

    NASA Technical Reports Server (NTRS)

    Hydrick, Cecile L.; Graves, Sara J.

    1988-01-01

    Current relational database technology provides the means for efficiently storing and retrieving large amounts of data. By combining techniques learned from the field of artificial intelligence with this technology, it is possible to expand the capabilities of such systems. This paper suggests using the expanded domain concept, an object-oriented organization, and the storing of knowledge rules within the relational database as a solution to the unique problems associated with CAD/CAM and engineering data.

  11. The utilization of neural nets in populating an object-oriented database

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Hill, Scott E.; Cromp, Robert F.

    1989-01-01

    Existing NASA-supported scientific databases are usually developed, managed and populated in a tedious, error-prone and self-limiting way, in terms of what can be described in a relational database management system (DBMS). The next generation of Earth remote-sensing platforms (i.e., the Earth Observation System (EOS)) will be capable of generating data at a rate of over 300 Mb per second from a suite of instruments designed for different applications. What is needed is an innovative approach that creates object-oriented databases that segment, characterize, catalog and are manageable in a domain-specific context, and whose contents are available interactively and in near-real time to the user community. Described here is work in progress that uses an artificial neural net approach to characterize satellite imagery of undefined objects into high-level data objects. The characterized data are then dynamically allocated to an object-oriented database where they can be reviewed and assessed by a user. The definition, development, and evolution of the overall data system model are steps in the creation of an application-driven, knowledge-based scientific information system.

  12. The Kepler DB: a database management system for arrays, sparse arrays, and binary data

    NASA Astrophysics Data System (ADS)

    McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Middour, Christopher; Klaus, Todd C.; Wohler, Bill

    2010-07-01

    The Kepler Science Operations Center stores pixel values on approximately six million pixels collected every 30 minutes, as well as data products that are generated as a result of running the Kepler science processing pipeline. The Kepler Database management system (Kepler DB) was created to act as the repository of this information. After one year of flight usage, Kepler DB is managing 3 TiB of data and is expected to grow to over 10 TiB over the course of the mission. Kepler DB is a non-relational, transactional database where data are represented as one-dimensional arrays, sparse arrays or binary large objects. We discuss Kepler DB's APIs, implementation, usage and deployment at the Kepler Science Operations Center.
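
    The abstract names the three payload kinds (one-dimensional arrays, sparse arrays, binary large objects) without showing an API, so the sketch below is a toy, in-memory stand-in written in Python: values are addressed by a string identifier, and sparse arrays are expanded against a gap marker. All names are invented; the real Kepler DB API is not reproduced here.

      import numpy as np

      class TinyArrayStore:
          """Toy stand-in for a non-relational array store: values addressed
          by a string id, as dense arrays, sparse (index, value) arrays, or
          raw bytes. Purely illustrative; not the Kepler DB interface."""
          def __init__(self):
              self._data = {}

          def write_array(self, fs_id, values):
              self._data[fs_id] = np.asarray(values)

          def write_sparse(self, fs_id, indices, values, length):
              dense = np.full(length, np.nan)       # gaps stay NaN
              dense[np.asarray(indices)] = values
              self._data[fs_id] = dense

          def write_blob(self, fs_id, payload: bytes):
              self._data[fs_id] = payload

          def read(self, fs_id):
              return self._data[fs_id]

      store = TinyArrayStore()
      store.write_array("/cal/pixel/1042", [101.3, 99.8, 102.1])  # 30-min cadences
      store.write_sparse("/pa/flux/5", indices=[0, 2], values=[1.0, 1.2], length=4)
      print(store.read("/pa/flux/5"))   # [1.  nan 1.2 nan]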

  13. The Kepler DB, a Database Management System for Arrays, Sparse Arrays and Binary Data

    NASA Technical Reports Server (NTRS)

    McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Middour, Christopher; Klaus, Todd C.; Wohler, Bill

    2010-01-01

    The Kepler Science Operations Center stores pixel values on approximately six million pixels collected every 30 minutes, as well as data products that are generated as a result of running the Kepler science processing pipeline. The Kepler Database (Kepler DB) management system was created to act as the repository of this information. After one year of flight usage, Kepler DB is managing 3 TiB of data and is expected to grow to over 10 TiB over the course of the mission. Kepler DB is a non-relational, transactional database where data are represented as one-dimensional arrays, sparse arrays or binary large objects. We discuss Kepler DB's APIs, implementation, usage and deployment at the Kepler Science Operations Center.

  14. Using an image-extended relational database to support content-based image retrieval in a PACS.

    PubMed

    Traina, Caetano; Traina, Agma J M; Araújo, Myrian R B; Bueno, Josiane M; Chino, Fabio J T; Razente, Humberto; Azevedo-Marques, Paulo M

    2005-12-01

    This paper presents a new Picture Archiving and Communication System (PACS), called cbPACS, which has content-based image retrieval capabilities. The cbPACS answers range and k-nearest-neighbor similarity queries, employing a relational database manager extended to support images. Images are compared through their features, which are extracted by an image-processing module and stored in the extended relational database. The database extensions were developed to answer similarity queries efficiently by taking advantage of specialized indexing methods. The main concept supporting the extensions is the definition, inside the relational manager, of distance functions based on the features extracted from the images. An extension to the SQL language enables the construction of an interpreter that intercepts the extended commands and translates them into standard SQL, allowing any relational database server to be used. At present, the system works on features based on the color distribution of the images, through normalized histograms as well as metric histograms. Metric histograms are invariant with regard to scale, translation and rotation of images, and also to brightness transformations. The cbPACS is prepared to integrate new image features based on texture and on the shape of the main objects in the image.
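
    The key idea above, defining distance functions inside the relational manager so that similarity queries can be phrased in SQL, can be imitated in miniature: SQLite lets an application register a scalar function and then rank rows by it. The sketch below uses an L1 histogram distance and JSON-encoded features as simplifying assumptions; it is not the cbPACS implementation, which relies on specialized indexing methods rather than a full scan.

      import json
      import sqlite3

      def histogram_dist(h1_json, h2_json):
          """L1 distance between two normalized color histograms."""
          h1, h2 = json.loads(h1_json), json.loads(h2_json)
          return sum(abs(a - b) for a, b in zip(h1, h2))

      db = sqlite3.connect(":memory:")
      db.create_function("histogram_dist", 2, histogram_dist)
      db.execute("CREATE TABLE image (id TEXT, histogram TEXT)")
      db.executemany("INSERT INTO image VALUES (?, ?)", [
          ("mri_001", json.dumps([0.5, 0.3, 0.2])),
          ("mri_002", json.dumps([0.1, 0.1, 0.8])),
      ])

      # k-nearest-neighbor similarity query (k = 1) against a query histogram:
      query = json.dumps([0.45, 0.35, 0.2])
      print(db.execute(
          "SELECT id, histogram_dist(histogram, ?) AS d "
          "FROM image ORDER BY d LIMIT 1", (query,)).fetchall())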

  15. Clinical Views: Object-Oriented Views for Clinical Databases

    PubMed Central

    Portoni, Luisa; Combi, Carlo; Pinciroli, Francesco

    1998-01-01

    We present here a prototype of a clinical information system for the archiving and the management of multimedia and temporally-oriented clinical data related to PTCA patients. The system is based on an object-oriented DBMS and supports multiple views and view schemas on patients' data. Remote data access is supported too.

  16. Insertion algorithms for network model database management systems

    NASA Astrophysics Data System (ADS)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for management algorithms is to minimize the number of query comparisons. We consider the updating operation for network-model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.

  17. Integrated Space Asset Management Database and Modeling

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd; Gagliano, Larry; Percy, Thomas; Mason, Shane

    2015-01-01

    Effective space asset management is one key to addressing the ever-growing issue of space congestion. It is imperative that agencies around the world have access to data regarding the numerous active assets and pieces of space junk currently tracked in orbit around the Earth. At the center of this issue is the effective management of data of many types related to orbiting objects. As the population of tracked objects grows, so too should the data management structure used to catalog technical specifications, orbital information, and metadata related to those populations. Marshall Space Flight Center's Space Asset Management Database (SAM-D) was implemented in order to effectively catalog a broad set of data related to known objects in space by ingesting information from a variety of databases and processing that data into useful technical information. Using the universal NORAD number as a unique identifier, SAM-D processes two-line element data into orbital characteristics and cross-references this technical data with metadata related to functional status, country of ownership, and application category. SAM-D began as an Excel spreadsheet and was later upgraded to an Access database. While SAM-D performs its task very well, it is limited by its current platform and is not available outside of the local user base. Further, while modeling and simulation (M&S) can be powerful tools to exploit the information contained in SAM-D, the current system does not offer proper integration options for combining the data with both legacy and new M&S tools. This paper provides a summary of SAM-D development efforts to date and outlines a proposed data management infrastructure that extends SAM-D to support the larger data sets to be generated. A service-oriented architecture model using an information-sharing platform named SIMON will allow it to expand easily to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interfaces for visualization. In addition, tight control of the information-sharing policy will increase confidence in the system, which should encourage industry partners to provide commercial data. Combined with the integration of new and legacy M&S tools, a SIMON-based architecture will provide a robust environment that can be extended and expanded indefinitely.
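
    The step described above, processing two-line element (TLE) data into orbital characteristics, is easy to make concrete: line 2 of a TLE carries the mean motion in revolutions per day (columns 53-63), from which the orbital period and, via Kepler's third law, the semi-major axis follow. The sketch below is a minimal Python illustration, not SAM-D code; the element set shown is fabricated with ISS-like values, not real catalog data.

      import math

      MU_EARTH = 398600.4418  # Earth's gravitational parameter, km^3 / s^2

      def orbit_from_tle_line2(line2: str):
          """Derive period and semi-major axis from the mean-motion field
          of TLE line 2 (columns 53-63, 1-indexed)."""
          mean_motion = float(line2[52:63])   # revolutions per day
          period_s = 86400.0 / mean_motion
          semi_major_km = (MU_EARTH * (period_s / (2 * math.pi)) ** 2) ** (1 / 3)
          return period_s / 60.0, semi_major_km

      # Fabricated, ISS-like element set for illustration only:
      line2 = "2 99999  51.6416 247.4627 0006703 130.5360 325.0288 15.50103472 20195"
      period_min, a_km = orbit_from_tle_line2(line2)
      print(f"period = {period_min:.1f} min, semi-major axis = {a_km:.0f} km")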

  18. Rapid development of entity-based data models for bioinformatics with persistence object-oriented design and structured interfaces.

    PubMed

    Ezra Tsur, Elishai

    2017-01-01

    Databases are imperative for research in bioinformatics and computational biology. Current challenges in database design include data heterogeneity and context-dependent interconnections between data entities. These challenges drove the development of unified data interfaces and specialized databases. The curation of specialized databases is an ever-growing challenge due to the introduction of new data sources and the emergence of new relational connections between established datasets. Here, an open-source framework for the curation of specialized databases is proposed. The framework supports user-designed models of data encapsulation, object persistency and structured interfaces to local and external data sources such as MalaCards, BioModels and the National Center for Biotechnology Information (NCBI) databases. The proposed framework was implemented using Java as the development environment, EclipseLink as the data persistency agent and Apache Derby as the database manager. Syntactic analysis was based on the J3D, jsoup, Apache Commons and w3c.dom open libraries. Finally, the construction of a specialized database for aneurysm-associated vascular diseases is demonstrated. This database contains three-dimensional geometries of aneurysms, patients' clinical information, articles, biological models, related diseases and our recently published model of aneurysms' risk of rupture. The framework is available at http://nbel-lab.com.

  19. Information Management of Web Application Based Environmental Performance Management in Concentrating Division of PTFI

    NASA Astrophysics Data System (ADS)

    Susanto, Arif; Mulyono, Nur Budi

    2018-02-01

    The change of the environmental management system standard to its latest version, ISO 14001:2015, may alter the data and information needed for decision making and for achieving objectives across the organization. Information management is the organization's responsibility for ensuring effectiveness and efficiency from the creation, storage and processing of information through to its distribution, in support of operations and effective decision making in environmental performance management. The objective of this research was to set up an information management program, adopting technology as its supporting component, within the PTFI Concentrating Division, so that the program would be in line with the organization's objectives for environmental management under the ISO 14001:2015 standard. The materials and methods cover the technical aspects of information management, namely web-based application development using usage-centered design. The results show that the use of Single Sign On makes it easy for users to interact further with the environmental management system. The web-based application was developed by creating an entity-relationship diagram (ERD) and performing information extraction focused on attributes, keys and the determination of constraints; the ERD was derived from the relational schemas of several environmental performance databases in the Concentrating Division.

  20. Design and Implementation of CNEOST Image Database Based on NoSQL System

    NASA Astrophysics Data System (ADS)

    Wang, X.

    2013-07-01

    The China Near Earth Object Survey Telescope (CNEOST) is the largest Schmidt telescope in China, and it has acquired more than 3 TB of astronomical image data since it saw first light in 2006. After the upgrade of the CCD camera in 2013, over 10 TB of data will be obtained every year. The management of massive numbers of images is not only an indispensable part of the data processing pipeline but also the basis of data sharing. Based on an analysis of the requirements, an image management system was designed and implemented employing a non-relational database.

  1. Design and Implementation of CNEOST Image Database Based on NoSQL System

    NASA Astrophysics Data System (ADS)

    Wang, Xin

    2014-04-01

    The China Near Earth Object Survey Telescope is the largest Schmidt telescope in China, and it has acquired more than 3 TB of astronomical image data since it saw first light in 2006. After the upgrade of the CCD camera in 2013, over 10 TB of data will be obtained every year. The management of the massive image archive is not only an indispensable part of the data processing pipeline but also the basis of data sharing. Based on an analysis of the requirements, an image management system was designed and implemented employing a non-relational database.

  2. Short Fiction on Film: A Relational DataBase.

    ERIC Educational Resources Information Center

    May, Charles

    Short Fiction on Film is a database that was created and will run on DataRelator, a relational database manager created by Bill Finzer for the California State Department of Education in 1986. DataRelator was designed for use in teaching students database management skills and to provide teachers with examples of how a database manager might be…

  3. Data management with a landslide inventory of the Franconian Alb (Germany) using a spatial database and GIS tools

    NASA Astrophysics Data System (ADS)

    Bemm, Stefan; Sandmeier, Christine; Wilde, Martina; Jaeger, Daniel; Schwindt, Daniel; Terhorst, Birgit

    2014-05-01

    The area of the Swabian-Franconian cuesta landscape (southern Germany) is highly prone to landslides. This became apparent in the late spring of 2013, when numerous landslides occurred as a consequence of heavy, long-lasting rainfall. The specific climatic situation caused numerous damages with serious impact on settlements and infrastructure. Knowledge of the spatial distribution, processes and characteristics of landslides is important to evaluate the potential risk of mass movements in these areas. In the frame of two projects, about 400 landslides were mapped and detailed data sets were compiled between 2011 and 2014 on the Franconian Alb. The studies are related to the project "Slope stability and hazard zones in the northern Bavarian cuesta" (DFG, German Research Foundation) as well as to the project "Georisks and climate change - hazard indication map Jura" of the LfU (Bavarian Environment Agency). The central goal of the present study is to create a spatial database for landslides. The database should contain all fundamental parameters characterizing the mass movements and should provide secure data storage and data management, as well as statistical evaluations. The spatial database was created with PostgreSQL, an object-relational database management system, and PostGIS, a spatial database extender for PostgreSQL that provides the ability to store spatial and geographic objects and to connect to several GIS applications, such as GRASS GIS, SAGA GIS, QGIS and the geospatial library GDAL (Obe and Hsu 2011). Database access for querying, importing and exporting spatial and non-spatial data is ensured through GUI or non-GUI connections. The database allows the use of procedural languages for writing advanced functions in R, Python or Perl, and it is possible to work directly with the spatial data of the database in R. The inventory of the database includes, among other things, information on location, landslide types and causes, geomorphological positions, geometries, hazards and damages, as well as assessments of landslide activity. It also stores spatial objects that represent the components of a landslide, in particular the scarps and the accumulation areas. In addition, waterways, map sheets, contour lines, detailed infrastructure data, digital elevation models, and aspect and slope data are included. Examples of spatial queries to the database are intersections of raster and vector data for calculating slope gradients or aspects of landslide areas; the creation of multiple overlapping sections for the comparison of slopes; distances to infrastructure or to the next receiving drainage; and queries on landslide magnitudes, distribution and clustering, as well as on potential correlations with geomorphological or geological conditions. The data management concept of this study can be implemented for any academic, public or private use, because it is independent of any obligatory licenses. The spatial database offers a platform for interdisciplinary research and socio-economic questions, as well as for landslide susceptibility and hazard indication mapping. Obe, R.O., Hsu, L.S., 2011. PostGIS in Action. Manning Publications, Stamford, 492 pp.
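
    To make the query examples above concrete, the sketch below issues one of the described spatial queries through Python and psycopg2 against a PostGIS-enabled PostgreSQL instance: select the landslides intersecting a given map sheet and compute each one's distance to the nearest receiving drainage. Table, column and sheet names are assumptions; the actual schema of the Franconian Alb inventory is not given in the abstract.

      import psycopg2  # assumes a PostGIS-enabled PostgreSQL instance

      # Illustrative schema; geometries are assumed to be in a projected
      # CRS so that ST_Distance returns metres.
      conn = psycopg2.connect("dbname=landslides user=gis")
      cur = conn.cursor()
      cur.execute("""
          SELECT l.id, l.landslide_type,
                 (SELECT MIN(ST_Distance(l.geom, w.geom))
                  FROM waterway w) AS drainage_dist_m
          FROM   landslide l
          JOIN   map_sheet m ON ST_Intersects(l.geom, m.geom)
          WHERE  m.name = %s
      """, ("Ebermannstadt",))   # hypothetical sheet name
      for landslide_id, ls_type, dist_m in cur.fetchall():
          print(landslide_id, ls_type, dist_m)
      cur.close()
      conn.close()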

  4. [The design and implementation of the web typical surface object spectral information system in arid areas based on .NET and SuperMap].

    PubMed

    Xia, Jun; Tashpolat, Tiyip; Zhang, Fei; Ji, Hong-jiang

    2011-07-01

    The spectral characteristics of surface objects are not only the basis of quantitative remote sensing analysis but also a main subject of basic remote sensing research. A spectral database of typical surface objects in arid-area oases is of great significance for applied remote sensing research on soil salinization. In the present paper, the authors took the Ugan-Kuqa River Delta Oasis as an example, combined .NET and the SuperMap platform with data stored in an SQL Server database, and used the B/S pattern and the C# language to design and develop a typical surface object spectral information system, establishing a spectral database of typical surface objects according to the characteristics of arid-area oases. The system implements classified storage and management of the typical surface object spectral information and the related attribute data of the study areas; it also implements visual two-way queries between maps and attribute data, the drawing of surface object spectral response curves, and the processing and plotting of derivative spectral data. In addition, the system possesses simple spectral data mining and analysis capabilities, providing an efficient, reliable and convenient data management and application platform for follow-up studies of soil salinization in the Ugan-Kuqa River Delta Oasis. Finally, the system is easy to maintain, convenient for secondary development, and operates well in practice.

  5. Respiratory cancer database: An open access database of respiratory cancer gene and miRNA.

    PubMed

    Choubey, Jyotsna; Choudhari, Jyoti Kant; Patel, Ashish; Verma, Mukesh Kumar

    2017-01-01

    The Respiratory Cancer Database (RespCanDB) is a genomic and proteomic database of cancers of the respiratory organs. It also includes information on medicinal plants used for the treatment of various respiratory cancers, with the structures of their active constituents, as well as pharmacological and chemical information on drugs associated with various respiratory cancers. Data in RespCanDB have been manually collected from published research articles and from other databases, and integrated using MySQL, a relational database management system. MySQL manages all data in the back end and provides commands to retrieve and store the data in the database. The web interface of the database has been built in ASP. RespCanDB is expected to contribute to the scientific community's understanding of respiratory cancer biology, as well as to the development of new ways of diagnosing and treating respiratory cancer. Currently, the database contains oncogenomic information on lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.

  6. Integration and management of massive remote-sensing data based on GeoSOT subdivision model

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Cheng, Chengqi; Chen, Bo; Meng, Li

    2016-07-01

    Owing to the rapid development of earth observation technology, the volume of spatial information is growing rapidly; improving query and retrieval speed from large, rich data sources is therefore quite urgent for remote-sensing data management systems. A global subdivision model, the geographic coordinate subdivision grid with one-dimension integer coding on 2n-tree (GeoSOT), which we propose as a solution, has been used by data management organizations. However, because a spatial object may cover several grid cells, considerable data redundancy occurs when the data are stored in relational databases. To solve this redundancy problem, we combined the subdivision model with a spatial array database containing an inverted index, and we propose an improved approach for integrating and managing massive remote-sensing data: by adding a spatial code column in array format to a database table, the spatial information in remote-sensing metadata can be stored and logically subdivided. We implemented our method on a Kingbase Enterprise Server database system and compared the results with the Oracle platform by simulating worldwide image data. Experimental results showed that our approach performed better than Oracle in terms of data integration and time and space efficiency. Our approach also offers an efficient storage management scheme for existing storage centers and management systems.
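
    The storage idea described above, a spatial code column in array format backed by an inverted index, maps naturally onto PostgreSQL's array type with a GIN index; a sketch through psycopg2 follows (KingbaseES, used in the paper, is PostgreSQL-compatible). The grid-code values, table names and connection string are invented for illustration; real GeoSOT codes are derived from the 2n-tree subdivision, which is not reproduced here.

      import psycopg2  # assumes a PostgreSQL (or compatible) server

      conn = psycopg2.connect("dbname=rs_archive user=rs")
      cur = conn.cursor()
      # One array column holds the grid code of every cell a scene covers;
      # a GIN (inverted) index makes overlap queries cheap.
      cur.execute("""
          CREATE TABLE IF NOT EXISTS scene (
              id     serial PRIMARY KEY,
              name   text,
              geosot bigint[]          -- one code per covered grid cell
          );
          CREATE INDEX IF NOT EXISTS scene_geosot_gin
              ON scene USING gin (geosot);
      """)
      cur.execute("INSERT INTO scene (name, geosot) VALUES (%s, %s)",
                  ("LANDSAT_scene_042", [7021300, 7021301, 7021310]))

      # Find every scene whose cell set overlaps the query region's cells:
      cur.execute("SELECT name FROM scene WHERE geosot && %s::bigint[]",
                  ([7021301, 7021444],))
      print(cur.fetchall())
      conn.commit()
      cur.close()
      conn.close()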

  7. Distributed Access View Integrated Database (DAVID) system

    NASA Technical Reports Server (NTRS)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for its Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described. The 'books' and 'kits' level is discussed. The Universal Object Typer Management System level is described. The relation of the DAVID project to the Small Business Innovative Research (SBIR) program is explained.

  8. Applying AN Object-Oriented Database Model to a Scientific Database Problem: Managing Experimental Data at Cebaf.

    NASA Astrophysics Data System (ADS)

    Ehlmann, Bryon K.

    Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.

  9. Health and productivity as a business strategy.

    PubMed

    Loeppke, Ronald; Taitel, Michael; Richling, Dennis; Parry, Thomas; Kessler, Ronald C; Hymel, Pam; Konicki, Doris

    2007-07-01

    The objective of this study is to assess the magnitude of health-related lost productivity relative to medical and pharmacy costs for four employers, and to assess the business implications of a "full-cost" approach to managing health. A database was developed by integrating medical and pharmacy claims data with employee self-reported productivity and health information collected through the Health and Work Performance Questionnaire (HPQ). Information collected on employer business measures was combined with this database to model health-related lost productivity. The study found that 1) health-related productivity costs were more than four times greater than medical and pharmacy costs, and 2) the full cost of poor health is driven by different health conditions than those driving medical and pharmacy costs alone. This study demonstrates that Integrated Population Health & Productivity Management should be built on a foundation of Integrated Population Health & Productivity Measurement. By measuring the full health and productivity costs related to the burdens of illness and health risk in their populations, employers can reveal a blueprint for action for their integrated health and productivity enhancement strategies.

  10. Geospatial Database for Strata Objects Based on Land Administration Domain Model (ladm)

    NASA Astrophysics Data System (ADS)

    Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul

    2016-09-01

    Recently in our country, the construction of buildings has become more complex, and a strata-objects database is becoming more important for registering the real world, as people now own and use multiple levels of space. Strata titles are likewise increasingly important and need to be well managed. LADM (Land Administration Domain Model), also known as ISO 19152, is a standard model for land administration that allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata-objects database using LADM. The paper discusses the current 2D geospatial database and the future need for a 3D geospatial database, develops a strata-objects database using the LADM standard data model, and analyzes the result. The current cadastre system in Malaysia, including strata titles, is discussed; the problems of the 2D geospatial database are listed; and the future need for a 3D geospatial database is considered. The database is designed through the usual conceptual, logical, and physical stages. The strata-objects database allows both non-spatial and spatial strata-title information to be retrieved, thus showing the location of each strata unit. This development may help in handling strata titles and their associated information.

  11. Impact of data base structure in a successful in vitro-in vivo correlation for pharmaceutical products.

    PubMed

    Roudier, B; Davit, B; Schütz, H; Cardot, J-M

    2015-01-01

    The in vitro-in vivo correlation (IVIVC) (Food and Drug Administration 1997) aims to predict the in vivo performance of a pharmaceutical formulation from its in vitro characteristics. It is a complex process that (i) incorporates, in a gradual and incremental way, a large amount of information and (ii) requires information on different properties (formulation, analytical, clinical) and the associated dedicated treatments (statistics, modeling, simulation). This results in many studies that are initiated and integrated into the specifications (quality target product profile, QTPP). The QTPP defines the appropriate experimental designs (quality by design, QbD) (Food and Drug Administration 2011, 2012), whose main objectives are the determination (i) of the key factors of development and manufacturing (critical process parameters, CPPs) and (ii) of critical points of a physicochemical nature relating to the active pharmaceutical ingredients (APIs) and critical quality attributes (CQAs), which may have implications for efficacy and safety for the patient if they are not taken into account. These processes generate a very large amount of data that it is necessary to structure. In this context, the storage of information in a database (DB) and the management of this database (database management system, DBMS) become important issues for the management of IVIVC projects and, more generally, for the development of new pharmaceutical forms. This article describes the implementation of a prototype object-oriented database (OODB), intended as a decision-support tool that responds in a structured and consistent way to the project-management issues of IVIVC (including bioequivalence and bioavailability) (Food and Drug Administration 2003) necessary for the implementation of the QTPP.

  12. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    ERIC Educational Resources Information Center

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  13. Informatics in radiology: use of CouchDB for document-based storage of DICOM objects.

    PubMed

    Rascovsky, Simón J; Delgado, Jorge A; Sanz, Alexander; Calvo, Víctor D; Castrillón, Gabriel

    2012-01-01

    Picture archiving and communication systems traditionally have depended on schema-based Structured Query Language (SQL) databases for imaging data management. To optimize database size and performance, many such systems store a reduced set of Digital Imaging and Communications in Medicine (DICOM) metadata, discarding informational content that might be needed in the future. As an alternative to traditional database systems, document-based key-value stores recently have gained popularity. These systems store documents containing key-value pairs that facilitate data searches without predefined schemas. Document-based key-value stores are especially suited to archive DICOM objects because DICOM metadata are highly heterogeneous collections of tag-value pairs conveying specific information about imaging modalities, acquisition protocols, and vendor-supported postprocessing options. The authors used an open-source document-based database management system (Apache CouchDB) to create and test two such databases; CouchDB was selected for its overall ease of use, capability for managing attachments, and reliance on HTTP and Representational State Transfer standards for accessing and retrieving data. A large database was created first in which the DICOM metadata from 5880 anonymized magnetic resonance imaging studies (1,949,753 images) were loaded by using a Ruby script. To provide the usual DICOM query functionality, several predefined "views" (standard queries) were created by using JavaScript. For performance comparison, the same queries were executed in both the CouchDB database and a SQL-based DICOM archive. The capabilities of CouchDB for attachment management and database replication were separately assessed in tests of a similar, smaller database. Results showed that CouchDB allowed efficient storage and interrogation of all DICOM objects; with the use of information retrieval algorithms such as map-reduce, all the DICOM metadata stored in the large database were searchable with only a minimal increase in retrieval time over that with the traditional database management system. Results also indicated possible uses for document-based databases in data mining applications such as dose monitoring, quality assurance, and protocol optimization. RSNA, 2012
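
    The pattern the authors describe, DICOM metadata stored as documents and queried through predefined JavaScript views, can be sketched against CouchDB's standard HTTP API as follows; the server URL, database name, and choice of tags are assumptions for illustration (authentication is omitted).

        # Sketch: DICOM tag-value pairs as CouchDB documents plus one "view".
        import json
        import requests

        BASE = "http://localhost:5984"
        DB = "dicom_archive"

        requests.put(f"{BASE}/{DB}")  # create the database (demo only)

        # Each DICOM object becomes one document of heterogeneous tag-value pairs.
        doc = {"SOPInstanceUID": "1.2.840...", "Modality": "MR", "StudyDate": "20120101"}
        requests.post(f"{BASE}/{DB}", json=doc)

        # A design document holding a map function: the predefined view that
        # stands in for an SQL query such as SELECT ... WHERE Modality = 'MR'.
        design = {
            "views": {
                "by_modality": {
                    "map": "function(doc) { if (doc.Modality) "
                           "emit(doc.Modality, doc.SOPInstanceUID); }"
                }
            }
        }
        requests.put(f"{BASE}/{DB}/_design/dicom", json=design)

        # Query the view; CouchDB answers from the view's B-tree index.
        r = requests.get(f"{BASE}/{DB}/_design/dicom/_view/by_modality",
                         params={"key": json.dumps("MR")})
        print(r.json())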

  14. Solutions in radiology services management: a literature review*

    PubMed Central

    Pereira, Aline Garcia; Vergara, Lizandra Garcia Lupi; Merino, Eugenio Andrés Díaz; Wagner, Adriano

    2015-01-01

    Objective The present study was aimed at reviewing the literature to identify solutions for problems observed in radiology services. Materials and Methods Basic, qualitative, exploratory literature review of the Scopus and SciELO databases, utilizing the Mendeley and Adobe Illustrator CC software. Results In the databases, 565 papers were identified, 120 of them available as free PDFs. Problems observed in the radiology sector are related to procedure scheduling, humanization, lack of training, poor knowledge and use of management techniques, and interaction with users. Design management provides such services with useful solutions, such as benchmarking, CRM, the lean approach, service blueprinting, and continuing education, among others. Conclusion Literature review is an important tool for identifying problems and their respective solutions. However, considering the small number of studies addressing the management of radiology services, this remains a promising field for deeper research. PMID:26543281

  15. Database & information tools for transportation research management : Connecticut transportation research peer exchange report of a thematic peer exchange.

    DOT National Transportation Integrated Search

    2006-05-01

    Specific objectives of the Peer Exchange were: : Discuss and exchange information about databases and other software : used to support the program-cycles managed by state transportation : research offices. Elements of the program cycle include: :...

  16. Databases for LDEF results

    NASA Technical Reports Server (NTRS)

    Bohnhoff-Hlavacek, Gail

    1992-01-01

    One of the objectives of the team supporting the LDEF Systems and Materials Special Investigative Groups is to develop databases of experimental findings. These databases identify the hardware flown, summarize results and conclusions, and provide a system for acknowledging investigators, tracing sources of data, and recording future design suggestions. To date, databases covering the optical experiments and thermal control materials (chromic acid anodized aluminum, silverized Teflon blankets, and paints) have been developed at Boeing. We used the FileMaker Pro software, the database manager for the Macintosh computer produced by the Claris Corporation. It is a flat, text-retrievable database that provides access to the data via an intuitive user interface, without tedious programming. Though this software is available only for the Macintosh computer at this time, copies of the databases can be saved to a format that is readable on a personal computer as well, and the data can be exported to more powerful relational databases. This report describes the development, capabilities, and use of the LDEF databases and explains how to get copies of the databases for your own research.

  17. Dynamic taxonomies applied to a web-based relational database for geo-hydrological risk mitigation

    NASA Astrophysics Data System (ADS)

    Sacco, G. M.; Nigrelli, G.; Bosio, A.; Chiarle, M.; Luino, F.

    2012-02-01

    In its 40 years of activity, the Research Institute for Geo-hydrological Protection of the Italian National Research Council has amassed a vast and varied collection of historical documentation on landslides, muddy-debris flows, and floods in northern Italy from 1600 to the present. Since 2008, the archive resources have been maintained through a relational database management system. The database is used for routine study and research purposes as well as for providing support during geo-hydrological emergencies, when data need to be quickly and accurately retrieved. Retrieval speed and accuracy are the main objectives of an implementation based on a dynamic taxonomies model. Dynamic taxonomies are a general knowledge management model for configuring complex, heterogeneous information bases that support exploratory searching. At each stage of the process, the user can explore or browse the database in a guided yet unconstrained way by selecting the alternatives suggested for further refining the search. Dynamic taxonomies have been successfully applied to such diverse and apparently unrelated domains as e-commerce and medical diagnosis. Here, we describe the application of dynamic taxonomies to our database and compare it to traditional relational database query methods. The dynamic taxonomy interface, essentially a point-and-click interface, is considerably faster and less error-prone than traditional form-based query interfaces that require the user to remember and type in the "right" search keywords. Finally, dynamic taxonomy users have confirmed that one of the principal benefits of this approach is the confidence of having considered all the relevant information. Dynamic taxonomies and relational databases work in synergy to provide fast and precise searching: one of the most important factors in timely response to emergencies.
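
    The guided-refinement interaction described above can be sketched in a few lines of Python: after each selection, the system recomputes how many documents would survive under every remaining facet value, so the user is only offered refinements that lead somewhere. The facet names and records below are illustrative, not the Institute's archive schema.

        # Sketch of faceted (dynamic-taxonomy) browsing over event records.
        from collections import Counter

        events = [
            {"type": "flood",     "region": "Piedmont", "century": "20th"},
            {"type": "landslide", "region": "Piedmont", "century": "19th"},
            {"type": "flood",     "region": "Liguria",  "century": "20th"},
        ]

        def refine(docs, **selected):
            """Keep documents matching all selected facet values."""
            return [d for d in docs if all(d[f] == v for f, v in selected.items())]

        def facet_counts(docs, facet):
            """Counts shown next to each facet value (the 'guided' part)."""
            return Counter(d[facet] for d in docs)

        current = refine(events, type="flood")  # user clicks 'flood'
        print(facet_counts(current, "region"))  # Counter({'Piedmont': 1, 'Liguria': 1})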

  18. The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity

    NASA Astrophysics Data System (ADS)

    Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Favaro, Matteo; Gregori, Daniele; Prosperini, Andrea; Pezzi, Michele; Sapunenko, Vladimir; Zizzi, Giovanni; Vagnoni, Vincenzo

    2015-05-01

    The consolidation of Mass Storage services at the INFN-CNAF Tier1 Storage department over the last 5 years has resulted in a reliable, high-performance, and moderately easy-to-manage facility that provides data access, archive, backup, and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based on an integration between the IBM GPFS parallel filesystem and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided by XRootD and HTTP/WebDAV endpoints. Besides these services, an Oracle database facility is in production, characterized by an effective level of parallelism, redundancy, and availability. This facility runs databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, such as Real Application Clusters (RAC), Automatic Storage Management (ASM), and the Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management, and downtime reduction. The aim of the present paper is to illustrate the state of the art of the INFN-CNAF Tier1 Storage department infrastructure and software services, and to give a brief outlook on forthcoming projects. A description of the administrative, monitoring, and problem-tracking tools that play a primary role in managing the whole storage framework is also given.

  19. A Method to Calculate and Analyze Residents' Evaluations by Using a Microcomputer Data-Base Management System.

    ERIC Educational Resources Information Center

    Mills, Myron L.

    1988-01-01

    A system developed for more efficient evaluation of graduate medical students' progress uses numerical scoring and a microcomputer database management system as an alternative to manual methods to produce accurate, objective, and meaningful summaries of resident evaluations. (Author/MSE)

  20. Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.

    ERIC Educational Resources Information Center

    Gutmann, Myron P.; And Others

    1989-01-01

    Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programming tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)

  1. Toward Soil Spatial Information Systems (SSIS) for global modeling and ecosystem management

    NASA Technical Reports Server (NTRS)

    Baumgardner, Marion F.

    1995-01-01

    The general objective is to conduct research contributing toward the realization of a world soils and terrain (SOTER) database, which can stand alone or be incorporated into a more complete and comprehensive natural resources digital information system. The work focuses on the following specific objectives: (1) to conduct research related to (a) translation and correlation of different soil classification systems to the SOTER database legend and (b) the interfacing of disparate data sets in support of the SOTER Project; (2) to examine the potential use of AVHRR (Advanced Very High Resolution Radiometer) data for delineating meaningful soils and terrain boundaries for small-scale soil survey (range of scale: 1:250,000 to 1:1,000,000) and terrestrial ecosystem assessment and monitoring; and (3) to determine the potential use of high-dimensional spectral data (220 reflectance bands with 10 m spatial resolution) for delineating meaningful soils boundaries and conditions for the purpose of detailed soil survey and land management.

  2. Managing Objects in a Relational Framework

    DTIC Science & Technology

    1989-01-01

    Database Week, San Jose, CA, May 1983, pp. 107-113. [Stonebraker 85] Stonebraker, M. and Rowe, L.: "The Design of POSTGRES," Tech. Report, UC Berkeley, Nov. ... the latter is equivalent to the definition of an attribute in a POSTGRES relation using the generic Quel facility. Recently, recursive query languages have ... utilize rewrite rules. OSQL [Lynl 88] provides a language for associative access. 2. The POSTGRES model [Sto 86] allows Quel and C-procedures as the

  3. [The future of clinical laboratory database management system].

    PubMed

    Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y

    1999-09-01

    To assess the present status of the clinical laboratory database management system, the difference between the Clinical Laboratory Information System and Clinical Laboratory System was explained in this study. Although three kinds of database management systems (DBMS) were shown including the relational model, tree model and network model, the relational model was found to be the best DBMS for the clinical laboratory database based on our experience and developments of some clinical laboratory expert systems. As a future clinical laboratory database management system, the IC card system connected to an automatic chemical analyzer was proposed for personal health data management and a microscope/video system was proposed for dynamic data management of leukocytes or bacteria.

  4. Architecture for biomedical multimedia information delivery on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.

    1997-10-01

    Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extendibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version, also a Java applet, has now been built using the MySQL database manager.

  5. Organization of Heterogeneous Scientific Data Using the EAV/CR Representation

    PubMed Central

    Nadkarni, Prakash M.; Marenco, Luis; Chen, Roland; Skoufos, Emmanouil; Shepherd, Gordon; Miller, Perry

    1999-01-01

    Entity-attribute-value (EAV) representation is a means of organizing highly heterogeneous data using a relatively simple physical database schema. EAV representation is widely used in the medical domain, most notably in the storage of data related to clinical patient records. Its potential strengths suggest its use in other biomedical areas, in particular research databases whose schemas are complex as well as constantly changing to reflect evolving knowledge in rapidly advancing scientific domains. When deployed for such purposes, the basic EAV representation needs to be augmented significantly to handle the modeling of complex objects (classes) as well as to manage interobject relationships. The authors refer to their modification of the basic EAV paradigm as EAV/CR (EAV with classes and relationships). They describe EAV/CR representation with examples from two biomedical databases that use it. PMID:10579606
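
    A minimal sketch of the basic EAV layout that EAV/CR extends might look as follows in Python with SQLite (the class and relationship tables that EAV/CR adds are omitted, and the attribute names are illustrative).

        # Sketch: one narrow table holds arbitrary attributes, so adding a new
        # attribute needs no schema change, only a new row.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE entity (id INTEGER PRIMARY KEY, class TEXT);
            CREATE TABLE eav (
                entity_id INTEGER REFERENCES entity(id),
                attribute TEXT,
                value     TEXT
            );
        """)
        db.execute("INSERT INTO entity VALUES (1, 'neuron')")
        db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
            (1, "cell_type", "mitral"),
            (1, "region", "olfactory bulb"),  # new attributes appear as new rows
        ])

        # Pivot the attributes back into a conventional record for one entity.
        rows = db.execute("SELECT attribute, value FROM eav WHERE entity_id = 1")
        print(dict(rows))  # {'cell_type': 'mitral', 'region': 'olfactory bulb'}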

  6. Evaluation of relational and NoSQL database architectures to manage genomic annotations.

    PubMed

    Schulz, Wade L; Nelson, Brent G; Felker, Donn K; Durant, Thomas J S; Torres, Richard

    2016-12-01

    While the adoption of next generation sequencing has rapidly expanded, the informatics infrastructure used to manage the data generated by this technology has not kept pace. Historically, relational databases have provided much of the framework for data storage and retrieval. Newer technologies based on NoSQL architectures may provide significant advantages in storage and query efficiency, thereby reducing the cost of data management. But their relative advantage when applied to biomedical data sets, such as genetic data, has not been characterized. To this end, we compared the storage, indexing, and query efficiency of a common relational database (MySQL), a document-oriented NoSQL database (MongoDB), and a relational database with NoSQL support (PostgreSQL). When used to store genomic annotations from the dbSNP database, we found the NoSQL architectures to outperform traditional, relational models for speed of data storage, indexing, and query retrieval in nearly every operation. These findings strongly support the use of novel database technologies to improve the efficiency of data management within the biological sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
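
    The document-oriented side of this comparison can be sketched with pymongo as follows: one dbSNP-style annotation per document, with a compound index supporting the positional queries the paper benchmarks. The field names are illustrative, not the actual dbSNP schema.

        # Sketch: genomic annotations as MongoDB documents with a positional index.
        from pymongo import ASCENDING, MongoClient

        client = MongoClient("localhost", 27017)
        col = client.genomics.annotations

        col.insert_one({
            "rsid": "rs12345",
            "chrom": "1",
            "pos": 1014319,
            "ref": "G",
            "alt": "A",
        })

        # A compound index supports chromosome/position range queries efficiently.
        col.create_index([("chrom", ASCENDING), ("pos", ASCENDING)])

        for doc in col.find({"chrom": "1", "pos": {"$gte": 1_000_000, "$lt": 2_000_000}}):
            print(doc["rsid"], doc["pos"])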

  7. Managing Data in a GIS Environment

    NASA Technical Reports Server (NTRS)

    Beltran, Maria; Yiasemis, Haris

    1997-01-01

    A Geographic Information System (GIS) is a computer-based system that enables capture, modeling, manipulation, retrieval, analysis, and presentation of geographically referenced data. A GIS operates in a dynamic environment of spatial and temporal information. This information is held in a database like any other information system, but performance is more of an issue for a geographic database than for a traditional database because of the nature of the data. What distinguishes a GIS from other information systems is the spatial and temporal dimensions of the data and the volume of data (several gigabytes). Most traditional information systems are usually based around tables and textual reports, whereas a GIS requires the use of cartographic forms and other visualization techniques. Much of the data can be represented using computer graphics, but a GIS is not a graphics database: a graphical system is concerned with the manipulation and presentation of graphical objects, whereas a GIS handles geographic objects that have not only spatial dimensions but also non-visual attributes and components. Furthermore, the nature of the data on which a GIS operates makes the traditional relational database approach inadequate for retrieving data and answering queries that reference spatial data. The purpose of this paper is to describe the efficiency issues behind storage and retrieval of data within a GIS database. Section 2 gives a general background on GIS and describes the issues involved in custom vs. commercial and hybrid vs. integrated geographic information systems. Section 3 describes the efficiency issues concerning the management of data within a GIS environment. The paper ends with a summary of its main concerns.

  8. In-game Management of Common Joint Dislocations

    PubMed Central

    Skelley, Nathan W.; McCormick, Jeremy J.; Smith, Matthew V.

    2014-01-01

    Context: Sideline management of sports-related joint dislocations often places the treating medical professional in a challenging position. These injuries frequently require prompt evaluation, diagnosis, reduction, and postreduction management before they can be evaluated at a medical facility. Our objective is to review the mechanism, evaluation, reduction, and postreduction management of sports-related dislocations to the shoulder, elbow, finger, knee, patella, and ankle joints. Evidence Acquisition: A literature review was performed using the PubMed database to evaluate previous and current publications focused on joint dislocations. This review focused on articles published between 1980 and 2013. Study Design: Clinical review. Level of Evidence: Level 4. Results: The clinician should weigh the benefits and risks of on-field reduction based on their knowledge of the injury and the presence of associated injuries. Conclusion: When properly evaluated and diagnosed, most sports-related dislocations can be reduced and initially managed at the game. PMID:24790695

  9. Microcomputer Database Management Systems for Bibliographic Data.

    ERIC Educational Resources Information Center

    Pollard, Richard

    1986-01-01

    Discusses criteria for evaluating microcomputer database management systems (DBMS) used for storage and retrieval of bibliographic data. Two popular types of microcomputer DBMS--file management systems and relational database management systems--are evaluated with respect to these criteria. (Author/MBR)

  10. Searching Across the International Space Station Databases

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; McDermott, William J.; Smith, Ernest E.; Bell, David G.; Gurram, Mohana

    2007-01-01

    Data access in the enterprise generally requires combining data from different sources and different formats. It is thus advantageous to focus on the intersection of the knowledge across sources and domains; keeping irrelevant knowledge around only serves to make the integration more unwieldy and more complicated than necessary. A context search over multiple domains is proposed in this paper, using context-sensitive queries to support disciplined manipulation of domain knowledge resources. The objective of a context search is to provide the capability for interrogating many domain knowledge resources, which are largely semantically disjoint. The search formally supports the tasks of selecting, combining, extending, specializing, and modifying components from a diverse set of domains. This paper demonstrates a new paradigm in the composition of information for enterprise applications. In particular, it discusses an approach to achieving data integration across multiple sources in a manner that does not require heavy investment in database and middleware maintenance. This lean approach to integration leads to cost-effectiveness and scalability of data integration, with an underlying schemaless object-relational database management system. The resulting highly scalable, information-on-demand system framework, called NX-Search, is an implementation of an information system built on NETMARK. NETMARK is a flexible, high-throughput open database integration framework for managing, storing, and searching unstructured or semi-structured arbitrary XML and HTML, used widely at the National Aeronautics and Space Administration (NASA) and in industry.

  11. Application of new type of distributed multimedia databases to networked electronic museum

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have been actively developed, building on advanced high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system that can effectively perform cooperative retrieval among distributed databases. The proposed system introduces the concept of a 'retrieval manager', which functions as an intelligent controller so that the user can treat a set of distributed databases as one logical database. The logical database dynamically generates and executes a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, the concept of a 'domain' is defined in the system as a managing unit of retrieval; retrieval can be performed effectively through cooperative processing among multiple domains. A communication language and protocols are also defined and are used in every communication action in the system. A language interpreter in each machine translates the communication language into the internal language used by that machine; with this interpreter, internal modules such as the DBMS and user-interface modules can be selected freely. The concept of a 'content-set' is also introduced: a content-set is a package of mutually related contents, and the system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations among the contents in the content-set. To verify the functions of the proposed system, a networked electronic museum was experimentally built. The results of this experiment indicate that the proposed system can effectively retrieve the target contents under the control of a number of distributed domains, and that the system works effectively even as it grows large.

  12. Optics Toolbox: An Intelligent Relational Database System For Optical Designers

    NASA Astrophysics Data System (ADS)

    Weller, Scott W.; Hopkins, Robert E.

    1986-12-01

    Optical designers were among the first to use the computer as an engineering tool. Powerful programs have been written to do ray-trace analysis, third-order layout, and optimization. However, newer computing techniques such as database management and expert systems have not been adopted by the optical design community. For the purpose of this discussion we will define a relational database system as a database which allows the user to specify his requirements using logical relations. For example, to search for all lenses in a lens database with an F/number less than two and a half field of view near 28 degrees, you might enter the following:

        FNO < 2.0 and FOV of 28 degrees ± 5%

    Again for the purpose of this discussion, we will define an expert system as a program which contains expert knowledge, can ask intelligent questions, and can form conclusions based on the answers given and the knowledge which it contains. Most expert systems store this knowledge in the form of rules-of-thumb, which are written in an English-like language and which are easily modified by the user. An example rule is:

        IF require microscope objective in air and require NA > 0.9
        THEN suggest the use of an oil immersion objective

    The heart of the expert system is the rule interpreter, sometimes called an inference engine, which reads the rules and forms conclusions based on them. The use of a relational database system containing lens prototypes seems to be a viable prospect. However, it is not clear that expert systems have a place in optical design. In domains such as medical diagnosis and petrology, expert systems are flourishing. These domains are quite different from optical design, however, because optical design is a creative process, and the rules are difficult to write down. We do think that an expert system is feasible in the area of first-order layout, which is sufficiently diagnostic in nature to permit useful rules to be written. This first-order expert would emulate an expert designer as he interacted with a customer for the first time: asking the right questions, forming conclusions, and making suggestions. With these objectives in mind, we have developed the Optics Toolbox. Optics Toolbox is actually two programs in one: it is a powerful relational database system with twenty-one search parameters, four search modes, and multi-database support, as well as a first-order optical design expert system with a rule interpreter which has full access to the relational database. The system schematic is shown in Figure 1.
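
    The two mechanisms in the abstract, a tolerance search over lens parameters and a rule-of-thumb fired by a rule interpreter, can be sketched in Python as follows; the lens records and the single rule are illustrative, not the actual Optics Toolbox database or rule base.

        # Sketch: tolerance search plus a one-rule "inference engine".
        lenses = [
            {"name": "L1", "fno": 1.8, "fov_deg": 28.5},
            {"name": "L2", "fno": 2.4, "fov_deg": 28.0},
        ]

        def near(value, target, pct):
            """True when value lies within +/- pct percent of target."""
            return abs(value - target) <= target * pct / 100.0

        # FNO < 2.0 and FOV of 28 degrees +/- 5%
        hits = [l for l in lenses if l["fno"] < 2.0 and near(l["fov_deg"], 28, 5)]
        print([l["name"] for l in hits])  # ['L1']

        # One rule-of-thumb: condition predicate -> suggestion text.
        rules = [
            (lambda f: f.get("medium") == "air" and f.get("na", 0) > 0.9,
             "suggest the use of an oil immersion objective"),
        ]

        facts = {"medium": "air", "na": 0.95, "application": "microscope objective"}
        for condition, conclusion in rules:
            if condition(facts):
                print(conclusion)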

  13. The Data Base and Decision Making in Public Schools.

    ERIC Educational Resources Information Center

    Hedges, William D.

    1984-01-01

    Describes generic types of databases--file management systems, relational database management systems, and network/hierarchical database management systems--with their respective strengths and weaknesses; discusses factors to be considered in determining whether a database is desirable; and provides evaluative criteria for use in choosing…

  14. An Evaluation of Database Solutions to Spatial Object Association

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, V S; Kurc, T; Saltz, J

    2008-06-24

    Object association is a common problem encountered in many applications. Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two datasets based on their positions in a common spatial coordinate system; one of the datasets may correspond to a catalog of objects observed over time in a multi-dimensional domain, while the other may consist of objects observed in a snapshot of the domain at one time point. The use of database management systems to solve the object association problem provides portability across different platforms as well as greater flexibility. Increasing dataset sizes in today's applications, however, have made object association a data- and compute-intensive problem that requires targeted optimizations for efficient execution. In this work, we investigate how database-based crossmatch algorithms can be deployed on different database system architectures and evaluate the deployments to understand the impact of architectural choices on crossmatch performance and the associated trade-offs. We investigate the execution of two crossmatch algorithms on (1) a parallel database system with active-disk-style processing capabilities, (2) a high-throughput network database (MySQL Cluster), and (3) shared-nothing databases with replication. We have conducted our study in the context of a large-scale astronomy application with real use-case scenarios.
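
    For illustration, a crossmatch in this spirit can be sketched with the classic zone algorithm: bin each catalog by declination zone, join candidate pairs on zone, then filter by angular distance. The sketch below uses SQLite and a small-angle distance test purely to show the shape of the computation; the paper itself evaluates parallel, clustered, and replicated deployments instead.

        # Zone-based crossmatch sketch (illustrative data; the distance test
        # ignores the cos(dec) correction a production crossmatch needs).
        import math
        import sqlite3

        ZONE_H = 0.5  # zone height in degrees

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE cat_a (id INTEGER, ra REAL, dec REAL, zone INTEGER);
            CREATE TABLE cat_b (id INTEGER, ra REAL, dec REAL, zone INTEGER);
        """)
        for table, rows in [("cat_a", [(1, 10.001, 20.001)]),
                            ("cat_b", [(7, 10.002, 20.002)])]:
            db.executemany(
                f"INSERT INTO {table} VALUES (?, ?, ?, ?)",
                [(i, ra, dec, int(dec // ZONE_H)) for i, ra, dec in rows],
            )

        radius = 10.0 / 3600.0  # 10 arcsec match radius, in degrees
        pairs = db.execute("""
            SELECT a.id, b.id, a.ra - b.ra, a.dec - b.dec
            FROM cat_a a JOIN cat_b b
              ON b.zone BETWEEN a.zone - 1 AND a.zone + 1  -- cheap range join on zones
        """)
        matches = [(ia, ib) for ia, ib, dra, ddec in pairs
                   if math.hypot(dra, ddec) <= radius]  # exact test on candidates only
        print(matches)  # [(1, 7)]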

  15. A Relational/Object-Oriented Database Management System: R/OODBMS

    DTIC Science & Technology

    1992-09-01

    Concepts: In 1968, Dr. Edgar F. Codd had the idea that "predicate logic could be applied to maintaining the logical integrity of the data" in a DBMS [CD90, p. ...] ... Hall, Inc., Englewood Cliffs, NJ, 1990. [Co70] Codd, E. F., "A Relational Model of Data for Large Shared Data Banks," Communications of the ACM, v. 13, no. 6, pp. 377-387, Jun 1970. [CD90] Interview between E. F. Codd and DBMS, "Relational philosopher: the creator of the relational model talks about his

  16. Heterogeneous distributed query processing: The DAVID system

    NASA Technical Reports Server (NTRS)

    Jacobs, Barry E.

    1985-01-01

    The objective of the Distributed Access View Integrated Database (DAVID) project is the development of an easy-to-use computer system with which NASA scientists, engineers, and administrators can uniformly access distributed heterogeneous databases. Basically, DAVID will be a database management system that sits alongside already existing database and file management systems. Its function is to enable users to access the data in other languages and file systems without having to learn their data manipulation languages. Given here is an outline of a talk on the DAVID project, together with several charts.

  17. Technical Aspects of Interfacing MUMPS to an External SQL Relational Database Management System

    PubMed Central

    Kuzmak, Peter M.; Walters, Richard F.; Penrod, Gail

    1988-01-01

    This paper describes an interface connecting InterSystems MUMPS (M/VX) to an external relational DBMS, the SYBASE Database Management System. The interface enables MUMPS to operate in a relational environment and gives the MUMPS language full access to a complete set of SQL commands. MUMPS generates SQL statements as ASCII text and sends them to the RDBMS. The RDBMS executes the statements and returns ASCII results to MUMPS. The interface suggests that the language features of MUMPS make it an attractive tool for use in the relational database environment. The approach described in this paper separates MUMPS from the relational database. Positioning the relational database outside of MUMPS promotes data sharing and permits a number of different options to be used for working with the data. Other languages like C, FORTRAN, and COBOL can access the RDBMS database. Advanced tools provided by the relational database vendor can also be used. SYBASE is an advanced high-performance transaction-oriented relational database management system for the VAX/VMS and UNIX operating systems. SYBASE is designed using a distributed open-systems architecture, and is relatively easy to interface with MUMPS.
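
    The coupling pattern described here, SQL built as plain ASCII text by the host language, executed by the external RDBMS, and ASCII rows parsed back, can be sketched in Python as follows. The sqlite3 command-line shell stands in for the SYBASE server purely to show the round trip, and the schema is illustrative; the original interface ran between MUMPS and SYBASE.

        # Sketch: SQL out as ASCII text, pipe-delimited ASCII rows back.
        import subprocess

        sql = b"""
        CREATE TABLE patient (id INTEGER, name TEXT);
        INSERT INTO patient VALUES (1, 'DOE,JOHN');
        SELECT id, name FROM patient;
        """

        # The RDBMS side executes the text and returns delimited ASCII rows.
        out = subprocess.run(["sqlite3", ":memory:"], input=sql,
                             capture_output=True, check=True).stdout

        # The host side parses the ASCII result back into native structures.
        for line in out.decode().splitlines():
            rec_id, name = line.split("|")
            print(rec_id, name)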

  18. Automatic labeling and characterization of objects using artificial neural networks

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Hill, Scott E.; Cromp, Robert F.

    1989-01-01

    Existing NASA-supported scientific databases are usually developed, managed, and populated in a tedious, error-prone, and self-limiting way in terms of what can be described in a relational Database Management System (DBMS). The next generation of Earth remote-sensing platforms, i.e., the Earth Observing System (EOS), will be capable of generating data at a rate of over 300 megabits per second from a suite of instruments designed for different applications. What is needed is an innovative approach that creates object-oriented databases that segment, characterize, and catalog data, that are manageable in a domain-specific context, and whose contents are available interactively and in near-real-time to the user community. Described here is work in progress that utilizes an artificial neural net approach to characterize satellite imagery of undefined objects into high-level data objects. The characterized data are then dynamically allocated to an object-oriented database where they can be reviewed and assessed by a user. The definition, development, and evolution of the overall data system model are steps in the creation of an application-driven, knowledge-based scientific information system.

  19. Specialized microbial databases for inductive exploration of microbial genome sequences

    PubMed Central

    Fang, Gang; Ho, Christine; Qiu, Yaowu; Cubas, Virginie; Yu, Zhou; Cabau, Cédric; Cheung, Frankie; Moszer, Ivan; Danchin, Antoine

    2005-01-01

    Background The enormous amount of genome sequence data calls for user-oriented databases to manage sequences and annotations. Queries must include search tools permitting function identification through exploration of related objects. Methods The GenoList package for collecting and mining microbial genome databases has been rewritten using MySQL as the database management system. Functions that were not available in MySQL, such as nested subqueries, have been implemented. Results Inductive reasoning in the study of genomes starts from "islands of knowledge", centered around genes with some known background. With this concept of "neighborhood" in mind, a modified version of the GenoList structure has been used for organizing sequence data from prokaryotic genomes of particular interest in China. GenoChore, a set of 17 specialized end-user-oriented microbial databases (including one instance from Eukarya, the microsporidium Encephalitozoon cuniculi), has been made publicly available. These databases allow the user to browse genome sequence and annotation data using standard queries. In addition they provide a weekly update of searches against the worldwide protein sequence data libraries, allowing one to monitor annotation updates on genes of interest. Finally, they allow users to search for patterns in DNA or protein sequences, taking into account a clustering of genes into formal operons, as well as providing extra facilities to query sequences using predefined sequence patterns. Conclusion This growing set of specialized microbial databases organizes data created by the first Chinese bacterial genome programs (ThermaList, Thermoanaerobacter tengcongensis; LeptoList, with two different genomes of Leptospira interrogans; and SepiList, Staphylococcus epidermidis), associated with related organisms for comparison. PMID:15698474
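
    As a hedged illustration of how a missing nested-subquery feature can be worked around (MySQL gained subqueries only in version 4.1), the sketch below rewrites an IN(...) subquery as a temporary table plus a join; the gene/operon schema is invented for the example and is not the actual GenoList structure.

        # Sketch: emulating a nested subquery on a MySQL version that lacks them.

        wanted = """
        SELECT g.name FROM gene g
        WHERE g.operon_id IN (SELECT operon_id FROM operon WHERE strand = '+')
        """

        emulated = [
            # Step 1: materialize the inner query.
            "CREATE TEMPORARY TABLE tmp_ops AS "
            "SELECT operon_id FROM operon WHERE strand = '+'",
            # Step 2: turn the IN(...) into a join against the materialized set.
            "SELECT g.name FROM gene g JOIN tmp_ops t USING (operon_id)",
            "DROP TEMPORARY TABLE tmp_ops",
        ]
        for stmt in emulated:
            print(stmt, end=";\n")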

  20. Corridor incident management (CIM)

    DOT National Transportation Integrated Search

    2007-09-01

    The objective of the Corridor Incident Management (CIM) research project was to develop and demonstrate a set of multi-purpose methods, tools and databases to improve corridor incident management in Tennessee, relying primarily on resources already a...

  1. DataHub: Knowledge-based data management for data discovery

    NASA Astrophysics Data System (ADS)

    Handley, Thomas H.; Li, Y. Philip

    1993-08-01

    Currently available database technology is largely designed for business data-processing applications and seems inadequate for scientific applications. The research described in this paper, the DataHub, addresses the issues associated with this shortfall in technology utilization and development. The DataHub development addresses the key issues in scientific data management: scientific database models and resource sharing in a geographically distributed, multi-disciplinary science research environment. The DataHub will thus be a server between data suppliers and data consumers, facilitating data exchange, assisting science data analysis, and providing a systematic approach to science data management. More specifically, the DataHub's objectives are to provide support for (1) exploratory data analysis (i.e., data-driven analysis); (2) data transformations; (3) data semantics capture and usage; (4) analysis-related knowledge capture and usage; and (5) data discovery, ingestion, and extraction. Applying technologies that range from deductive databases, semantic data models, data discovery, knowledge representation and inferencing, and exploratory data analysis techniques to modern man-machine interfaces, DataHub will provide a prototype integrated environment to support research scientists' needs in multiple disciplines (i.e., oceanography, geology, and atmospheric science), while addressing more general science data management issues. Additionally, the DataHub will provide data management services to exploratory data analysis applications such as LinkWinds and NCSA's XIMAGE.

  2. Database Management Systems: New Homes for Migrating Bibliographic Records.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.; Bierbaum, Esther G.

    1987-01-01

    Assesses bibliographic databases as part of visionary text systems such as hypertext and scholars' workstations. Downloading is discussed in terms of the capability to search records and to maintain unique bibliographic descriptions, and relational database management systems, file managers, and text databases are reviewed as possible hosts for…

  3. Efficacy of Auriculotherapy for Constipation in Adults: A Systematic Review and Meta-Analysis of Randomized Controlled Trials

    PubMed Central

    Yang, Li-Hua; Du, Shi-Zheng; Sun, Jin-Fang; Mei, Si-Juan; Wang, Xiao-Qing; Zhang, Yuan-Yuan

    2014-01-01

    Objectives: To assess the clinical evidence of auriculotherapy for constipation treatment and to identify the efficacy of groups using Semen vaccariae or magnetic pellets as taped objects in managing constipation. Methods: Databases were searched, including five English-language databases (the Cochrane Library, PubMed, Embase, CINAHL, and AMED) and four Chinese medical databases. Only randomized controlled trials were included in the review process. Critical appraisal was conducted using the Cochrane risk of bias tool. Results: Seventeen randomized controlled trials (RCTs) met the inclusion criteria, of which 2 had low risk of bias. The primary outcome measures were the improvement rate and total effective rate. A meta-analysis of 15 RCTs showed a moderate, significant effect of auriculotherapy in managing constipation compared with controls (relative risk [RR], 2.06; 95% confidence interval [CI], 1.52–2.79; p<0.00001). The 15 RCTs also showed a moderate, significant effect of auriculotherapy in relieving constipation (RR, 1.28; 95% CI, 1.13–1.44; p<0.0001). For other symptoms associated with constipation, such as abdominal distension or anorexia, results of the meta-analyses showed no statistical significance. Subgroup analysis revealed that use of S. vaccariae and use of magnetic pellets were both statistically favored over the control in relieving constipation. Conclusions: Current evidence illustrates that auriculotherapy, a relatively safe strategy, is probably beneficial in managing constipation. However, most of the eligible RCTs had a high risk of bias, and all were conducted in China. No definitive conclusion can be made because of cultural and geographic differences. Further rigorous RCTs from around the world are warranted to confirm the effect and safety of auriculotherapy for constipation. PMID:25020089

  4. Incorporating client-server database architecture and graphical user interface into outpatient medical records.

    PubMed Central

    Fiacco, P. A.; Rice, W. H.

    1991-01-01

    Computerized medical record systems require structured database architectures for information processing. However, the data must be transferable across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model, with a graphical user interface, into an outpatient medical record system known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture, providing both a distributed database and distributed processing, which improves performance. PMID:1807732

  5. Migration from relational to NoSQL database

    NASA Astrophysics Data System (ADS)

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

    Data generated by real-time applications, social networking sites, and sensor devices is huge in volume and largely unstructured, which makes it difficult for relational database management systems to handle. Data is a precious component of any application and needs to be analysed after being arranged in some structure. Relational databases can only deal with structured data, so there is a need for NoSQL database management systems, which can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow data to be stored in NoSQL databases to handle unstructured data. This paper provides a literature review of some of the recent approaches, proposed by various researchers, for migrating data from relational to NoSQL databases; some researchers have also proposed mechanisms for the coexistence of NoSQL and relational databases. The paper summarizes mechanisms for mapping data stored in relational databases to NoSQL databases, along with various techniques for data transformation and middle-layer solutions.
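
    The core migration step most of the surveyed approaches share, parent and child rows joined across a foreign key becoming one nested document, can be sketched as follows; the schema and data are illustrative.

        # Sketch: denormalizing relational rows into documents for a NoSQL store.
        import json
        import sqlite3

        src = sqlite3.connect(":memory:")
        src.executescript("""
            CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER, body TEXT);
            INSERT INTO author VALUES (1, 'ada');
            INSERT INTO post VALUES (10, 1, 'hello'), (11, 1, 'world');
        """)

        documents = []
        for author_id, name in src.execute("SELECT id, name FROM author"):
            posts = [{"id": pid, "body": body} for pid, body in src.execute(
                "SELECT id, body FROM post WHERE author_id = ?", (author_id,))]
            # One author row plus its child rows -> one denormalized document.
            documents.append({"_id": author_id, "name": name, "posts": posts})

        print(json.dumps(documents, indent=2))
        # These documents would then be bulk-inserted into the document store,
        # e.g. a MongoDB collection via insert_many(documents).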

  6. Delayed Instantiation Bulk Operations for Management of Distributed, Object-Based Storage Systems

    DTIC Science & Technology

    2009-08-01

    source and destination object sets, while they have attribute pages to indicate that history. Fourth, we allow for operations to occur on any objects ... client dialogue to the PostgreSQL database, where server-side functions implement the service logic for the requests. The translation is done ... to satisfy client requests, and performs delayed instantiation bulk operations. It is built around a PostgreSQL database with tables for storing

  7. User-oriented views in health care information systems.

    PubMed

    Portoni, Luisa; Combi, Carlo; Pinciroli, Francesco

    2002-12-01

    In this paper, we present the methodology we adopted in designing and developing an object-oriented database system for the management of medical records. The designed system provides technical solutions to important requirements of most clinical information systems, such as 1) support for tools to create and manage views on data and view schemas, offering different users specific perspectives on data tailored to their needs; 2) the capability to handle in a suitable way the temporal aspects of clinical information; and 3) the effective integration of multimedia data. Remote data access for authorized users is also considered. As a clinical application, we describe the prototype of a user-oriented clinical information system for the archiving and management of multimedia and temporally oriented clinical data related to percutaneous transluminal coronary angioplasty (PTCA) patients. Suitable view schemas for various user roles (cath-lab physician, ward nurse, general practitioner) have been modeled and implemented on the basis of a detailed analysis of the clinical environment, carried out with an object-oriented approach.

  8. Large scale database scrubbing using object oriented software components.

    PubMed

    Herting, R L; Barnes, M R

    1998-01-01

    Now that case managers, quality improvement teams, and researchers use medical databases extensively, the ability to share and disseminate such databases while maintaining patient confidentiality is paramount. A process called scrubbing addresses this problem by removing personally identifying information while keeping the integrity of the medical information intact. Scrubbing entire databases, containing multiple tables, requires that the implicit relationships between data elements in different tables of the database be maintained. To address this issue we developed DBScrub, a Java program that interfaces with any JDBC compliant database and scrubs the database while maintaining the implicit relationships within it. DBScrub uses a small number of highly configurable object-oriented software components to carry out the scrubbing. We describe the structure of these software components and how they maintain the implicit relationships within the database.
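
    DBScrub itself is a Java/JDBC program; the sketch below re-implements its central property in Python purely for illustration: the same real identifier must map to the same surrogate in every table, so the implicit relationships between tables still line up after scrubbing. The schema is hypothetical.

        # Sketch: consistent pseudonymization that preserves cross-table joins.
        import itertools
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE patient (mrn TEXT, name TEXT);
            CREATE TABLE visit   (mrn TEXT, note TEXT);
            INSERT INTO patient VALUES ('MRN-001', 'Doe, John');
            INSERT INTO visit   VALUES ('MRN-001', 'follow-up'), ('MRN-001', 'labs');
        """)

        counter = itertools.count(1)
        pseudonyms = {}  # real value -> stable surrogate, shared across all tables

        def scrub(value):
            if value not in pseudonyms:
                pseudonyms[value] = f"PT{next(counter):06d}"
            return pseudonyms[value]

        for table in ("patient", "visit"):  # scrub the identifying column everywhere
            rows = db.execute(f"SELECT rowid, mrn FROM {table}").fetchall()
            for rowid, mrn in rows:
                db.execute(f"UPDATE {table} SET mrn = ? WHERE rowid = ?",
                           (scrub(mrn), rowid))
        db.execute("UPDATE patient SET name = 'REDACTED'")

        # The implicit patient-visit relationship survives the scrub:
        print(db.execute("""
            SELECT p.mrn, count(*) FROM patient p JOIN visit v USING (mrn)
            GROUP BY p.mrn
        """).fetchall())  # [('PT000001', 2)]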

  9. Work injury management model and implication in Hong Kong: a literature review.

    PubMed

    Chong, Cecilia Suk-Mei; Cheng, Andy Shu-Kei

    2010-01-01

    The objective of this review is to explore work injury management models in the literature and the essential components of the different models. The resulting information could be used to develop an integrated, holistic model that could be applied to the work injury management system in Hong Kong. A keyword search of the MEDLINE and CINAHL databases was conducted. A total of 68 studies related to the management of work injury were found in these electronic databases. Together with citation tracking, 13 studies remained for selection after exclusion screening, and only 7 of those 13 studies met the inclusion criteria for review. Notably, the most important component of the injury management models in the reviewed literature is early intervention. Because of limitations in the Employees' Compensation Ordinance in Hong Kong, there is an impetus to establish a model and practice guideline for work injury management in Hong Kong to ensure the quality of injury management services. At the end of this paper, the authors propose a work injury management model based on the employees' compensation system in Hong Kong. This model can be used as a reference by countries adopting legislation similar to Hong Kong's.

  10. Pragmatic open space box utilization: asteroid survey model using distributed objects management based articulation (DOMBA)

    NASA Astrophysics Data System (ADS)

    Mohammad, Atif Farid; Straub, Jeremy

    2015-05-01

    A multi-craft asteroid survey has significant data synchronization needs. Limited communication speeds drive exacting performance requirements. Relational databases structure data as tables; DOMBA (Distributed Objects Management Based Articulation), by contrast, deals with data in terms of collections. With this, no read/write roadblocks to the data exist. A master/slave architecture is created by utilizing the Gossip protocol. This facilitates expanding a mission that makes an important discovery via the launch of another spacecraft. The Open Space Box Framework facilitates the foregoing while also providing a virtual caching layer to make sure that continuously accessed data is available in memory and that, upon closing the data file, recharging is applied to the data.

  11. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    PubMed

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
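
    To make the contrast concrete, the sketch below (with hypothetical field names) holds the same hierarchical clinical record as a document, the shape a NoSQL or XML store keeps natively, and as the flattened rows a relational schema would require:

```python
import json

# Illustrative only: one hierarchical clinical record in document form versus
# the flattened rows a relational schema would need. Field names are invented.
document = {
    "patient_id": 42,
    "encounters": [
        {"date": "2012-03-01",
         "observations": [
             {"code": "BP", "value": "130/85"},
             {"code": "HR", "value": 72},
         ]},
    ],
}

# Relational equivalent: one row per observation, hierarchy encoded in keys.
rows = [
    (42, "2012-03-01", "BP", "130/85"),
    (42, "2012-03-01", "HR", "72"),
]

# Adding a new nested attribute changes nothing structural in the document...
document["encounters"][0]["observations"].append({"code": "SpO2", "value": 98})
# ...whereas the relational form would need a schema change or an extra table
# if the new attribute did not fit the existing columns.
print(json.dumps(document, indent=2))
print(rows)
```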

  12. Advanced Traffic Management Systems (ATMS) research analysis database system

    DOT National Transportation Integrated Search

    2001-06-01

    The ATMS Research Analysis Database Systems (ARADS) consists of a Traffic Software Data Dictionary (TSDD) and a Traffic Software Object Model (TSOM) for application to microscopic traffic simulation and signal optimization domains. The purpose of thi...

  13. Evaluation of linking pavement related databases.

    DOT National Transportation Integrated Search

    2007-03-01

    In general, the objectives of this study were to identify and solve various issues in linking pavement performance-related databases. The detailed objectives were: to evaluate the state-of-the-art in information technology for data integration and dat...

  14. Managing Content in a Matter of Minutes

    NASA Technical Reports Server (NTRS)

    2004-01-01

    NASA software created to help scientists expeditiously search and organize their research documents is now aiding compliance personnel, law enforcement investigators, and the general public in their efforts to search, store, manage, and retrieve documents more efficiently. Developed at Ames Research Center, NETMARK software was designed to manipulate vast amounts of unstructured and semi-structured NASA documents. NETMARK is both a relational and object-oriented technology built on an Oracle enterprise-wide database. To ensure easy user access, Ames constructed NETMARK as a Web-enabled platform utilizing the latest in Internet technology. One of the significant benefits of the program was its ability to store and manage mission-critical data.

  15. A novel data storage logic in the cloud

    PubMed Central

    Mátyás, Bence; Szarka, Máté; Járvás, Gábor; Kusper, Gábor; Argay, István; Fialowski, Alice

    2016-01-01

    Databases which store and manage long-term scientific information related to life science are used to store huge amounts of quantitative attributes. Introduction of a new entity attribute requires modification of the existing data tables and the programs that use these data tables. The proposed solution increases the number of virtual data tables while the number of screens remains the same. The main objective of the present study was to introduce a logic called Joker Tao (JT) which provides universal data storage for cloud-based databases. This means that any input datum can be interpreted as an entity and as an attribute at the same time, in the same data table. PMID:29026521

  16. A novel data storage logic in the cloud.

    PubMed

    Mátyás, Bence; Szarka, Máté; Járvás, Gábor; Kusper, Gábor; Argay, István; Fialowski, Alice

    2016-01-01

    Databases which store and manage long-term scientific information related to life science are used to store huge amounts of quantitative attributes. Introduction of a new entity attribute requires modification of the existing data tables and the programs that use these data tables. The proposed solution increases the number of virtual data tables while the number of screens remains the same. The main objective of the present study was to introduce a logic called Joker Tao (JT) which provides universal data storage for cloud-based databases. This means that any input datum can be interpreted as an entity and as an attribute at the same time, in the same data table.
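
    One plausible reading of this logic is a single self-referential entity-attribute-value table; the sketch below is our illustration of that reading, not the authors' implementation, with invented sample data:

```python
import sqlite3

# Hedged sketch of a "universal" storage table in the spirit of Joker Tao
# (our reading, not the authors' code): every datum is a row, and the same
# value may act as an entity in one row and an attribute value in another.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jt (entity TEXT, attribute TEXT, value TEXT)")
conn.executemany("INSERT INTO jt VALUES (?, ?, ?)", [
    ("sample_17", "species",    "Zea mays"),
    ("sample_17", "leaf_n_pct", "2.9"),
    ("Zea mays",  "family",     "Poaceae"),   # yesterday's value, today's entity
])
# Introducing a new attribute needs no ALTER TABLE, only a new row.
conn.execute("INSERT INTO jt VALUES ('sample_17', 'chlorophyll_spad', '41.2')")
print(conn.execute("SELECT * FROM jt WHERE entity = 'sample_17'").fetchall())
```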

  17. Implementation of a Distributed Object-Oriented Database Management System

    DTIC Science & Technology

    1989-03-01

    and heuristic algorithms. A method for determining unit allocation by splitting relations in the conceptual schema based on queries and updates is...level frameworks can provide to the user the appearance of many closely integrated tools. In particular, the KBSA tools use many high level...development process should begin first with conceptual design of the system. Approximately one month should be used to decide how the new projects

  18. An optimal user-interface for EPIMS database conversions and SSQ 25002 EEE parts screening

    NASA Technical Reports Server (NTRS)

    Watson, John C.

    1996-01-01

    The Electrical, Electronic, and Electromechanical (EEE) Parts Information Management System (EPIMS) database was selected by the International Space Station Parts Control Board for providing parts information to NASA managers and contractors. Parts data is transferred to the EPIMS database by converting parts list data to the EPIMS Data Exchange File Format. In general, parts list information received from contractors and suppliers does not convert directly into the EPIMS Data Exchange File Format. Often parts lists use different variable and record field assignments. Many of the EPIMS variables are not defined in the parts lists received. The objective of this work was to develop an automated system for translating parts lists into the EPIMS Data Exchange File Format for upload into the EPIMS database. Once EEE parts information has been transferred to the EPIMS database, it is necessary to screen parts data in accordance with the provisions of the SSQ 25002 Supplemental List of Qualified Electrical, Electronic, and Electromechanical Parts, Manufacturers, and Laboratories (QEPM&L). The SSQ 25002 standards are used to identify parts which satisfy the requirements for spacecraft applications. An additional objective for this work was to develop an automated system which would screen EEE parts information against the SSQ 25002 to inform managers of the qualification status of parts used in spacecraft applications. The EPIMS Database Conversion and SSQ 25002 User Interfaces are designed to interface through the World-Wide-Web (WWW)/Internet to provide accessibility by NASA managers and contractors.
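
    A translator of this kind is essentially a field-mapping exercise. The sketch below is hypothetical throughout: the supplier column names, the target field names, and the default screening field all stand in for the real EPIMS Data Exchange File Format, which the abstract does not reproduce:

```python
import csv, io

# Illustrative translator sketch: the field map and target record layout are
# invented stand-ins for the EPIMS Data Exchange File Format.
FIELD_MAP = {          # supplier column -> exchange-format field
    "PART_NO": "part_number",
    "MFR":     "manufacturer",
    "DESC":    "description",
}
DEFAULTS = {"screening_status": "UNSCREENED"}   # fields absent from supplier lists

def translate(supplier_csv):
    """Translate one supplier parts list into exchange-format records."""
    records = []
    for row in csv.DictReader(io.StringIO(supplier_csv)):
        rec = dict(DEFAULTS)
        for src, dst in FIELD_MAP.items():
            rec[dst] = row.get(src, "").strip()
        records.append(rec)
    return records

sample = "PART_NO,MFR,DESC\nRNC55H1002FS,Vishay,Resistor 10k 1%\n"
print(translate(sample))
```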

  19. Executing Complexity-Increasing Queries in Relational (MySQL) and NoSQL (MongoDB and EXist) Size-Growing ISO/EN 13606 Standardized EHR Databases

    PubMed Central

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2018-01-01

    This research presents a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate to maintain standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results of improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or editing and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form. PMID:29608174

  20. Executing Complexity-Increasing Queries in Relational (MySQL) and NoSQL (MongoDB and EXist) Size-Growing ISO/EN 13606 Standardized EHR Databases.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2018-03-19

    This research presents a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate to maintain standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results of improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or editing and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form.
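
    A minimal timing harness in the spirit of the protocol might run the same query against doubling-sized databases and average the response times; here sqlite3 stands in for the MySQL, MongoDB, and eXist back ends actually benchmarked:

```python
import sqlite3, time

# Timing harness sketch: same query, doubling-sized databases, averaged
# response times. sqlite3 is a stand-in for the paper's three back ends.
def timed_query(conn, sql, repeats=5):
    start = time.perf_counter()
    for _ in range(repeats):
        conn.execute(sql).fetchall()
    return (time.perf_counter() - start) / repeats

for n_extracts in (5000, 10000, 20000):          # doubling-sized databases
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE extract (id INTEGER PRIMARY KEY, payload TEXT)")
    conn.executemany("INSERT INTO extract VALUES (?, ?)",
                     ((i, f"ehr-{i}") for i in range(n_extracts)))
    t = timed_query(conn, "SELECT COUNT(*) FROM extract WHERE payload LIKE '%ehr-1%'")
    print(f"{n_extracts:6d} extracts: {t * 1000:.2f} ms")  # roughly linear growth
```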

  1. Engineering the object-relation database model in O-Raid

    NASA Technical Reports Server (NTRS)

    Dewan, Prasun; Vikram, Ashish; Bhargava, Bharat

    1989-01-01

    Raid is a distributed database system based on the relational model. O-Raid is an extension of the Raid system and will support complex data objects. The design of O-Raid is evolutionary and retains all features of relational database systems and those of a general purpose object-oriented programming language. O-Raid has several novel properties. Objects, classes, and inheritance are supported together with a predicate-based relational query language. O-Raid objects are compatible with C++ objects and may be read and manipulated by a C++ program without any 'impedance mismatch'. Relations and columns within relations may themselves be treated as objects with associated variables and methods. Relations may contain heterogeneous objects, that is, objects of more than one class in a certain column, which can individually evolve by being reclassified. Special facilities are provided to reduce the data search in a relation containing complex objects.
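
    To illustrate the heterogeneous-relation idea outside O-Raid's C++ setting, the sketch below (invented classes, plain Python) keeps objects of more than one class in a single column and reclassifies one of them individually:

```python
# Conceptual sketch (in Python rather than O-Raid's C++) of a relation whose
# column holds objects of more than one class, with per-object reclassification.
class Document:
    def __init__(self, title): self.title = title
    def describe(self): return f"document '{self.title}'"

class Image(Document):                      # a subclass in the same column
    def describe(self): return f"image '{self.title}'"

# One relation, one 'content' column, heterogeneous objects in that column.
relation = [
    {"id": 1, "content": Document("design notes")},
    {"id": 2, "content": Image("schematic")},
]

# Reclassify an individual object: rebuild it as a more specific class.
relation[0]["content"] = Image(relation[0]["content"].title)

for row in relation:
    print(row["id"], row["content"].describe())   # dispatch by actual class
```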

  2. The Watershed and River Systems Management Program: Decision Support for Water- and Environmental-Resource Management

    NASA Astrophysics Data System (ADS)

    Leavesley, G.; Markstrom, S.; Frevert, D.; Fulp, T.; Zagona, E.; Viger, R.

    2004-12-01

    Increasing demands for limited fresh-water supplies, and increasing complexity of water-management issues, present the water-resource manager with the difficult task of achieving an equitable balance of water allocation among a diverse group of water users. The Watershed and River System Management Program (WARSMP) is a cooperative effort between the U.S. Geological Survey (USGS) and the Bureau of Reclamation (BOR) to develop and deploy a database-centered, decision-support system (DSS) to address these multi-objective, resource-management problems. The decision-support system couples the USGS Modular Modeling System (MMS) with the BOR RiverWare tools using a shared relational database. MMS is an integrated system of computer software that provides a research and operational framework to support the development and integration of a wide variety of hydrologic and ecosystem models, and their application to water- and ecosystem-resource management. RiverWare is an object-oriented reservoir and river-system modeling framework developed to provide tools for evaluating and applying water-allocation and management strategies. The modeling capabilities of MMS and RiverWare include simulating watershed runoff, reservoir inflows, and the impacts of resource-management decisions on municipal, agricultural, and industrial water users, environmental concerns, power generation, and recreational interests. Forecasts of future climatic conditions are a key component in the application of MMS models to resource-management decisions. Forecast methods applied in MMS include a modified version of the National Weather Service's Extended Streamflow Prediction Program (ESP) and statistical downscaling from atmospheric models. The WARSMP DSS is currently operational in the Gunnison River Basin, Colorado; Yakima River Basin, Washington; Rio Grande Basin in Colorado and New Mexico; and Truckee River Basin in California and Nevada.

  3. United States Air Force Summer Research Program -- 1993. Volume 4. Rome Laboratory

    DTIC Science & Technology

    1993-12-01

    H., eds., Object-Oriented Concepts, Databases, and Applications, Addison-Wesley, Reading, MA, 1989. [Lano91] Lano, K., "Z++, An Object-Orientated..." The Target Filter Manager responds to requests for data and accesses the target database. [Figure residue removed: Figure 12, contour plot of antenna pattern, QC2 algorithm.] UPDATING PROBABILISTIC DATABASES, Michael A...

  4. An Efficient Method for the Retrieval of Objects by Topological Relations in Spatial Database Systems.

    ERIC Educational Resources Information Center

    Lin, P. L.; Tan, W. H.

    2003-01-01

    Presents a new method to improve the performance of query processing in a spatial database. Experiments demonstrated that performance of database systems can be improved because both the number of objects accessed and number of objects requiring detailed inspection are much less than those in the previous approach. (AEF)
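
    The reported savings suggest the classic filter-and-refine pattern: a cheap bounding-box test prunes most objects so that few need a costly exact topological check. A minimal sketch of that pattern, with invented geometry and a placeholder for the exact test:

```python
# Filter-and-refine sketch of the idea behind the reported result: a cheap
# bounding-box test prunes candidates so that few objects need the costly
# exact topological check. Geometry here is deliberately minimal.
objects = [
    {"id": 1, "bbox": (0, 0, 2, 2), "polygon": "..."},
    {"id": 2, "bbox": (5, 5, 9, 9), "polygon": "..."},
    {"id": 3, "bbox": (1, 1, 3, 3), "polygon": "..."},
]

def bbox_overlaps(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def exact_topology_check(obj, query):      # placeholder for the costly test
    return True

query_window = (0, 0, 2.5, 2.5)
candidates = [o for o in objects if bbox_overlaps(o["bbox"], query_window)]
hits = [o for o in candidates if exact_topology_check(o, query_window)]
print([o["id"] for o in hits])   # only objects 1 and 3 were inspected in detail
```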

  5. Technology for Space Station Evolution: the Data Management System

    NASA Technical Reports Server (NTRS)

    Abbott, L.

    1990-01-01

    Viewgraphs on the data management system (DMS) for the space station evolution are presented. Topics covered include DMS architecture and implementation approach; and an overview of the runtime object database.

  6. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1992-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
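
    The pointer-caching idea can be sketched in a few lines: a cross-database search returns and caches pointers rather than moving data, and an object is fetched only when its cached pointer is dereferenced. Site names and object ids below are invented:

```python
# Sketch of pointer caching: a search returns (site, object id) pointers,
# no object data moves, and data is fetched only on dereference.
REMOTE_SITES = {
    "site_a": {101: "spectrum-101", 102: "spectrum-102"},
    "site_b": {201: "image-201"},
}

def search(predicate):
    """Return pointers to matching objects; no object data moves."""
    return [(site, oid) for site, objs in REMOTE_SITES.items()
            for oid, data in objs.items() if predicate(data)]

view_cache = search(lambda d: d.startswith("spectrum"))   # cached pointer set
print(view_cache)                       # [('site_a', 101), ('site_a', 102)]

# Dereference lazily, only for the objects the scientist actually opens.
site, oid = view_cache[0]
print(REMOTE_SITES[site][oid])
```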

  7. A method to implement fine-grained access control for personal health records through standard relational database queries.

    PubMed

    Sujansky, Walter V; Faus, Sam A; Stone, Ethan; Brennan, Patricia Flatley

    2010-10-01

    Online personal health records (PHRs) enable patients to access, manage, and share certain of their own health information electronically. This capability creates the need for precise access-controls mechanisms that restrict the sharing of data to that intended by the patient. The authors describe the design and implementation of an access-control mechanism for PHR repositories that is modeled on the eXtensible Access Control Markup Language (XACML) standard, but intended to reduce the cognitive and computational complexity of XACML. The authors implemented the mechanism entirely in a relational database system using ANSI-standard SQL statements. Based on a set of access-control rules encoded as relational table rows, the mechanism determines via a single SQL query whether a user who accesses patient data from a specific application is authorized to perform a requested operation on a specified data object. Testing of this query on a moderately large database has demonstrated execution times consistently below 100ms. The authors include the details of the implementation, including algorithms, examples, and a test database as Supplementary materials. Copyright © 2010 Elsevier Inc. All rights reserved.
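
    As a flavor of the approach (the authors' actual rule schema and SQL are in their supplementary materials), the simplified sketch below stores rules as relational rows and renders each authorization decision with a single parameterized query:

```python
import sqlite3

# Simplified sketch of rule-table access control decided by one SQL query;
# the rule schema here is an invention, not the paper's supplement.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE access_rules (
    grantee TEXT, application TEXT, operation TEXT, data_class TEXT
);
INSERT INTO access_rules VALUES ('dr_smith', 'portal', 'read', 'medications');
""")

def authorized(user, app, op, data_class):
    row = conn.execute("""
        SELECT 1 FROM access_rules
        WHERE grantee = ? AND application = ?
          AND operation = ? AND data_class = ?
        LIMIT 1""", (user, app, op, data_class)).fetchone()
    return row is not None

print(authorized("dr_smith", "portal", "read",  "medications"))  # True
print(authorized("dr_smith", "portal", "write", "medications"))  # False
```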

  8. Database Management: Building, Changing and Using Databases. Collected Papers and Abstracts of the Mid-Year Meeting of the American Society for Information Science (15th, Portland, Oregon, May 1986).

    ERIC Educational Resources Information Center

    American Society for Information Science, Washington, DC.

    This document contains abstracts of papers on database design and management which were presented at the 1986 mid-year meeting of the American Society for Information Science (ASIS). Topics considered include: knowledge representation in a bilingual art history database; proprietary database design; relational database design; in-house databases;…

  9. A data model and database for high-resolution pathology analytical image informatics.

    PubMed

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications, such as shape and texture, and classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with the CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei.
Modeling and managing pathology image analysis results in a database provides immediate benefits for the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support by other approaches such as programming languages. Standardized, semantically annotated data representations and interfaces also make it possible to share image data and analysis results more efficiently.

  10. A web based relational database management system for filariasis control

    PubMed Central

    Murty, Upadhyayula Suryanarayana; Kumar, Duvvuri Venkata Rama Satya; Sriram, Kumaraswamy; Rao, Kadiri Madhusudhan; Bhattacharyulu, Chakravarthula Hayageeva Narasimha Venakata; Praveen, Bhoopathi; Krishna, Amirapu Radha

    2005-01-01

    The present study describes an RDBMS (relational database management system) for the effective management of Filariasis, a vector-borne disease. Filariasis infects 120 million people from 83 countries. The possible re-emergence of the disease and the complexity of existing control programs warrant the development of new strategies. A database containing comprehensive data associated with filariasis finds utility in disease control. We have developed a database containing information on the socio-economic status of patients, mosquito collection procedures, mosquito dissection data, filariasis survey reports and mass blood data. The database can be searched using a user-friendly web interface. Availability http://www.webfil.org (login and password can be obtained from the authors) PMID:17597846

  11. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1993-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.

  12. How I do it: a practical database management system to assist clinical research teams with data collection, organization, and reporting.

    PubMed

    Lee, Howard; Chapiro, Julius; Schernthaner, Rüdiger; Duran, Rafael; Wang, Zhijun; Gorodetski, Boris; Geschwind, Jean-François; Lin, MingDe

    2015-04-01

    The objective of this study was to demonstrate that an intra-arterial liver therapy clinical research database system is a more workflow-efficient and robust tool for clinical research than a spreadsheet storage system. The database system could be used to generate clinical research study populations easily with custom search and retrieval criteria. A questionnaire was designed and distributed to 21 board-certified radiologists to assess current data storage problems and clinician reception to a database management system. Based on the questionnaire findings, a customized database and user interface system were created to perform automatic calculations of clinical scores, including staging systems such as the Child-Pugh and Barcelona Clinic Liver Cancer, and to facilitate data input and output. Questionnaire participants were favorable to a database system. The interface retrieved study-relevant data accurately and effectively. The database effectively produced easy-to-read study-specific patient populations with custom-defined inclusion/exclusion criteria. The database management system is workflow-efficient and robust in retrieving, storing, and analyzing data. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  13. The Design and Implementation of a Relational to Network Query Translator for a Distributed Database Management System.

    DTIC Science & Technology

    1985-12-01

    Relational to Network Query Translator for a Distributed Database Management System. Thesis by Kevin H. Mahoney, Captain, USAF (AFIT/GCS/ENG/85D-7), presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Computer Systems.

  14. Observational database for studies of nearby universe

    NASA Astrophysics Data System (ADS)

    Kaisina, E. I.; Makarov, D. I.; Karachentsev, I. D.; Kaisin, S. S.

    2012-01-01

    We present the description of a database of galaxies of the Local Volume (LVG), located within 10 Mpc around the Milky Way. It contains more than 800 objects. Based on an analysis of functional capabilities, we used the PostgreSQL DBMS as a management system for our LVG database. Applying semantic modelling methods, we developed a physical ER-model of the database. We describe the developed architecture of the database table structure, and the implemented web-access, available at http://www.sao.ru/lv/lvgdb.

  15. Analysis and preliminary design of Kunming land use and planning management information system

    NASA Astrophysics Data System (ADS)

    Li, Li; Chen, Zhenjie

    2007-06-01

    This article analyzes the Kunming land use planning and management information system in terms of its system-building objectives and requirements, and pins down the system's users, functional requirements, and construction requirements. On this basis, a three-tier system architecture based on C/S and B/S is defined: the user interface layer, the business logic layer, and the data services layer. According to the requirements for the construction of a land use planning and management information database, derived from standards of the Ministry of Land and Resources and the construction program of the Golden Land Project, this paper divides the system databases into a planning document database, a planning implementation database, a working map database, and a system maintenance database. In the design of the system interface, this paper uses various methods and data formats for data transmission and sharing between upper and lower levels. According to the system analysis results, the main modules of the system are designed as follows: planning data management, planning and annual plan preparation and control, day-to-day planning management, planning revision management, decision-making support, thematic query statistics, planning public participation, and so on; in addition, the system realization technologies are discussed in terms of the system operation mode, development platform, and other aspects.

  16. [The therapeutic drug monitoring network server of tacrolimus for Chinese renal transplant patients].

    PubMed

    Deng, Chen-Hui; Zhang, Guan-Min; Bi, Shan-Shan; Zhou, Tian-Yan; Lu, Wei

    2011-07-01

    This study aims to develop a therapeutic drug monitoring (TDM) network server of tacrolimus for Chinese renal transplant patients, which can help doctors manage patients' information and provides three levels of predictions. The database management system MySQL was employed to build and manage the database of patient and doctor information, and hypertext mark-up language (HTML) and Java server pages (JSP) technology were employed to construct the network server for database management. Based on the population pharmacokinetic model of tacrolimus for Chinese renal transplant patients, the above programming languages were used to construct the population prediction and subpopulation prediction modules. Based on the Bayesian principle and maximization of the posterior probability function, an objective function was established and minimized by an optimization algorithm to estimate a patient's individual pharmacokinetic parameters. The network server proved to have the basic functions for database management and the three levels of prediction needed to aid doctors in optimizing the regimen of tacrolimus for Chinese renal transplant patients.
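
    The individual-level prediction described here amounts to maximum a posteriori estimation: an objective balancing the population prior against the patient's observed concentrations is minimized numerically. The sketch below, with an illustrative one-compartment model and invented numbers, shows the shape of such an objective; it is not the server's actual model:

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of MAP estimation of individual pharmacokinetic parameters:
# the objective trades the population prior against the patient's own
# concentrations. Model form and all numbers are illustrative only.
dose = 5.0                                 # mg
t_obs = np.array([2.0, 6.0, 12.0])         # sampling times (h)
c_obs = np.array([9.5, 7.2, 5.1])          # observed concentrations (ng/mL)

cl_pop, v_pop = 28.0, 400.0                # population means (L/h, L)
omega, sigma = 0.3, 0.8                    # prior SD (log scale), residual SD

def predict(cl, v):
    """One-compartment model: C(t) = (dose/V) * exp(-CL/V * t)."""
    ke = cl / v
    return (dose * 1000 / v) * np.exp(-ke * t_obs)

def neg_log_posterior(theta):
    cl, v = np.exp(theta)                  # optimize on the log scale
    prior = ((theta[0] - np.log(cl_pop)) ** 2
             + (theta[1] - np.log(v_pop)) ** 2) / (2 * omega ** 2)
    resid = np.sum((c_obs - predict(cl, v)) ** 2) / (2 * sigma ** 2)
    return prior + resid

fit = minimize(neg_log_posterior, x0=[np.log(cl_pop), np.log(v_pop)])
print("individual CL, V:", np.exp(fit.x))
```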

  17. A GIS-Enabled, Michigan-Specific, Hierarchical Groundwater Modeling and Visualization System

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Li, S.; Mandle, R.; Simard, A.; Fisher, B.; Brown, E.; Ross, S.

    2005-12-01

    Efficient management of groundwater resources relies on a comprehensive database that represents the characteristics of the natural groundwater system as well as analysis and modeling tools to describe the impacts of decision alternatives. Many agencies in Michigan have spent several years compiling expensive and comprehensive surface water and groundwater inventories and other related spatial data that describe their respective areas of responsibility. However, most often this wealth of descriptive data has only been utilized for basic mapping purposes. The benefits from analyzing these data, using GIS analysis functions or externally developed analysis models or programs, have yet to be systematically realized. In this talk, we present a comprehensive software environment that allows Michigan groundwater resources managers and frontline professionals to make more effective use of the available data and improve their ability to manage and protect groundwater resources, address potential conflicts, design cleanup schemes, and prioritize investigation activities. In particular, we take advantage of the Interactive Ground Water (IGW) modeling system and convert it to a customized software environment specifically for analyzing, modeling, and visualizing the Michigan statewide groundwater database. The resulting Michigan IGW modeling system (IGW-M) is completely window-based, fully interactive, and seamlessly integrated with a GIS mapping engine. The system operates in real-time (on the fly) providing dynamic, hierarchical mapping, modeling, spatial analysis, and visualization. Specifically, IGW-M allows water resources and environmental professionals in Michigan to: * Access and utilize the extensive data from the statewide groundwater database, interactively manipulate GIS objects, and display and query the associated data and attributes; * Analyze and model the statewide groundwater database, interactively convert GIS objects into numerical model features, automatically extract data and attributes, and simulate unsteady groundwater flow and contaminant transport in response to water and land management decisions; * Visualize and map model simulations and predictions with data from the statewide groundwater database in a seamless interactive environment. IGW-M has the potential to significantly improve the productivity of Michigan groundwater management investigations. It changes the role of engineers and scientists in modeling and analyzing the statewide groundwater database from heavily physical to cognitive problem-solving and decision-making tasks. The seamless real-time integration, real-time visual interaction, and real-time processing capability allow a user to focus on critical management issues, conflicts, and constraints, to quickly and iteratively examine conceptual approximations, management and planning scenarios, and site characterization assumptions, to identify dominant processes, to evaluate data worth and sensitivity, and to guide further data-collection activities. We illustrate the power and effectiveness of the IGW-M modeling and visualization system with a real case study and a real-time, live demonstration.

  18. FOUNTAIN: A JAVA open-source package to assist large sequencing projects

    PubMed Central

    Buerstedde, Jean-Marie; Prill, Florian

    2001-01-01

    Background Better automation, lower cost per reaction and a heightened interest in comparative genomics have led to a dramatic increase in DNA sequencing activities. Although the large sequencing projects of specialized centers are supported by in-house bioinformatics groups, many smaller laboratories face difficulties managing the appropriate processing and storage of their sequencing output. The challenges include documentation of clones, templates and sequencing reactions, and the storage, annotation and analysis of the large number of generated sequences. Results We describe here a new program, named FOUNTAIN, for the management of large sequencing projects. FOUNTAIN uses the JAVA computer language and data storage in a relational database. Starting with a collection of sequencing objects (clones), the program generates and stores information related to the different stages of the sequencing project using a web browser interface for user input. The generated sequences are subsequently imported and annotated based on BLAST searches against the public databases. In addition, simple algorithms to cluster sequences and determine putative polymorphic positions are implemented. Conclusions A simple, but flexible and scalable software package is presented to facilitate data generation and storage for large sequencing projects. Open source and largely platform and database independent, we wish FOUNTAIN to be improved and extended in a community effort. PMID:11591214

  19. EasyKSORD: A Platform of Keyword Search Over Relational Databases

    NASA Astrophysics Data System (ADS)

    Peng, Zhaohui; Li, Jing; Wang, Shan

    Keyword Search Over Relational Databases (KSORD) enables casual users to use keyword queries (a set of keywords) to search relational databases just like searching the Web, without any knowledge of the database schema or any need of writing SQL queries. Based on our previous work, we design and implement a novel KSORD platform named EasyKSORD for users and system administrators to use and manage different KSORD systems in a novel and simple manner. EasyKSORD supports advanced queries, efficient data-graph-based search engines, multiform result presentations, and system logging and analysis. Through EasyKSORD, users can search relational databases easily and read search results conveniently, and system administrators can easily monitor and analyze the operations of KSORD and manage KSORD systems much better.

  20. Towards the Interoperability of Web, Database, and Mass Storage Technologies for Petabyte Archives

    NASA Technical Reports Server (NTRS)

    Moore, Reagan; Marciano, Richard; Wan, Michael; Sherwin, Tom; Frost, Richard

    1996-01-01

    At the San Diego Supercomputer Center, a massive data analysis system (MDAS) is being developed to support data-intensive applications that manipulate terabyte sized data sets. The objective is to support scientific application access to data whether it is located at a Web site, stored as an object in a database, and/or stored in an archival storage system. We are developing a suite of demonstration programs which illustrate how Web, database (DBMS), and archival storage (mass storage) technologies can be integrated. An application presentation interface is being designed that integrates data access to all of these sources. We have developed a data movement interface between the Illustra object-relational database and the NSL UniTree archival storage system running in a production mode at the San Diego Supercomputer Center. With this interface, an Illustra client can transparently access data on UniTree under the control of the Illustra DBMS server. The current implementation is based on the creation of a new DBMS storage manager class, and a set of library functions that allow the manipulation and migration of data stored as Illustra 'large objects'. We have extended this interface to allow a Web client application to control data movement between its local disk, the Web server, the DBMS Illustra server, and the UniTree mass storage environment. This paper describes some of the current approaches successfully integrating these technologies. This framework is measured against a representative sample of environmental data extracted from the San Diego Bay Environmental Data Repository. Practical lessons are drawn and critical research areas are highlighted.

  1. Save medical personnel's time by improved user interfaces.

    PubMed

    Kindler, H

    1997-01-01

    Common objectives in the industrial countries are the improvement of quality of care, clinical effectiveness, and cost control. Cost control, in particular, has been addressed through the introduction of case mix systems for reimbursement by social-security institutions. More data is required to enable quality improvement, increases in clinical effectiveness and for juridical reasons. At first glance, this documentation effort is contradictory to cost reduction. However, integrated services for resource management based on better documentation should help to reduce costs. The clerical effort for documentation should be decreased by providing a co-operative working environment for healthcare professionals applying sophisticated human-computer interface technology. Additional services, e.g., automatic report generation, increase the efficiency of healthcare personnel. Modelling the medical work flow forms an essential prerequisite for integrated resource management services and for co-operative user interfaces. A user interface aware of the work flow provides intelligent assistance by offering the appropriate tools at the right moment. Nowadays there is a trend to client/server systems with relational databases or object-oriented databases as repository. The work flows used for controlling purposes and to steer the user interfaces must be represented in the repository.

  2. Selecting a Relational Database Management System for Library Automation Systems.

    ERIC Educational Resources Information Center

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  3. Environmental modeling and recognition for an autonomous land vehicle

    NASA Technical Reports Server (NTRS)

    Lawton, D. T.; Levitt, T. S.; Mcconnell, C. C.; Nelson, P. C.

    1987-01-01

    An architecture for object modeling and recognition for an autonomous land vehicle is presented. Examples of objects of interest include terrain features, fields, roads, horizon features, trees, etc. The architecture is organized around a set of databases for generic object models and perceptual structures, temporary memory for the instantiation of object and relational hypotheses, and a long term memory for storing stable hypotheses that are affixed to the terrain representation. Multiple inference processes operate over these databases. Researchers describe these particular components: the perceptual structure database, the grouping processes that operate over it, schemas, and the long term terrain database. A processing example that matches predictions from the long term terrain model to imagery, extracts significant perceptual structures for consideration as potential landmarks, and extracts a relational structure to update the long term terrain database is given.

  4. Organizing, exploring, and analyzing antibody sequence data: the case for relational-database managers.

    PubMed

    Owens, John

    2009-01-01

    Technological advances in the acquisition of DNA and protein sequence information and the resulting onrush of data can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational-database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is equal to that required by electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational-database manager into larger database families that contribute to local analyses, reports, interactive HTML pages, or exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of Microsoft, Macintosh, and UNIX operating systems.

  5. CDM analysis

    NASA Technical Reports Server (NTRS)

    Larson, Robert E.; Mcentire, Paul L.; Oreilly, John G.

    1993-01-01

    The C Data Manager (CDM) is an advanced tool for creating an object-oriented database and for processing queries related to objects stored in that database. The CDM source code was purchased and will be modified over the course of the Arachnid project. In this report, the modified CDM is referred to as MCDM. Using MCDM, a detailed series of experiments was designed and conducted on a Sun Sparcstation. The primary results and analysis of the CDM experiment are provided in this report. The experiments involved creating the Long-form Faint Source Catalog (LFSC) database and then analyzing it with respect to following: (1) the relationships between the volume of data and the time required to create a database; (2) the storage requirements of the database files; and (3) the properties of query algorithms. The effort focused on defining, implementing, and analyzing seven experimental scenarios: (1) find all sources by right ascension--RA; (2) find all sources by declination--DEC; (3) find all sources in the right ascension interval--RA1, RA2; (4) find all sources in the declination interval--DEC1, DEC2; (5) find all sources in the rectangle defined by--RA1, RA2, DEC1, DEC2; (6) find all sources that meet certain compound conditions; and (7) analyze a variety of query algorithms. Throughout this document, the numerical results obtained from these scenarios are reported; conclusions are presented at the end of the document.
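
    Several of the scenarios are classic range queries. The sketch below phrases scenarios 3 and 5 as interval and rectangle searches in SQL over an invented source table, with sqlite3 standing in for the MCDM object store:

```python
import sqlite3

# Catalog range-query sketch: interval search on RA (scenario 3) and a
# rectangle search on RA/DEC (scenario 5). The table layout is illustrative,
# not the LFSC schema itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source (id INTEGER PRIMARY KEY, ra REAL, dec REAL)")
conn.execute("CREATE INDEX idx_radec ON source (ra, dec)")
conn.executemany("INSERT INTO source (ra, dec) VALUES (?, ?)",
                 [(10.1, -5.0), (10.4, -4.2), (200.0, 33.3)])

ra1, ra2, dec1, dec2 = 10.0, 10.5, -6.0, -4.0
print(conn.execute("SELECT id FROM source WHERE ra BETWEEN ? AND ?",
                   (ra1, ra2)).fetchall())                    # scenario 3
print(conn.execute("""SELECT id FROM source
                      WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?""",
                   (ra1, ra2, dec1, dec2)).fetchall())        # scenario 5
```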

  6. Object-Oriented Approach to Integrating Database Semantics. Volume 4.

    DTIC Science & Technology

    1987-12-01

    schemata for: 1. Object Classification Schema -- Entities; 2. Object Structure and Relationship Schema -- Relations; 3. Operation Classification and... relationships are represented in a database is non-intuitive for naive users. It is difficult to access and combine information in multiple databases. In this...from the CURRENT_CLASSES table. Choosing a selected item de-selects it. Choose 0 to exit. 1. STUDENTS 2. CURRENT_CLASSES 3. MANAGEMENT_CLASS

  7. PACSY, a relational database management system for protein structure and chemical shift analysis.

    PubMed

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
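
    The linked-table pattern, tables joined by key identification numbers so one query combines structural and chemical-shift information, can be sketched as follows; the table and column names are invented and are not PACSY's actual schema:

```python
import sqlite3

# Linked-table sketch: key identification numbers join coordinate-derived
# and chemical-shift information in one query. Names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE atom_coord (atom_id INTEGER PRIMARY KEY, residue TEXT, torsion_phi REAL);
CREATE TABLE chem_shift (atom_id INTEGER, shift_ppm REAL);
INSERT INTO atom_coord VALUES (1, 'ALA', -60.5);
INSERT INTO chem_shift VALUES (1, 52.3);
""")
rows = conn.execute("""
    SELECT a.residue, a.torsion_phi, s.shift_ppm
    FROM atom_coord a JOIN chem_shift s ON a.atom_id = s.atom_id
    WHERE a.residue = 'ALA'
""").fetchall()
print(rows)   # structure- and shift-derived information combined in one query
```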

  8. Pavement management system for City of Madison.

    DOT National Transportation Integrated Search

    2016-11-01

    This project aims to implement a pavement management system (PMS) for the City of Madison using four specific objectives: 1) build a city-wide GIS database for PMS compatible and incorporable with the city's GIS system; 2) identify feasible pav...

  9. Representing metabolic pathway information: an object-oriented approach.

    PubMed

    Ellis, L B; Speedie, S M; McLeish, R

    1998-01-01

    The University of Minnesota Biocatalysis/Biodegradation Database (UM-BBD) is a website providing information and dynamic links for microbial metabolic pathways, enzyme reactions, and their substrates and products. The Compound, Organism, Reaction and Enzyme (CORE) object-oriented database management system was developed to contain and serve this information. CORE was developed using Java, an object-oriented programming language, and PSE persistent object classes from Object Design, Inc. CORE dynamically generates descriptive web pages for reactions, compounds and enzymes, and reconstructs ad hoc pathway maps starting from any UM-BBD reaction. CORE code is available from the authors upon request. CORE is accessible through the UM-BBD at: http://www.labmed.umn.edu/umbbd/index.html.
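
    Ad hoc pathway reconstruction from any starting reaction can be sketched as a traversal that repeatedly follows products to the reactions consuming them; the classes and the toluene pathway data below are illustrative, not the CORE code:

```python
# Sketch of ad hoc pathway reconstruction: from a starting reaction, follow
# each reaction's products to the reactions that consume them. The objects
# and pathway data are illustrative.
class Reaction:
    def __init__(self, name, substrates, products):
        self.name, self.substrates, self.products = name, substrates, products

reactions = [
    Reaction("R1", ["toluene"],        ["benzyl alcohol"]),
    Reaction("R2", ["benzyl alcohol"], ["benzaldehyde"]),
    Reaction("R3", ["benzaldehyde"],   ["benzoate"]),
]

def pathway_from(start, reactions):
    """Chain reactions whose substrates match the previous products."""
    path, current = [start], set(start.products)
    while True:
        nxt = next((r for r in reactions
                    if r not in path and current & set(r.substrates)), None)
        if nxt is None:
            return path
        path.append(nxt)
        current = set(nxt.products)

print(" -> ".join(r.name for r in pathway_from(reactions[0], reactions)))
```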

  10. Development of an electronic database for Acute Pain Service outcomes

    PubMed Central

    Love, Brandy L; Jensen, Louise A; Schopflocher, Donald; Tsui, Ban CH

    2012-01-01

    BACKGROUND: Quality assurance is increasingly important in the current health care climate. An electronic database can be used for tracking patient information and as a research tool to provide quality assurance for patient care. OBJECTIVE: An electronic database was developed for the Acute Pain Service, University of Alberta Hospital (Edmonton, Alberta) to record patient characteristics, identify at-risk populations, compare treatment efficacies and guide practice decisions. METHOD: Steps in the database development involved identifying the goals for use, relevant variables to include, and a plan for data collection, entry and analysis. Protocols were also created for data cleaning quality control. The database was evaluated with a pilot test using existing data to assess data collection burden, accuracy and functionality of the database. RESULTS: A literature review resulted in an evidence-based list of demographic, clinical and pain management outcome variables to include. Time to assess patients and collect the data was 20 min to 30 min per patient. Limitations were primarily software related, although initial data collection completion was only 65% and accuracy of data entry was 96%. CONCLUSIONS: The electronic database was found to be relevant and functional for the identified goals of data storage and research. PMID:22518364

  11. A spatiotemporal data model for incorporating time in geographic information systems (GEN-STGIS)

    NASA Astrophysics Data System (ADS)

    Narciso, Flor Eugenia

    Temporal Geographic Information Systems (TGIS) is a new technology, which is being developed to work with Geographic Information Systems (GIS) that deal with geographic phenomena that change over time. The capabilities of TGIS depend on the underlying data model. However, a literature review of current spatiotemporal GIS data models has shown that they are not adequate for managing time when representing temporal data. In addition, the majority of these data models have been designed to support the requirements of specific-purpose applications. In an effort to resolve this problem, the related literature has been explored. A comparative investigation of the current spatiotemporal GIS data models has been made to identify their characteristics, advantages and disadvantages, similarities and differences, and to determine why they do not work adequately. A new object-oriented General-purpose Spatiotemporal GIS (GEN-STGIS) data model is proposed here. This model provides better representation, storage and management of data related to geographic phenomena that change over time and overcomes some of the problems detected in the reviewed data models. The proposed data model has four key benefits. First, it provides the capabilities of a standard vector-based GIS embedded in the 2-D Euclidean space. Second, it includes the two temporal dimensions, valid time and transaction time, supported by temporal databases. Third, it inherits, from the object oriented approach, the flexibility, modularity and ability to handle the complexities introduced by spatial and temporal dimensions. Fourth, it improves the geographic query capabilities of current TGIS with the introduction of the concept of bounding box while providing temporal and spatiotemporal query capabilities. The data model is then evaluated in order to assess its strengths and weaknesses as a spatiotemporal GIS data model, and to determine how well the model satisfies the requirements imposed by TGIS applications. The practicality of the data model is demonstrated by the creation of a TGIS example and the partial implementation of the model using the POET Java software for developing the object-oriented database.

  12. Graph Databases for Large-Scale Healthcare Systems: A Framework for Efficient Data Management and Data Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Yubin; Shankar, Mallikarjun; Park, Byung H.

    Designing a database system for both efficient data management and data services has been one of the enduring challenges in the healthcare domain. In many healthcare systems, data services and data management are often viewed as two orthogonal tasks; data services refer to retrieval and analytic queries such as search, joins, statistical data extraction, and simple data mining algorithms, while data management refers to building error-tolerant and non-redundant database systems. The gap between service and management has resulted in rigid database systems and schemas that do not support effective analytics. We compose a rich graph structure from an abstracted healthcare RDBMS to illustrate how we can fill this gap in practice. We show how a healthcare graph can be automatically constructed from a normalized relational database using the proposed 3NF Equivalent Graph (3EG) transformation. We discuss a set of real world graph queries such as finding self-referrals, shared providers, and collaborative filtering, and evaluate their performance over a relational database and its 3EG-transformed graph. Experimental results show that the graph representation serves as multiple de-normalized tables, thus reducing complexity in a database and enhancing data accessibility of users. Based on this finding, we propose an ensemble framework of databases for healthcare applications.
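
    The row-to-node, foreign-key-to-edge intuition behind such a transformation can be sketched in a few lines; the schema and the visit linking table below are invented, and the real 3EG construction is richer:

```python
# Sketch of turning normalized relational rows into a graph in the spirit of
# the 3NF Equivalent Graph idea: rows become typed nodes, foreign keys become
# edges. Schema and the 'visits' linking table are illustrative.
patients = {1: "patient A"}
providers = {10: "provider X", 11: "provider Y"}
visits = [(1, 10), (1, 11)]        # (patient_id, provider_id) foreign-key pairs

graph = {}                          # adjacency sets over typed node ids
def add_edge(u, v):
    graph.setdefault(u, set()).add(v)
    graph.setdefault(v, set()).add(u)

for pid, prid in visits:
    add_edge(("patient", pid), ("provider", prid))

# A patient's providers are direct neighbors; "shared providers" queries
# become two-hop neighborhood walks instead of self-joins over the visit table.
print(sorted(graph[("patient", 1)]))
```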

  13. Providing the Persistent Data Storage in a Software Engineering Environment Using Java/CORBA and a DBMS

    NASA Technical Reports Server (NTRS)

    Dhaliwal, Swarn S.

    1997-01-01

    An investigation was undertaken to build the software foundation for the WHERE (Web-based Hyper-text Environment for Requirements Engineering) project. The TCM (Toolkit for Conceptual Modeling) was chosen as the foundation software for the WHERE project, which aims to provide an environment for facilitating collaboration among geographically distributed people involved in the Requirements Engineering process. The TCM is a collection of diagram and table editors and has been implemented in the C++ programming language. The C++ implementation of the TCM was translated into Java in order to allow the editors to be used for building various functionality of the WHERE project; the WHERE project intends to use the Web as its communication backbone. One of the limitations of the translated software (TcmJava), which militated against its use in the WHERE project, was the persistent data management mechanism it inherited from the original TCM, which was designed to be used in standalone applications. Before TcmJava editors could be used as a part of the multi-user, geographically distributed applications of the WHERE project, a persistent storage mechanism had to be built which would allow data communication over the Internet, using the capabilities of the Web. An approach involving features of Java, CORBA (Common Object Request Broker Architecture), the Web, a middle-ware (Java Relational Binding (JRB)), and a database server was used to build the persistent data management infrastructure for the WHERE project. The developed infrastructure allows a TcmJava editor to be downloaded and run from a network host by using a JDK 1.1 (Java Developer's Kit) compatible Web-browser. The aforementioned editor establishes connection with a server by using the ORB (Object Request Broker) software and stores/retrieves data in/from the server. The server consists of a CORBA object or objects depending upon whether the data is to be made persistent on a single server or multiple servers. The CORBA object providing the persistent data server is implemented using the Java programming language. It uses the JRB to store/retrieve data in/from a relational database server. The persistent data management system provides transaction and user management facilities which allow multi-user, distributed access to the stored data in a secure manner.

  14. Effective spatial database support for acquiring spatial information from remote sensing images

    NASA Astrophysics Data System (ADS)

    Jin, Peiquan; Wan, Shouhong; Yue, Lihua

    2009-12-01

    In this paper, a new approach to maintaining spatial information acquired from remote-sensing images is presented, based on an object-relational DBMS. Under this approach, target detection and recognition results are stored in an ORDBMS-based spatial database system where they can be further accessed, and users can query the spatial information through the standard SQL interface. This approach differs from the traditional ArcSDE-based method, because the spatial information management module is totally integrated into the DBMS and becomes one of its core modules. We focus on three issues, namely the general framework of the ORDBMS-based spatial database system, the definitions of the add-in spatial data types and operators, and the process of developing a spatial DataBlade on Informix. The results show that ORDBMS-based spatial database support for image-based target detection and recognition is practical to implement.
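
    In an ORDBMS, the add-in spatial types and operators live inside the server (here, an Informix DataBlade) and are invoked from ordinary SQL. As a rough stand-in, the sketch below registers a user-defined operator in SQLite; the table, columns, and operator name are hypothetical, not the paper's definitions.

```python
# A toy "add-in spatial operator": a user-defined function callable from SQL.
import sqlite3

def bbox_contains(xmin, ymin, xmax, ymax, x, y):
    """Does the target's bounding box contain the point (x, y)?"""
    return int(xmin <= x <= xmax and ymin <= y <= ymax)

con = sqlite3.connect(":memory:")
con.create_function("bbox_contains", 6, bbox_contains)
con.executescript("""
CREATE TABLE target(id INTEGER PRIMARY KEY, kind TEXT,
                    xmin REAL, ymin REAL, xmax REAL, ymax REAL);
INSERT INTO target VALUES (1,'airfield',10,10,20,20),(2,'bridge',30,5,32,9);
""")
# Standard SQL plus the add-in operator, as the abstract describes.
rows = con.execute(
    "SELECT id, kind FROM target WHERE bbox_contains(xmin,ymin,xmax,ymax,?,?)",
    (15.0, 12.5)).fetchall()
print(rows)  # -> [(1, 'airfield')]
```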

  15. Functions and Relations: Some Applications from Database Management for the Teaching of Classroom Mathematics.

    ERIC Educational Resources Information Center

    Hauge, Sharon K.

    While functions and relations are important concepts in the teaching of mathematics, research suggests that many students lack an understanding and appreciation of these concepts. The present paper discusses an approach for teaching functions and relations that draws on the use of illustrations from database management. This approach has the…

  16. Relational Database Design in Information Science Education.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1985-01-01

    Reports on database management system (dbms) applications designed by library school students for university community at University of Iowa. Three dbms design issues are examined: synthesis of relations, analysis of relations (normalization procedure), and data dictionary usage. Database planning prior to automation using data dictionary approach…

  17. Graphical tool for navigation within the semantic network of the UMLS metathesaurus on a locally installed database.

    PubMed

    Frankewitsch, T; Prokosch, H U

    2000-01-01

    Knowledge in the environment of information technologies is bound to structured vocabularies. Medical data dictionaries are necessary for uniquely describing findings such as diagnoses, procedures, or functions. We therefore decided to locally install a version of the Unified Medical Language System (UMLS) of the U.S. National Library of Medicine as a repository for defining entries of a medical multimedia database. Because of the requirement to extend the vocabulary with new concepts and with relations between existing concepts, a graphical tool for appending new items to the database has been developed. Although the database is an instance of a semantic network, focusing on a single entry makes it possible to reduce the net to a tree within that detail. Based on graph theory, nodes of concepts and nodes of knowledge are defined. The UMLS additionally offers the specification of sub-relations, which can be represented as well. With this view it is possible to manage these 1:n relations in a simple tree view. Against this background, an Explorer-like graphical user interface has been realised to add new concepts and to define new relationships between them and existing entries, adapting the UMLS for specific purposes such as describing medical multimedia objects.

  18. Serials Management by Microcomputer: The Potential of DBMS.

    ERIC Educational Resources Information Center

    Vogel, J. Thomas; Burns, Lynn W.

    1984-01-01

    Describes serials management at Philadelphia College of Textiles and Science library via a microcomputer, a file manager called PFS, and a relational database management system called dBase II. Check-in procedures, programing with dBase II, "static" and "active" databases, and claim procedures are discussed. Check-in forms are…

  19. Continuity of care to optimize chronic disease management in the community setting: an evidence-based analysis.

    PubMed

    2013-01-01

    This evidence-based analysis reviews relational and management continuity of care. Relational continuity refers to the duration and quality of the relationship between the care provider and the patient. Management continuity ensures that patients receive coherent, complementary, and timely care. There are 4 components of continuity of care: duration, density, dispersion, and sequence. The objective of this evidence-based analysis was to determine if continuity of care is associated with decreased health resource utilization, improved patient outcomes, and patient satisfaction. MEDLINE, EMBASE, CINAHL, the Cochrane Library, and the Centre for Reviews and Dissemination database were searched for studies on continuity of care and chronic disease published from January 2002 until December 2011. Systematic reviews, randomized controlled trials, and observational studies were eligible if they assessed continuity of care in adults and reported health resource utilization, patient outcomes, or patient satisfaction. Eight systematic reviews and 13 observational studies were identified. The reviews concluded that there is an association between continuity of care and outcomes; however, the literature base is weak. The observational studies found that higher continuity of care was frequently associated with fewer hospitalizations and emergency department visits. Three systematic reviews reported that higher continuity of care is associated with improved patient satisfaction, especially among patients with chronic conditions. Most of the studies were retrospective cross-sectional studies of large administrative databases. The databases do not capture information on trust and confidence in the provider, which is a critical component of relational continuity of care. The definitions for the selection of patients from the databases varied across studies. There is low-quality evidence that higher continuity of care is associated with decreased health service utilization. There is insufficient evidence on the relationship of continuity of care with disease-specific outcomes. There is an association between high continuity of care and patient satisfaction, particularly among patients with chronic diseases.

  20. Catalogue of HI PArameters (CHIPA)

    NASA Astrophysics Data System (ADS)

    Saponara, J.; Benaglia, P.; Koribalski, B.; Andruchow, I.

    2015-08-01

    The Catalogue of HI PArameters of galaxies (CHIPA) is the natural continuation of the compilation by M.C. Martin in 1998. CHIPA provides the most important parameters of nearby galaxies derived from observations of the neutral hydrogen line. The catalogue contains information on 1400 galaxies across the sky and of different morphological types. Parameters such as the optical diameter of the galaxy, the blue magnitude, the distance, the morphological type, and the HI extension are listed, among others. Maps of the HI distribution, velocity, and velocity dispersion can also be displayed in some cases. The main objective of this catalogue is to facilitate bibliographic queries through a database accessible over the internet, which will be available in 2015 (the website is under construction). The database was built using MySQL, an open-source relational database management system queried via SQL (Structured Query Language), while the website was built with HTML (Hypertext Markup Language) and PHP (Hypertext Preprocessor).

  1. Database technology and the management of multimedia data in the Mirror project

    NASA Astrophysics Data System (ADS)

    de Vries, Arjen P.; Blanken, H. M.

    1998-10-01

    Multimedia digital libraries require an open distributed architecture instead of a monolithic database system. In the Mirror project, we use the Monet extensible database kernel to manage different representations of multimedia objects. To maintain independence between content, meta-data, and the creation of meta-data, we allow distribution of data and operations using CORBA. This open architecture introduces new problems for data access. From an end user's perspective, the problem is how to search the available representations to fulfill an actual information need; the conceptual gap between human perceptual processes and the meta-data is too large. From a system's perspective, several representations of the data may semantically overlap or be irrelevant. We address these problems with an iterative query process and active user participation through relevance feedback. A retrieval model based on inference networks assists the user with query formulation. The integration of this model into the database design has two advantages. First, the user can query both the logical and the content structure of multimedia objects. Second, the use of different data models in the logical and the physical database design provides data independence and allows algebraic query optimization. We illustrate query processing with a music retrieval application.

  2. PACSY, a relational database management system for protein structure and chemical shift analysis

    PubMed Central

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636
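
    A minimal sketch of the kind of cross-source query PACSY enables follows, with hypothetical table and column names (the real PACSY schema runs on an RDBMS server such as MySQL or PostgreSQL; SQLite stands in here so the example is self-contained).

```python
# Combining chemical shifts with structural parameters via shared key ids,
# in the spirit of PACSY's linked relational tables (schema is invented).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE coord(key_id INTEGER PRIMARY KEY, atom TEXT, phi REAL, psi REAL);
CREATE TABLE shift(key_id INTEGER REFERENCES coord(key_id), cs REAL);
INSERT INTO coord VALUES (1,'CA',-60.0,-45.0),(2,'CA',-120.0,130.0);
INSERT INTO shift VALUES (1,58.2),(2,54.1);
""")
# One query spans both sources: torsion angles joined to chemical shifts.
for row in con.execute("""
    SELECT c.atom, c.phi, c.psi, s.cs
    FROM coord c JOIN shift s ON s.key_id = c.key_id
    WHERE c.phi BETWEEN -90 AND -30"""):
    print(row)  # -> ('CA', -60.0, -45.0, 58.2)
```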

  3. Assessing organizational performance in intensive care units: a French experience.

    PubMed

    Minvielle, Etienne; Aegerter, Philippe; Dervaux, Benoît; Boumendil, Ariane; Retbi, Aurélia; Jars-Guincestre, Marie Claude; Guidet, Bertrand

    2008-06-01

    The objective of the study was to assess and to explain variation in organizational performance in intensive care units (ICUs). This was a prospective multicenter study. The study involved 26 ICUs located in the Paris area, France, participating in a regional database. Data were collected through answers of 1000 ICU personnel to the Culture, Organization, and Management in Intensive Care questionnaire and from the database. Organizational performance was assessed through a composite score related to 5 dimensions: coordination and adaptation to uncertainty, communication, conflict management, organizational change and organizational learning, and skills developed in relationships with patients and their families. Statistical comparisons between ICUs were performed by analysis of variance with a Scheffé pairwise procedure. A multilevel regression model was used to analyze both individual and structural variables explaining differences in ICUs' organizational performance. The organizational performance score differed among ICUs. Some cultural values were negatively correlated with a high level of organizational performance, suggesting improvement potential. Several individual and structural factors were also related to the quality of ICU organization, including absence of burnout, older staff, satisfaction at work, and high workload (P < .02 for each). A benchmarking approach can be used by ICU managers to assess the organizational performance of their ICU based on a validated questionnaire. Differences are mainly explained by cultural values and individual well-being factors, introducing new requirements for managing human resources in ICUs.

  4. Methods and apparatus for constructing and implementing a universal extension module for processing objects in a database

    NASA Technical Reports Server (NTRS)

    Li, Chung-Sheng (Inventor); Smith, John R. (Inventor); Chang, Yuan-Chi (Inventor); Jhingran, Anant D. (Inventor); Padmanabhan, Sriram K. (Inventor); Hsiao, Hui-I (Inventor); Choy, David Mun-Hien (Inventor); Lin, Jy-Jine James (Inventor); Fuh, Gene Y. C. (Inventor); Williams, Robin (Inventor)

    2004-01-01

    Methods and apparatus for providing a multi-tier object-relational database architecture are disclosed. In one illustrative embodiment of the present invention, a multi-tier database architecture comprises an object-relational database engine as a top tier, one or more domain-specific extension modules as a bottom tier, and one or more universal extension modules as a middle tier. The individual extension modules of the bottom tier operationally connect with the one or more universal extension modules which, themselves, operationally connect with the database engine. The domain-specific extension modules preferably provide such functions as search, index, and retrieval services of images, video, audio, time series, web pages, text, XML, spatial data, etc. The domain-specific extension modules may include one or more IBM DB2 extenders, Oracle data cartridges and/or Informix datablades, although other domain-specific extension modules may be used.

  5. Soil Organic Carbon for Global Benefits - assessing potential SOC increase under SLM technologies worldwide and evaluating tradeoffs and gains of upscaling SLM technologies

    NASA Astrophysics Data System (ADS)

    Wolfgramm, Bettina; Hurni, Hans; Liniger, Hanspeter; Ruppen, Sebastian; Milne, Eleanor; Bader, Hans-Peter; Scheidegger, Ruth; Amare, Tadele; Yitaferu, Birru; Nazarmavloev, Farrukh; Conder, Malgorzata; Ebneter, Laura; Qadamov, Aslam; Shokirov, Qobiljon; Hergarten, Christian; Schwilch, Gudrun

    2013-04-01

    There is a fundamental mutual interest between enhancing soil organic carbon (SOC) in the world's soils and the objectives of the major global environmental conventions (UNFCCC, UNCBD, UNCCD). While there is evidence at the case study level that sustainable land management (SLM) technologies increase SOC stocks and SOC-related benefits, there is no quantitative data available on the potential for increasing SOC benefits from different SLM technologies, especially from case studies in developing countries, and a clear understanding of the trade-offs related to SLM up-scaling is missing. This study aims at assessing the potential increase of SOC under SLM technologies worldwide and at evaluating tradeoffs and gains in up-scaling SLM for case studies in Tajikistan, Ethiopia, and Switzerland. It makes use of the SLM technologies documented in the online database of the World Overview of Conservation Approaches and Technologies (WOCAT). The study consists of three components: 1) identifying SOC benefits contributing to the major global environmental issues for SLM technologies worldwide as documented in the WOCAT global database; 2) validating SOC storage potentials and SOC benefit predictions for SLM technologies from the WOCAT database against results from existing comparative case studies at the plot level, using soil spectral libraries and standardized documentation of ecosystem services from the WOCAT database; 3) understanding trade-offs and win-win scenarios of up-scaling SLM technologies from the plot to the household and landscape level using material flow analysis. This study builds on the premise that the most promising way to increase benefits from land management is to consider already existing sustainable strategies. Such SLM technologies, documented from all over the world, are accessible in a standardized way in the WOCAT online database. The study thus evaluates SLM technologies from the WOCAT database by calculating the potential SOC storage increase and related benefits, comparing SOC estimates before and after establishment of the SLM technology. These results are validated using comparative case studies of plots with and without SLM technologies (existing SLM systems versus surrounding, degrading systems). In view of up-scaling SLM technologies, it is crucial to understand the tradeoffs and gains supporting or hindering their further spread. Systemic biomass management analysis using material flow analysis allows quantifying organic carbon flows and storages for different land management options at the household as well as the landscape level. The study shows results relevant for science, policy, and practice for accounting, monitoring, and evaluating SOC-related ecosystem services: a comprehensive methodology for SLM impact assessments allowing quantification of SOC storage and SOC-related benefits under different SLM technologies, and an improved understanding of up-scaling options for SLM technologies and of tradeoffs as well as win-win opportunities for biomass management, SOC content increase, and ecosystem services improvement at the plot and household level.

  6. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    PubMed

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, extending seqdb_demo to store sequence similarity search results, and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
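
    The "sequence library subset" idea can be illustrated with a toy version of such a database: restrict the library to sequences from likely homolog sources before searching, which shrinks the effective database size and thereby sharpens the statistics. The schema below is hypothetical, not the unit's actual seqdb_demo definition.

```python
# Generating a novel sequence library subset from a toy protein table.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE protein(acc TEXT PRIMARY KEY, taxon_id INTEGER, seq TEXT);
INSERT INTO protein VALUES ('P1',9606,'MKT'),('P2',10090,'MRA'),
                           ('P3',9606,'MVH');
""")
# Restrict the search library to human sequences (NCBI taxon 9606) and
# write it out as FASTA for the downstream similarity search.
human_subset = con.execute(
    "SELECT acc, seq FROM protein WHERE taxon_id = 9606").fetchall()
with open("human_subset.fasta", "w") as out:
    for acc, seq in human_subset:
        out.write(f">{acc}\n{seq}\n")
```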

  7. Managing Heterogeneous Information Systems through Discovery and Retrieval of Generic Concepts.

    ERIC Educational Resources Information Center

    Srinivasan, Uma; Ngu, Anne H. H.; Gedeon, Tom

    2000-01-01

    Introduces a conceptual integration approach to heterogeneous databases or information systems that exploits the similarity in metalevel information and performs metadata mining on database objects to discover a set of concepts that serve as a domain abstraction and provide a conceptual layer above existing legacy systems. Presents results of…

  8. A hierarchical spatial framework and database for the national river fish habitat condition assessment

    USGS Publications Warehouse

    Wang, L.; Infante, D.; Esselman, P.; Cooper, A.; Wu, D.; Taylor, W.; Beard, D.; Whelan, G.; Ostroff, A.

    2011-01-01

    Fisheries management programs, such as the National Fish Habitat Action Plan (NFHAP), urgently need a nationwide spatial framework and database for health assessment and policy development to protect and improve riverine systems. To meet this need, we developed a spatial framework and database using the National Hydrography Dataset Plus (1:100,000 scale; http://www.horizon-systems.com/nhdplus). This framework uses interconfluence river reaches and their local and network catchments as fundamental spatial river units and a series of ecological and political spatial descriptors as hierarchy structures to allow users to extract or analyze information at spatial scales that they define. This database consists of variables describing channel characteristics, network position/connectivity, climate, elevation, gradient, and size. It contains a series of catchment natural and human-induced factors that are known to influence river characteristics. Our framework and database assembles all river reaches and their descriptors in one place for the first time for the conterminous United States. This framework and database provides users with the capability of adding data, conducting analyses, developing management scenarios and regulation, and tracking management progress at a variety of spatial scales. This database provides the essential data needs for achieving the objectives of NFHAP and other management programs. The downloadable beta version database is available at http://ec2-184-73-40-15.compute-1.amazonaws.com/nfhap/main/.

  9. Wet weather highway accident analysis and skid resistance data management system (volume I).

    DOT National Transportation Integrated Search

    1992-06-01

    The objectives and scope of this research are to establish an effective methodology for wet weather accident analysis and to develop a database management system to facilitate information processing and storage for the accident analysis process, skid...

  10. A structured interface to the object-oriented genomics unified schema for XML-formatted data.

    PubMed

    Clark, Terry; Jurek, Josef; Kettler, Gregory; Preuss, Daphne

    2005-01-01

    Data management systems are fast becoming required components in many biology laboratories as the role of computer-based information grows. Although the need for data management systems is on the rise, their inherent complexities can deter the full and routine use of their computational capabilities. The significant undertaking to implement a capable production system can be reduced in part by adapting an established data management system. In such a way, we are leveraging the Genomics Unified Schema (GUS) developed at the Computational Biology and Informatics Laboratory at the University of Pennsylvania as a foundation for managing and analysing DNA sequence data in centromere research projects around Arabidopsis thaliana and related species. Because GUS provides a core schema that includes support for genome sequences, mRNA and its expression, and annotated chromosomes, it is ideal for synthesising a variety of parameters to analyse these repetitive and highly dynamic portions of the genome. Despite this, production-strength data management frameworks are complex, requiring dedicated efforts to adapt and maintain. The work reported in this article addresses one component of such an effort, namely the pivotal task of marshalling data from various sources into GUS. In order to harness GUS for our project, and motivated by efficiency needs, we developed a structured framework for transferring data into GUS from outside sources. This technology is embodied in a GUS object-layer processor, XMLGUS. XMLGUS facilitates incorporating data into GUS by (i) formulating an XML interface that includes relational database key constraint definitions, (ii) regularising traversal through that XML, (iii) realising automatic processing of the XML with database key constraints and (iv) allowing for special processing of input data within the framework for automated processing. The application of XMLGUS to production pipeline processing for a sequencing project and inputting the Arabidopsis genome into GUS is discussed. XMLGUS is available from the Flora website (http://flora.ittc.ku.edu/).
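
    The core of the XMLGUS approach, mapping XML onto relational tables while honoring key constraints, can be sketched as follows; the XML layout and table names here are invented for illustration and are not the actual XMLGUS interface.

```python
# Traversing XML in an order that mirrors foreign-key dependencies, so
# parents are inserted before the children that reference them.
import sqlite3
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<sequence name="chr1_fragment">
  <feature type="centromere" fstart="100" fend="900"/>
  <feature type="repeat" fstart="150" fend="300"/>
</sequence>""")

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE sequence(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE feature(seq_id INTEGER REFERENCES sequence(id),
                     type TEXT, fstart INTEGER, fend INTEGER);
""")
cur = con.execute("INSERT INTO sequence(name) VALUES (?)", (doc.get("name"),))
seq_id = cur.lastrowid  # the key the child rows must reference
for f in doc.findall("feature"):
    con.execute("INSERT INTO feature VALUES (?,?,?,?)",
                (seq_id, f.get("type"), int(f.get("fstart")), int(f.get("fend"))))
print(con.execute("SELECT COUNT(*) FROM feature").fetchone())  # -> (2,)
```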

  11. Microcomputer-Based Access to Machine-Readable Numeric Databases.

    ERIC Educational Resources Information Center

    Wenzel, Patrick

    1988-01-01

    Describes the use of microcomputers and relational database management systems to improve access to numeric databases by the Data and Program Library Service at the University of Wisconsin. The internal records management system, in-house reference tools, and plans to extend these tools to the entire campus are discussed. (3 references) (CLB)

  12. Database Management System

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc., a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  13. SSME environment database development

    NASA Technical Reports Server (NTRS)

    Reardon, John

    1987-01-01

    The internal environment of the Space Shuttle Main Engine (SSME) is being determined from hot firings of the prototype engines and from model tests using either air or water as the test fluid. The objectives are to develop a database system to facilitate management and analysis of test measurements and results, to enter available data into the database, and to analyze available data to establish conventions and procedures to provide consistency in data normalization and configuration geometry references.

  14. The Modular Modeling System (MMS): A modeling framework for water- and environmental-resources management

    USGS Publications Warehouse

    Leavesley, G.H.; Markstrom, S.L.; Viger, R.J.

    2004-01-01

    The interdisciplinary nature and increasing complexity of water- and environmental-resource problems require the use of modeling approaches that can incorporate knowledge from a broad range of scientific disciplines. The large number of distributed hydrological and ecosystem models currently available are composed of a variety of different conceptualizations of the associated processes they simulate. Assessment of the capabilities of these distributed models requires evaluation of the conceptualizations of the individual processes, and the identification of which conceptualizations are most appropriate for various combinations of criteria, such as problem objectives, data constraints, and spatial and temporal scales of application. With this knowledge, "optimal" models for specific sets of criteria can be created and applied. The U.S. Geological Survey (USGS) Modular Modeling System (MMS) is an integrated system of computer software that has been developed to provide these model development and application capabilities. MMS supports the integration of models and tools at a variety of levels of modular design. These include individual process models, tightly coupled models, loosely coupled models, and fully-integrated decision support systems. A variety of visualization and statistical tools are also provided. MMS has been coupled with the Bureau of Reclamation (BOR) object-oriented reservoir and river-system modeling framework, RiverWare, under a joint USGS-BOR program called the Watershed and River System Management Program. MMS and RiverWare are linked using a shared relational database. The resulting database-centered decision support system provides tools for evaluating and applying optimal resource-allocation and management strategies to complex, operational decisions on multipurpose reservoir systems and watersheds. Management issues being addressed include efficiency of water-resources management, environmental concerns such as meeting flow needs for endangered species, and optimizing operations within the constraints of multiple objectives such as power generation, irrigation, and water conservation. This decision support system approach is being developed, tested, and implemented in the Gunnison, Yakima, San Juan, Rio Grande, and Truckee River basins of the western United States. Copyright ASCE 2004.

  15. Wet weather highway accident analysis and skid resistance data management system (volume II : user's manual).

    DOT National Transportation Integrated Search

    1992-06-01

    The objectives and scope of this research are to establish an effective methodology for wet weather accident analysis and to develop a database management system to facilitate information processing and storage for the accident analysis process, skid...

  16. Integrating Query of Relational and Textual Data in Clinical Databases: A Case Study

    PubMed Central

    Fisk, John M.; Mutalik, Pradeep; Levin, Forrest W.; Erdos, Joseph; Taylor, Caroline; Nadkarni, Prakash

    2003-01-01

    Objectives: The authors designed and implemented a clinical data mart composed of an integrated information retrieval (IR) and relational database management system (RDBMS). Design: Using commodity software, which supports interactive, attribute-centric text and relational searches, the mart houses 2.8 million documents that span a five-year period and supports basic IR features such as Boolean searches, stemming, and proximity and fuzzy searching. Measurements: Results are relevance-ranked using either “total documents per patient” or “report type weighting.” Results: Non-curated medical text has a significant degree of malformation with respect to spelling and punctuation, which creates difficulties for text indexing and searching. Presently, the IR facilities of RDBMS packages lack the features necessary to handle such malformed text adequately. Conclusion: A robust IR+RDBMS system can be developed, but it requires integrating RDBMSs with third-party IR software. RDBMS vendors need to make their IR offerings more accessible to non-programmers. PMID:12509355
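
    The architecture can be illustrated with a toy version: an inverted index built outside the RDBMS (standing in for the third-party IR software the authors recommend) combined with relational filtering and the "total documents per patient" ranking. The schema is hypothetical.

```python
# Coupling keyword (IR) search with attribute-centric (relational) data.
import sqlite3
from collections import Counter, defaultdict

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE report(id INTEGER PRIMARY KEY, patient_id INTEGER,
                    rtype TEXT, body TEXT);
INSERT INTO report VALUES
 (1,100,'radiology','mass in left lobe'),
 (2,100,'pathology','no mass identified'),
 (3,200,'radiology','clear lung fields');
""")
# Inverted index built outside the RDBMS, third-party-IR style.
index = defaultdict(set)
for rid, body in con.execute("SELECT id, body FROM report"):
    for token in body.lower().split():
        index[token].add(rid)

hits = index["mass"]
# Relevance ranking by total matching documents per patient.
rank = Counter(pid for rid, pid in con.execute(
    "SELECT id, patient_id FROM report") if rid in hits)
print(rank.most_common())  # -> [(100, 2)]
```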

  17. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2013-06-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, relational databases, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system. Copyright 2013 by John Wiley & Sons, Inc.

  18. Object-oriented analysis and design of an ECG storage and retrieval system integrated with an HIS.

    PubMed

    Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S

    1996-03-01

    For a hospital information system, object-oriented methodology plays an increasingly important role, especially for the management of digitized data, e.g., the electrocardiogram, electroencephalogram, electromyogram, spirogram, X-ray, CT and histopathological images, which are not yet computerized in most hospitals. As a first step in an object-oriented approach to hospital information management and storing medical data in an object-oriented database, we connected electrocardiographs to a hospital network and established the integration of ECG storage and retrieval systems with a hospital information system. In this paper, the object-oriented analysis and design of the ECG storage and retrieval systems is reported.

  19. Reliability database development for use with an object-oriented fault tree evaluation program

    NASA Technical Reports Server (NTRS)

    Heger, A. Sharif; Harrington, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann

    1989-01-01

    A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed or are under development to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.

  20. Selecting a Persistent Data Support Environment for Object-Oriented Applications

    DTIC Science & Technology

    1998-03-01

    key features of most object DBMS products is contained in the Needs Assessment for Objects from Barry and Associates. The developer should...data structure and behavior in a self-contained module enhances maintainability of the system and promotes reuse of modules for similar domains...considered together, represent a survey of commercial object-oriented database management systems. These references contain detailed information needed

  1. Abstraction of the Relational Model from a Department of Veterans Affairs DHCP Database: Bridging Theory and Working Application

    PubMed Central

    Levy, C.; Beauchamp, C.

    1996-01-01

    This poster describes the methods used and working prototype that was developed from an abstraction of the relational model from the VA's hierarchical DHCP database. Overlaying the relational model on DHCP permits multiple user views of the physical data structure, enhances access to the database by providing a link to commercial (SQL based) software, and supports a conceptual managed care data model based on primary and longitudinal patient care. The goal of this work was to create a relational abstraction of the existing hierarchical database; to construct, using SQL data definition language, user views of the database which reflect the clinical conceptual view of DHCP, and to allow the user to work directly with the logical view of the data using GUI based commercial software of their choosing. The workstation is intended to serve as a platform from which a managed care information model could be implemented and evaluated.
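
    The overlay idea can be sketched as an SQL view defined over hierarchical data that has been flattened into path/value rows; the VA's actual hierarchical DHCP structures and the prototype's DDL are not shown, so the schema below is purely illustrative.

```python
# A relational view over flattened hierarchical records, queryable by any
# SQL-based commercial tool, as the abstract describes.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE node(patient_id INTEGER, path TEXT, value TEXT);
INSERT INTO node VALUES (7,'demographics/name','DOE,JOHN'),
                        (7,'visit/1/date','1996-03-02'),
                        (7,'visit/1/clinic','PRIMARY CARE');
-- A clinician-oriented user view of the underlying physical structure.
CREATE VIEW patient_visit AS
  SELECT d.patient_id,
         d.value AS name,
         v.value AS visit_date
  FROM node d JOIN node v ON v.patient_id = d.patient_id
  WHERE d.path = 'demographics/name' AND v.path = 'visit/1/date';
""")
print(con.execute("SELECT * FROM patient_visit").fetchall())
# -> [(7, 'DOE,JOHN', '1996-03-02')]
```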

  2. Sting_RDB: a relational database of structural parameters for protein analysis with support for data warehousing and data mining.

    PubMed

    Oliveira, S R M; Almeida, G V; Souza, K R R; Rodrigues, D N; Kuser-Falcão, P R; Yamagishi, M E B; Santos, E H; Vieira, F D; Jardine, J G; Neshich, G

    2007-10-05

    An effective strategy for managing protein databases is to provide mechanisms to transform raw data into consistent, accurate and reliable information. Such mechanisms will greatly reduce operational inefficiencies and improve one's ability to better handle scientific objectives and interpret the research results. To achieve this challenging goal for the STING project, we introduce Sting_RDB, a relational database of structural parameters for protein analysis with support for data warehousing and data mining. In this article, we highlight the main features of Sting_RDB and show how a user can explore it for efficient and biologically relevant queries. Considering its importance for molecular biologists, effort has been made to advance Sting_RDB toward data quality assessment. To the best of our knowledge, Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator. This paper differs from our previous study in many aspects. First, we introduce Sting_RDB, a relational database with mechanisms for efficient and relevant queries using SQL. Sting_RDB evolved from the earlier, text (flat file)-based database, in which data consistency and integrity were not guaranteed. Second, we provide support for data warehousing and mining. Third, a data quality indicator was introduced. Finally, and probably most importantly, complex queries that could not be posed on a text-based database are now easily implemented. Further details are accessible at the Sting_RDB demo web page: http://www.cbi.cnptia.embrapa.br/StingRDB.

  3. Parent and Family Processes Related to ADHD Management in Ethnically Diverse Youth

    PubMed Central

    Paidipati, Cynthia P.; Brawner, Bridgette; Eiraldi, Ricardo; Deatrick, Janet A.

    2017-01-01

    BACKGROUND Previous research has shown major disparities in attention deficit hyperactivity disorder (ADHD) for diverse youth across America. We do not fully understand, however, how parent and family processes are related to the identification, care-seeking approaches, treatment preferences, and engagement with care systems and services for youth with ADHD. OBJECTIVES The present study aimed to explore parent and family processes related to the management of ADHD in racially and ethnically diverse youth. DESIGN This integrative review was structured with the methodology proposed by Whittemore and Knafl. RESULTS Three major electronic databases yielded a final sample of 32 articles (24 quantitative, 6 qualitative, and 2 mixed methods). Nine themes emerged within three overarching meta-themes. CONCLUSIONS Understanding the unique perspectives of families from diverse backgrounds is essential for clinicians, researchers, and policymakers, who are dedicated to understanding racial and ethnic perspectives and developing ecologically appropriate and family-based interventions for youth with ADHD. PMID:28076687

  4. Development of Integrated Information System for Travel Bureau Company

    NASA Astrophysics Data System (ADS)

    Karma, I. G. M.; Susanti, J.

    2018-01-01

    Because of delayed or incomplete information, decision-making by the management of a travel bureau company, especially by managers, is often less effective than it could be. Although operations are already computer-assisted, each existing application handles only one particular activity; the applications are not integrated. This research is intended to produce an integrated information system that handles the overall operational activities of the company. By applying the object-oriented system development approach, the system is built with the Visual Basic .NET programming language and the MySQL database package. The result is a system that consists of 4 (four) separate program packages: a Reservation System, an AR System, an AP System, and an Accounting System. Based on the output, we can conclude that this system is able to produce integrated information on reservations, operations, and finances, providing up-to-date information to support operational activities and decision-making by the related parties.

  5. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2002-08-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, and relational databases, as well as ACeDB. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system.

  6. Dragon pulse information management system (DPIMS): A unique model-based approach to implementing domain agnostic system of systems and behaviors

    NASA Astrophysics Data System (ADS)

    Anderson, Thomas S.

    2016-05-01

    The Global Information Network Architecture is an information technology based on Vector Relational Data Modeling, a unique computational paradigm, DoD network-certified by the US Army as the Dragon Pulse Information Management System. It is a network-available environment for modeling models: models are configured using domain-relevant semantics, use network-available systems, sensors, databases, and services as loosely coupled component objects, and are executable applications. Solutions are based on mission tactics, techniques, and procedures and on subject-matter input. Three recent Army use cases are discussed: a) an ISR SoS; b) modeling and simulation behavior validation; c) a networked digital library with behaviors.

  7. High-performance Negative Database for Massive Data Management System of The Mingantu Spectral Radioheliograph

    NASA Astrophysics Data System (ADS)

    Shi, Congming; Wang, Feng; Deng, Hui; Liu, Yingbo; Liu, Cuiyin; Wei, Shoulin

    2017-08-01

    As a dedicated synthetic aperture radio interferometer in China, the MingantU SpEctral Radioheliograph (MUSER), initially known as the Chinese Spectral RadioHeliograph (CSRH), has entered the stage of routine observation. More than 23 million data records per day need to be effectively managed to provide high-performance data query and retrieval for scientific data reduction. In light of the massive amounts of data generated by the MUSER, in this paper, a novel data management technique called the negative database (ND) is proposed and used to implement a data management system for the MUSER. Built on a key-value database, the ND technique makes full use of the complement set of observational data to derive the requisite information. Experimental results showed that the proposed ND can significantly reduce storage volume in comparison with a relational database management system (RDBMS). Even when considering the time needed to derive records that are absent, its overall performance, including querying and deriving the data of the ND, is comparable with that of an RDBMS. The ND technique effectively solves the problem of massive data storage for the MUSER and is a valuable reference for the massive data management required by next-generation telescopes.
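
    A toy sketch of the negative-database idea, under the assumption that only records known to be absent are stored in a key-value map and presence is derived from the complement; MUSER's real key design and scale handling are not shown.

```python
# Deriving record existence from the complement set instead of storing
# the (huge) positive set.
EXPECTED_PER_DAY = 23_000_000  # records the instrument should produce daily

# Key-value "ND": day -> set of absent record sequence numbers.
nd = {"2015-08-01": {17, 9_412_338}}

def record_exists(day: str, seqno: int) -> bool:
    """Derive existence: in range and not listed among the absences."""
    return seqno < EXPECTED_PER_DAY and seqno not in nd.get(day, set())

print(record_exists("2015-08-01", 17))  # -> False (listed as absent)
print(record_exists("2015-08-01", 42))  # -> True  (derived, never stored)
```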

  8. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
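
    The LINDA-style tuple space operations mentioned above (out to publish, rd to read, in to consume, with wildcards for matching) can be sketched in a few lines; this single-threaded, in-memory toy omits the database layer the described architecture builds on.

```python
# A minimal LINDA-style tuple space: None fields act as wildcards.
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Publish a tuple into the space."""
        self.tuples.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def rd(self, pattern):
        """Read (without removing) the first tuple matching the pattern."""
        return next((t for t in self.tuples if self._match(pattern, t)), None)

    def in_(self, pattern):
        """Read and remove a matching tuple."""
        t = self.rd(pattern)
        if t is not None:
            self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("power-bus", "B2", "overload"))   # one knowledge agent reports a fault
print(ts.in_(("power-bus", None, None)))  # another agent consumes the report
```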

  9. Kaizen newspaper

    NASA Technical Reports Server (NTRS)

    Shearer, Scott C. (Inventor); Proferes, John Nicholas (Inventor); Baker, Sr., Mitchell D. (Inventor); Reilly, Kenneth B. (Inventor); Tiwari, Vijai K. (Inventor)

    2013-01-01

    Systems, computer program products, and methods are disclosed for tracking an improvement event. An embodiment includes an event interface configured to receive a plurality of entries related to each of a plurality of improvement events. The plurality of entries includes a project identifier for the improvement event, a creation date, an objective, an action related to reaching the objective, and a first deadline related to the improvement event. A database interface is configured to store the plurality of entries in an event database.

  10. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
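
    Query feedback for selectivity estimation can be sketched as a running least-squares fit that is refined after each query execution, replacing static off-line statistics. The linear fit below is a minimal illustration under that assumption; the work also considers spline-based regression.

```python
# Refining a selectivity estimate from observed query results via a
# pure-Python linear least-squares fit.
class FeedbackEstimator:
    def __init__(self):
        self.samples = []  # (predicate value, observed selectivity)

    def feedback(self, x, selectivity):
        self.samples.append((x, selectivity))

    def estimate(self, x):
        n = len(self.samples)
        if n < 2:
            return 0.5  # uninformed default before enough feedback
        sx = sum(p for p, _ in self.samples)
        sy = sum(s for _, s in self.samples)
        sxx = sum(p * p for p, _ in self.samples)
        sxy = sum(p * s for p, s in self.samples)
        denom = n * sxx - sx * sx
        if denom == 0:
            return sy / n
        b = (n * sxy - sx * sy) / denom  # slope
        a = (sy - b * sx) / n            # intercept
        return min(1.0, max(0.0, a + b * x))

est = FeedbackEstimator()
est.feedback(10, 0.05)   # a "salary < 10k" query returned 5% of rows
est.feedback(50, 0.40)
print(est.estimate(30))  # -> 0.225, refined estimate for an unseen predicate
```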

  11. Task 21 - Development of Systems Engineering Applications for Decontamination and Decommissioning Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, T.A.

    1998-11-01

    The objectives of this task are to: Develop a model (paper) to estimate the cost and waste generation of cleanup within the Environmental Management (EM) complex; Identify technologies applicable to decontamination and decommissioning (D and D) operations within the EM complex; Develop a database of facility information as linked to project baseline summaries (PBSs). The above objectives are carried out through the following four subtasks: Subtask 1--D and D Model Development; Subtask 2--Technology List; Subtask 3--Facility Database; and Subtask 4--Incorporation into a User Model.

  12. Creating Your Own Database.

    ERIC Educational Resources Information Center

    Blair, John C., Jr.

    1982-01-01

    Outlines the important factors to be considered in selecting a database management system for use with a microcomputer and presents a series of guidelines for developing a database. General procedures, report generation, data manipulation, information storage, word processing, data entry, database indexes, and relational databases are among the…

  13. Improving Recall Using Database Management Systems: A Learning Strategy.

    ERIC Educational Resources Information Center

    Jonassen, David H.

    1986-01-01

    Describes the use of microcomputer database management systems to facilitate the instructional uses of learning strategies relating to information processing skills, especially recall. Two learning strategies, cross-classification matrixing and node acquisition and integration, are highlighted. (Author/LRW)

  14. Using Online Databases in Corporate Issues Management.

    ERIC Educational Resources Information Center

    Thomsen, Steven R.

    1995-01-01

    Finds that corporate public relations practitioners felt they were able, using online database and information services, to intercept issues earlier in the "issue cycle" and thus enable their organizations to develop more "proactionary" or "catalytic" issues management response strategies. (SR)

  15. [Strategies for sustainable management of commensal rodents. Definitions of control objectives at communal level].

    PubMed

    Plenge-Bönig, A; Schmolz, E

    2014-05-01

    The German Act on the Prevention and Control of Infectious Diseases in Man (Infektionsschutzgesetz, IfSG) provides a legal framework for activities and responsibilities concerning communal rodent control. However, actual governance of communal rodent control is relatively heterogeneous, as federal states (Bundesländer) have different or even no regulations for the prevention and management of commensal rodent infestations (e.g. brown rats, roof rats and house mice). Control targets and control requirements are rarely precisely defined and often do not go beyond general measures and objectives. Although relevant regulations provide information about agreed preventive measures against rodents, the concept of sustainability is not expressed as such. Centrally managed, database-supported municipal rodent control is a key factor for sustainability because it allows a systematic and analytical approach to identifying and reducing rodent populations. The definition of control objectives and their establishment in legal decrees is mandatory for the implementation of a sustainable management strategy for rodent populations at a local level. Systematic recording of rodent infestations through municipally operated monitoring provides the essential data foundation for targeted rodent management, which is already implemented in some German and European cities and nationwide in Denmark. Sustainable rodent management includes more targeted rodenticide application, which in the long term will lead to an overall reduction of rodenticide use. Thus, the benefits of sustainable rodent management will be a reduction of rodenticide exposure to the environment, prevention of resistance, and long-term economic savings.

  16. Alternatives to relational databases in precision medicine: Comparison of NoSQL approaches for big data storage using supercomputers

    NASA Astrophysics Data System (ADS)

    Velazquez, Enrique Israel

    Improvements in medical and genomic technologies have dramatically increased the production of electronic data over the last decade. As a result, data management is rapidly becoming a major determinant, and urgent challenge, for the development of Precision Medicine. Although successful data management is achievable using Relational Database Management Systems (RDBMS), exponential data growth is a significant contributor to failure scenarios. Growing amounts of data can also be observed in other sectors, such as economics and business, which, together with the previous facts, suggests that alternate database approaches (NoSQL) may soon be required for efficient storage and management of big databases. However, this hypothesis has been difficult to test in the Precision Medicine field since alternate database architectures are complex to assess and means to integrate heterogeneous electronic health records (EHR) with dynamic genomic data are not easily available. In this dissertation, we present a novel set of experiments for identifying NoSQL database approaches that enable effective data storage and management in Precision Medicine using patients' clinical and genomic information from the cancer genome atlas (TCGA). The first experiment draws on performance and scalability from biologically meaningful queries with differing complexity and database sizes. The second experiment measures performance and scalability in database updates without schema changes. The third experiment assesses performance and scalability in database updates with schema modifications due to dynamic data. We have identified two NoSQL approaches, based on Cassandra and Redis, which seem to be ideal database management systems for our precision medicine queries in terms of performance and scalability. We present NoSQL approaches and show how they can be used to manage clinical and genomic big data. Our research is relevant to public health since we are focusing on one of the main challenges to the development of Precision Medicine and, consequently, investigating a potential solution to the progressively increasing demands on health care.
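
    One reason schema changes favor key-value stores in this setting can be shown in a few lines: adding a new genomic attribute to one patient requires no ALTER TABLE or migration. The field names are hypothetical, and an in-memory dict stands in for Redis or Cassandra.

```python
# Schema-less, Redis-hash-style storage of patient records.
patients = {}  # key -> hash of fields

def hset(key, **fields):
    """Set or add fields on a record; unknown fields need no schema change."""
    patients.setdefault(key, {}).update(fields)

hset("patient:1", age=54, diagnosis="BRCA")
# New dynamic genomic data arrives: just another field, no migration step.
hset("patient:1", tp53_variant="R175H")
print(patients["patient:1"])
# -> {'age': 54, 'diagnosis': 'BRCA', 'tp53_variant': 'R175H'}
```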

  17. Cruella: developing a scalable tissue microarray data management system.

    PubMed

    Cowan, James D; Rimm, David L; Tuck, David P

    2006-06-01

    Compared with DNA microarray technology, relatively little information is available concerning the special requirements, design influences, and implementation strategies of data systems for tissue microarray technology. These issues include the requirement to accommodate new and different data elements for each new project as well as the need to interact with pre-existing models for clinical, biological, and specimen-related data. Our objective was to design and implement a flexible, scalable tissue microarray data storage and management system that could accommodate information regarding different disease types, different clinical investigators, and different clinical investigation questions, all of which could potentially contribute unforeseen data types requiring dynamic integration with existing data. The unpredictability of the data elements, combined with the novelty of automated analysis algorithms and controlled vocabulary standards in this area, requires flexible designs and practical decisions. Our design includes a custom Java-based persistence layer to mediate and facilitate interaction with an object-relational database model and a novel database schema. User interaction is provided through a Java Servlet-based Web interface. Cruella has become an indispensable resource and is used by dozens of researchers every day. The system stores millions of experimental values covering more than 300 biological markers and more than 30 disease types. The experimental data are merged with clinical data that have been aggregated from multiple sources and are available to the researchers for management, analysis, and export. Cruella addresses many of the special considerations for managing tissue microarray experimental data and the associated clinical information. A metadata-driven approach provides a practical solution to many of the unique issues inherent in tissue microarray research, and allows relatively straightforward interoperability with and accommodation of new data models.

  18. Study on parallel and distributed management of RS data based on spatial database

    NASA Astrophysics Data System (ADS)

    Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin

    2009-10-01

    With the rapid development of current earth-observing technology, RS image data storage, management, and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment; a heavy burden is placed on the background server. Second, there is no unified, standard, and rational organization of multi-sensor RS data for storage and management, and much information is lost or omitted at storage time. Facing these two problems, this paper puts forward a framework for a parallel and distributed RS image data management and storage system. The system aims at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some instructive conclusions. It puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands, and periods is achieved. For data storage, RS data are not divided into binary large objects stored in a conventional relational database system; instead, the data are reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, the paper sets up a framework based on a parallel server of several commodity computers. Under this framework, the background process is divided into two parts: the common Web process and the parallel process.
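
    The "Pyramid, Block, Layer, Epoch" solid index can be pictured as a composite key that addresses one tile at one resolution, band, and period; the exact key layout is not specified in the abstract, so the encoding below is illustrative only.

```python
# A composite tile key in the spirit of the "Pyramid, Block, Layer, Epoch"
# solid index for multi-sensor RS imagery.
def tile_key(pyramid_level: int, block_row: int, block_col: int,
             layer: str, epoch: str) -> str:
    """One logical image record = one tile at one resolution, band, date."""
    return f"P{pyramid_level:02d}/B{block_row:05d}_{block_col:05d}/{layer}/{epoch}"

# The same scene addressed at two resolutions and two bands:
print(tile_key(3, 12, 847, "NIR", "2009-06"))
print(tile_key(0, 0, 0, "RGB", "2009-06"))  # coarsest pyramid level
```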

  19. Development of a Relational Database for Learning Management Systems

    ERIC Educational Resources Information Center

    Deperlioglu, Omer; Sarpkaya, Yilmaz; Ergun, Ertugrul

    2011-01-01

    In today's world, Web-Based Distance Education Systems have a great importance. Web-based Distance Education Systems are usually known as Learning Management Systems (LMS). In this article, a database design, which was developed to create an educational institution as a Learning Management System, is described. In this sense, developed Learning…

  20. A Parallel Relational Database Management System Approach to Relevance Feedback in Information Retrieval.

    ERIC Educational Resources Information Center

    Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David

    1999-01-01

    Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
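
    Database-driven retrieval of this kind is commonly realized as a join between a postings table and a query-terms table, which is expressible in plain SQL-92. The sketch below uses hypothetical table and column names and should not be read as the authors' exact statements.

        // Sketch of relational IR scoring in SQL-92: rank documents by the
        // inner product of posting weights and query-term weights.
        public class RelationalIrSketch {
            static final String SCORE_QUERY = """
                SELECT p.doc_id, SUM(p.tf * q.weight) AS score
                FROM   postings p, query_terms q      -- SQL-92 style join
                WHERE  p.term = q.term
                GROUP BY p.doc_id
                ORDER BY score DESC
                """;

            public static void main(String[] args) {
                // Relevance feedback then amounts to updating query_terms
                // weights from the top-ranked documents and re-running the
                // same SELECT.
                System.out.println(SCORE_QUERY);
            }
        }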

  1. The relational clinical database: a possible solution to the star wars in registry systems.

    PubMed

    Michels, D K; Zamieroski, M

    1990-12-01

    In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.

  2. Real-Time Payload Control and Monitoring on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Sun, Charles; Windrem, May; Givens, John J. (Technical Monitor)

    1998-01-01

    World Wide Web (W3) technologies such as the Hypertext Transfer Protocol (HTTP) and the Java object-oriented programming environment offer a powerful, yet relatively inexpensive, framework for distributed application software development. This paper describes the design of a real-time payload control and monitoring system that was developed with W3 technologies at NASA Ames Research Center. Based on the Java Development Kit (JDK) 1.1, the system uses an event-driven "publish and subscribe" approach to inter-process communication and graphical user-interface construction. A C Language Integrated Production System (CLIPS) compatible inference engine provides the back-end intelligent data processing capability, while the Oracle Relational Database Management System (RDBMS) provides the data management function. Preliminary evaluation shows acceptable performance for some classes of payloads, with Java's portability and multimedia support identified as the most significant benefits.
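
    A minimal sketch of the event-driven "publish and subscribe" pattern mentioned above follows. The topic names are invented, and the original system used JDK 1.1 event sources and listeners rather than this Consumer-based form.

        import java.util.*;
        import java.util.function.Consumer;

        // Sketch of publish/subscribe: GUI widgets and a rule engine can both
        // subscribe to the same telemetry topic without knowing the publisher.
        public class PayloadEventBus {
            private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

            public void subscribe(String topic, Consumer<String> handler) {
                subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
            }

            public void publish(String topic, String telemetry) {
                subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(telemetry));
            }

            public static void main(String[] args) {
                PayloadEventBus bus = new PayloadEventBus();
                bus.subscribe("payload/temperature", v -> System.out.println("display: " + v));
                bus.subscribe("payload/temperature", v -> System.out.println("rules:   " + v));
                bus.publish("payload/temperature", "23.4 C");
            }
        }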

  3. Using Virtual Servers to Teach the Implementation of Enterprise-Level DBMSs: A Teaching Note

    ERIC Educational Resources Information Center

    Wagner, William P.; Pant, Vik

    2010-01-01

    One of the areas where demand has remained strong for MIS students is in the area of database management. Since the early days, this topic has been a mainstay in the MIS curriculum. Students of database management today typically learn about relational databases, SQL, normalization, and how to design and implement various kinds of database…

  4. Design and implementation of a distributed large-scale spatial database system based on J2EE

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia

    2003-03-01

    With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement, owing to the contradictions between large-scale spatial data and limited network bandwidth, and between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are provided, comprising a GIS client application, web server, GIS application server and spatial data server. Moreover, the design and implementation of the GIS client application components based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS enterprise JavaBeans (containing session beans and entity beans) are explained. In addition, experiments on the relation between spatial data and response time under different conditions were conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database on the Internet is presented.
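
    The "GIS application server" role can be sketched as a coarse-grained session facade. The plain interface below stands in for an EJB remote interface, and all names are illustrative rather than taken from the paper.

        // Sketch of a session-facade: clients call one coarse-grained service,
        // which hides fine-grained spatial-entity access behind it.
        public class SessionFacadeSketch {
            interface MapTileService {          // would be an EJB remote interface
                byte[] fetchTile(String layer, int level, int row, int col);
            }

            static class SeamlessMapService implements MapTileService {
                public byte[] fetchTile(String layer, int level, int row, int col) {
                    // A real implementation would page the tile from the spatial
                    // data server; here we return a stub payload.
                    return ("tile:" + layer + "/" + level + "/" + row + "/" + col).getBytes();
                }
            }

            public static void main(String[] args) {
                MapTileService svc = new SeamlessMapService();
                System.out.println(new String(svc.fetchTile("roads", 4, 10, 22)));
            }
        }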

  5. Fuels planning: science synthesis and integration; economic uses fact sheet 08: prescribed fire costs

    Treesearch

    Rocky Mountain Research Station USDA Forest Service

    2004-01-01

    Although the use of prescribed fire as a management tool is widespread, there is great variability and uncertainty in the treatment costs. Given specific site variables and management objectives, how much will it cost to use prescribed fire? This paper describes the FASTRACS database, a tool that has been developed to aid managers in addressing this question.

  6. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    NASA Astrophysics Data System (ADS)

    Viegas, F.; Malon, D.; Cranshaw, J.; Dimitrov, G.; Nowak, M.; Nairz, A.; Goossens, L.; Gallas, E.; Gamboa, C.; Wong, A.; Vinek, E.

    2010-04-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6 TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production make this application a challenge to data and resource management in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; and controlling resource usage of the database, from the user query load to the strategy for cleaning and archiving old TAG data.

  7. SU-G-TeP4-06: An Integrated Application for Radiation Therapy Treatment Plan Directives, Management, and Reporting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matuszak, M; Anderson, C; Lee, C

    Purpose: With electronic medical records, patient information for the treatment planning process has become disseminated across multiple applications with limited quality control and many associated failure modes. We present the development of a single application with a centralized database to manage the planning process. Methods: The system was designed to replace current functionalities of (i) static directives representing the physician intent for the prescription and planning goals, localization information for delivery, and other information, (ii) planning objective reports, (iii) localization and image guidance documents and (iv) the official radiation therapy prescription in the medical record. Using the Eclipse Scripting Application Programming Interface, a plug-in script with an associated domain-specific SQL Server database was created to manage the information in (i)–(iv). The system's user interface and database were designed by a team of physicians, clinical physicists, database experts, and software engineers to ensure usability and robustness for clinical use. Results: The resulting system has been fully integrated within the TPS via a custom script and database. Planning scenario templates, version control, approvals, and logic-based quality control allow this system to fully track and document the planning process as well as physician approval of tradeoffs while improving the consistency of the data. Multiple plans and prescriptions are supported along with non-traditional dose objectives and evaluation such as biologically corrected models, composite dose limits, and management of localization goals. User-specific custom views were developed for the attending physician review, physicist plan checks, treating therapists, and peer review in chart rounds. Conclusion: A method was developed to maintain cohesive information throughout the planning process within one integrated system by using a custom treatment planning management application that interfaces directly with the TPS. Future work includes quantifying the improvements in quality, safety and efficiency that are possible with the routine clinical use of this system. Supported in part by NIH-P01-CA-059827.

  8. GIS based solid waste management information system for Nagpur, India.

    PubMed

    Vijay, Ritesh; Jain, Preeti; Sharma, N; Bhattacharyya, J K; Vaidya, A N; Sohony, R A

    2013-01-01

    Solid waste management is one of the major problems of today's world and needs to be addressed by proper utilization of technologies and the design of an effective, flexible and structured information system. Therefore, the objective of this paper was to design and develop a GIS based solid waste management information system as a decision making and planning tool for regulators and municipal authorities. The system integrates geo-spatial features of the city and a database of existing solid waste management. The GIS based information system provides modules for visualization, query interface, statistical analysis, report generation and database modification. It also provides modules for solid waste estimation, collection, transportation and disposal details. The information system is user-friendly, standalone and platform independent.

  9. Software Tools Streamline Project Management

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Three innovative software inventions from Ames Research Center (NETMARK, Program Management Tool, and Query-Based Document Management) are finding their way into NASA missions as well as industry applications. The first, NETMARK, is a program that enables integrated searching of data stored in a variety of databases and documents, meaning that users no longer have to look in several places for related information. NETMARK allows users to search and query information across all of these sources in one step. This cross-cutting capability in information analysis has reduced the time needed to mine data from days or weeks to mere seconds. NETMARK has been used widely throughout NASA, enabling this automatic integration of information across many documents and databases. NASA projects that use NETMARK include the internal reporting system and project performance dashboard, Erasmus, NASA's enterprise management tool, which enhances organizational collaboration and information sharing through document routing and review; the Integrated Financial Management Program; International Space Station Knowledge Management; the Mishap and Anomaly Information Reporting System; and management of the Mars Exploration Rovers. Approximately $1 billion worth of NASA's projects are currently managed using the Program Management Tool (PMT), which is based on NETMARK. PMT is a comprehensive, Web-enabled application tool used to assist program and project managers within NASA enterprises in monitoring, disseminating, and tracking the progress of program and project milestones and other relevant resources. PMT consists of an integrated knowledge repository built upon advanced enterprise-wide database integration techniques and the latest Web-enabled technologies. The current system is in a pilot operational mode, allowing users to manage, track, define, update, and view customizable milestone objectives and goals. The third software invention, Query-Based Document Management (QBDM), is a tool that enables content or context searches, either simple or hierarchical, across a variety of databases. The system enables users to specify notification subscriptions in which they associate "contexts of interest" and "events of interest" with one or more documents or collections of documents. Based on these subscriptions, users receive notification when the events of interest occur within the contexts of interest for the associated documents or collections. Users can also associate at least one notification time as part of the notification subscription, with at least one option for the time period of notifications.

  10. Development of the ageing management database of PUSPATI TRIGA reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramli, Nurhayati, E-mail: nurhayati@nm.gov.my; Tom, Phongsakorn Prak; Husain, Nurfazila

    Since its first criticality in 1982, the PUSPATI TRIGA Reactor (RTP) has been operated for more than 30 years. As RTP becomes older, ageing problems have emerged as prominent issues. In addressing the ageing issues, an Ageing Management (AgeM) database for managing related ageing matters was systematically developed. This paper presents the development of the AgeM database, taking into account all major RTP Systems, Structures and Components (SSCs) and the ageing mechanisms of these SSCs through the system surveillance program.

  11. Relational Data Bases--Are You Ready?

    ERIC Educational Resources Information Center

    Marshall, Dorothy M.

    1989-01-01

    Migrating from a traditional to a relational database technology requires more than traditional project management techniques. An overview of what to consider before migrating to relational database technology is presented. Leadership, staffing, vendor support, hardware, software, and application development are discussed. (MLW)

  12. Nonmaterialized Relations and the Support of Information Retrieval Applications by Relational Database Systems.

    ERIC Educational Resources Information Center

    Lynch, Clifford A.

    1991-01-01

    Describes several aspects of the problem of supporting information retrieval system query requirements in the relational database management system (RDBMS) environment and proposes an extension to query processing called nonmaterialized relations. User interactions with information retrieval systems are discussed, and nonmaterialized relations are…

  13. EMEN2: An Object Oriented Database and Electronic Lab Notebook

    PubMed Central

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J.

    2013-01-01

    Transmission electron microscopy and associated methods such as single particle analysis, 2-D crystallography, helical reconstruction and tomography, are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy-to-use object-oriented database with a highly flexible infrastructure originally targeted for transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments, and does not require professional database administration. It includes a full-featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over half a million experimental records and over 20 TB of experimental data. The software is freely available with complete source. PMID:23360752

  14. OWLing Clinical Data Repositories With the Ontology Web Language

    PubMed Central

    Pastor, Xavier; Lozano, Esther

    2014-01-01

    Background The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and rearrangements of the relationships among concepts. This scenario makes it especially difficult to offer good tools that answer the professional needs of researchers, and constitutes a barrier that requires innovation to discover useful solutions. Objective The objective was to design and implement a framework for the development of clinical data repositories, capable of facing continuous change in the biomedical domain and minimizing the technical knowledge required of final users. Methods We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient for dealing with complexity and change, integrated with solid relational storage and a Web graphical user interface. Results Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming. All the information required to define a new project is explicitly stated in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database, as well as instant online availability of any modification. Conclusions OntoCRF is a complete framework for building data repositories with solid relational storage. Driven by ontologies, OntoCRF is more flexible and efficient in dealing with complexity and change than traditional systems, and does not require highly skilled technical staff, which facilitates the engineering of clinical software systems. PMID:25599697

  15. The Politics of Information: Building a Relational Database To Support Decision-Making at a Public University.

    ERIC Educational Resources Information Center

    Friedman, Debra; Hoffman, Phillip

    2001-01-01

    Describes creation of a relational database at the University of Washington supporting ongoing academic planning at several levels and affecting the culture of decision making. Addresses getting started; sharing the database; questions, worries, and issues; improving access to high-demand courses; the advising function; management of instructional…

  16. Development of a standardized Intranet database of formulation records for nonsterile compounding, Part 2.

    PubMed

    Haile, Michael; Anderson, Kim; Evans, Alex; Crawford, Angela

    2012-01-01

    In part 1 of this series, we outlined the rationale behind the development of a centralized electronic database used to maintain nonsterile compounding formulation records in the Mission Health System, which is a union of several independent hospitals and satellite and regional pharmacies that form the cornerstone of advanced medical care in several areas of western North Carolina. Hospital providers in many healthcare systems require compounded formulations to meet the needs of their patients (in particular, pediatric patients). Before a centralized electronic compounding database was implemented in the Mission Health System, each satellite or regional pharmacy affiliated with that system had a specific set of formulation records, but no standardized format for those records existed. In this article, we describe the quality control, database platform selection, description, implementation, and execution of our intranet database system, which is designed to maintain, manage, and disseminate nonsterile compounding formulation records in the hospitals and affiliated pharmacies of the Mission Health System. The objectives of that project were to standardize nonsterile compounding formulation records, create a centralized computerized database that would increase healthcare staff members' access to formulation records, establish beyond-use dates based on published stability studies, improve quality control, reduce the potential for medication errors related to compounding medications, and (ultimately) improve patient safety.

  17. Repetitive Bibliographical Information in Relational Databases.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1988-01-01

    Proposes a solution to the problem of loading repetitive bibliographic information in a microcomputer-based relational database management system. The alternative design described is based on a representational redundancy design and normalization theory. (12 references) (Author/CLB)

  18. Innovative railroad information displays : executive summary

    DOT National Transportation Integrated Search

    1998-01-01

    The objectives of this study were to explore the potential of advanced digital technology, novel concepts of information management, geographic information databases and display capabilities in order to enhance planning and decision-making process...

  19. Efficacy of auriculotherapy for constipation in adults: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Yang, Li-Hua; Duan, Pei-Bei; Du, Shi-Zheng; Sun, Jin-Fang; Mei, Si-Juan; Wang, Xiao-Qing; Zhang, Yuan-Yuan

    2014-08-01

    To assess the clinical evidence of auriculotherapy for constipation treatment and to identify the efficacy of groups using Semen vaccariae or magnetic pellets as taped objects in managing constipation. Databases were searched, including five English-language databases (the Cochrane Library, PubMed, Embase, CINAHL, and AMED) and four Chinese medical databases. Only randomized controlled trials were included in the review process. Critical appraisal was conducted using the Cochrane risk of bias tool. Seventeen randomized controlled trials (RCTs) met the inclusion criteria, of which 2 had low risk of bias. The primary outcome measures were the improvement rate and total effective rate. A meta-analysis of 15 RCTs showed a moderate, significant effect of auriculotherapy in managing constipation compared with controls (relative risk [RR], 2.06; 95% confidence interval [CI], 1.52-2.79; p<0.00001). The 15 RCTs also showed a moderate, significant effect of auriculotherapy in relieving constipation (RR, 1.28; 95% CI, 1.13-1.44; p<0.0001). For other symptoms associated with constipation, such as abdominal distension or anorexia, results of the meta-analyses showed no statistical significance. Subgroup analysis revealed that use of S. vaccariae and use of magnetic pellets were both statistically favored over the control in relieving constipation. Current evidence illustrated that auriculotherapy, a relatively safe strategy, is probably beneficial in managing constipation. However, most of the eligible RCTs had a high risk of bias, and all were conducted in China. No definitive conclusion can be made because of cultural and geographic differences. Further rigorous RCTs from around the world are warranted to confirm the effect and safety of auriculotherapy for constipation.

  20. Financial capacity in dementia: a systematic review.

    PubMed

    Sudo, Felipe Kenji; Laks, Jerson

    2017-07-01

    Financial capacity (FC) refers to a set of cognitively mediated abilities related to one's competency to manage property and income. Distinguishing intact from impaired FC in older persons with dementia is a growing concern in geriatric practice, but the best methods to assess this function still need to be determined. This study aims to review data on FC in dementia and on instruments used to assess this domain of capacity. A database search was performed in Medline, ISI Web of Knowledge, LILACS and PsycINFO. Studies that objectively assessed FC in dementia of any etiology were included. Of a total of 125 articles, 10 were included. Mild Alzheimer's Disease (AD) was associated with impaired complex FC abilities, namely checkbook management, bank statement management and financial judgment, but simple FC skills were preserved. Moderate AD was associated with impairment in all domains of FC. The Financial Capacity Instrument (FCI) was applied in most of the selected studies and correlated with neuropsychological and neuroimaging variables. Early dementia is associated with partially preserved FC. More validation studies using objective and evidence-based FC assessment tools, such as the FCI, are still needed.

  1. Design of Student Information Management Database Application System for Office and Departmental Target Responsibility System

    NASA Astrophysics Data System (ADS)

    Zhou, Hui

    Carrying out an office and departmental target responsibility system is an inevitable outcome of higher education reform; statistical processing of student information is an important part of student performance review within such a system. On the basis of an analysis of student evaluation, this paper designs a student information management database application system using relational database management system software. To implement the student information management function, the functional requirements, overall structure, data sheets and fields, data sheet associations and software code are designed in detail.

  2. Information Network Model Query Processing

    NASA Astrophysics Data System (ADS)

    Song, Xiaopu

    The Information Networking Model (INM) [31] is a novel database model for the management of real-world objects and relationships. It naturally and directly supports various kinds of static and dynamic relationships between objects. In INM, objects are networked through various natural and complex relationships. The INM Query Language (INM-QL) [30] is designed to explore such information networks; to retrieve information about schemas, instances, their attributes, relationships, and context-dependent information; and to process query results in a user-specified form. The INM database management system has been implemented using Berkeley DB, and it supports INM-QL. This thesis focuses on the implementation of the subsystem that effectively and efficiently processes INM-QL. The subsystem provides a lexical and syntactic analyzer for INM-QL and is able to choose appropriate evaluation strategies and index mechanisms to process INM-QL queries without the user's intervention. It also uses intermediate result structures to hold intermediate query results, and other helping structures to reduce the complexity of query processing.

  3. An engineering database management system for spacecraft operations

    NASA Technical Reports Server (NTRS)

    Cipollone, Gregorio; Mckay, Michael H.; Paris, Joseph

    1993-01-01

    Studies at ESOC have demonstrated the feasibility of a flexible and powerful Engineering Database Management System (EDMS) in support of spacecraft operations documentation. The objectives set out were three-fold: first, an analysis of the problems encountered by the operations team in obtaining and managing operations documents; secondly, the definition of a concept for operations documentation and the implementation of a prototype to prove the feasibility of the concept; and thirdly, the definition of standards and protocols required for the exchange of data between the top-level partners in a satellite project. The EDMS prototype was populated with ERS-1 satellite design data and has been used by the operations team at ESOC to gather operational experience. An operational EDMS would be implemented at the satellite prime contractor's site as a common database for all technical information surrounding a project and would be accessible by the co-contractors' and ESA teams.

  4. The BioImage Database Project: organizing multidimensional biological images in an object-relational database.

    PubMed

    Carazo, J M; Stelzer, E H

    1999-01-01

    The BioImage Database Project collects and structures multidimensional data sets recorded by various microscopic techniques relevant to modern life sciences. It provides, as precisely as possible, the circumstances in which the sample was prepared and the data were recorded. It grants access to the actual data and maintains links between related data sets. In order to promote the interdisciplinary approach of modern science, it offers a large set of key words, which covers essentially all aspects of microscopy. Nonspecialists can, therefore, access and retrieve significant information recorded and submitted by specialists in other areas. A key issue of the undertaking is to exploit the available technology and to provide a well-defined yet flexible structure for dealing with data. Its pivotal element is, therefore, a modern object relational database that structures the metadata and ameliorates the provision of a complete service. The BioImage database can be accessed through the Internet. Copyright 1999 Academic Press.

  5. BanTeC: a software tool for management of corneal transplantation.

    PubMed

    López-Alvarez, P; Caballero, F; Trias, J; Cortés, U; López-Navidad, A

    2005-11-01

    Until recently, all cornea information at our tissue bank was managed manually; no specific database or computer tool had been implemented to provide electronic versions of documents and medical reports. The main objective of the BanTeC project was therefore to create a computerized system to integrate and classify all the information and documents used in the center, in order to facilitate the management of retrieved and transplanted corneal tissues. We used the Windows platform to develop the project. Microsoft Access and Microsoft Jet Engine were used at the database level, and Data Access Objects was the chosen data access technology. In short, the BanTeC software seeks to computerize the tissue bank. All the initial stages of development have now been completed, from specification of needs, program design and implementation of the software components, to the total integration of the final result in the real production environment. BanTeC will allow the generation of statistical reports for analysis to improve our performance.

  6. Innovative railroad information displays : video guide

    DOT National Transportation Integrated Search

    1998-01-01

    The objectives of this study were to explore the potential of advanced digital technology, novel concepts of information management, geographic information databases and display capabilities in order to enhance planning and decision-making proces...

  7. XML Technology Assessment

    DTIC Science & Technology

    2001-01-01

    Global Command and Control System (GCCS) Track Database Management System (TDBM); GCCS Integrated Imagery and Intelligence; Intelligence Shared Data Server (ISDS)... The CTH is a powerful model that will allow more than just message systems to exchange information. It could be used for object-oriented databases, as... of the Naval Integrated Tactical Environmental System I (NITES I) is used as a case study to demonstrate the utility of this distributed component...

  8. HLLV avionics requirements study and electronic filing system database development

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This final report provides a summary of achievements and activities performed under Contract NAS8-39215. The contract's objective was to explore a new way of delivering, storing, accessing, and archiving study products and information and to define top level system requirements for Heavy Lift Launch Vehicle (HLLV) avionics that incorporate Vehicle Health Management (VHM). This report includes technical objectives, methods, assumptions, recommendations, sample data, and issues as specified by DPD No. 772, DR-3. The report is organized into two major subsections, one specific to each of the two tasks defined in the Statement of Work: the Index Database Task and the HLLV Avionics Requirements Task. The Index Database Task resulted in the selection and modification of a commercial database software tool to contain the data developed during the HLLV Avionics Requirements Task. All summary information is addressed within each task's section.

  9. From Cultural Path to Cultural Route: a Value-Led Risk Management Method for Via Iulia Augusta in Albenga (Italy)

    NASA Astrophysics Data System (ADS)

    Van Meerbeek, L.; Barazzetti, L.; Valente, R.

    2017-08-01

    Today, the field of cultural heritage faces many challenges: cultural heritage is always at risk, the large amount of heritage information is often fragmented, climate change impacts cultural heritage and heritage recording can be time-consuming and often results in low accuracy. Four objectives, related to the challenges, were defined during this research work. It proposes a relevant value-led risk management method for cultural heritage, it identifies climate change impact on cultural heritage, it suggests a database lay-out for cultural heritage and demonstrates the potential of remote sensing tools for cultural heritage. The Via Iulia Augusta, a former Roman road in Albenga, was used as case study.

  10. Use of a Relational Database to Support Clinical Research: Application in a Diabetes Program

    PubMed Central

    Lomatch, Diane; Truax, Terry; Savage, Peter

    1981-01-01

    A database has been established to support the conduct of clinical research and monitor delivery of medical care for 1200 diabetic patients as part of the Michigan Diabetes Research and Training Center (MDRTC). Use of an intelligent microcomputer to enter and retrieve the data, and use of a relational database management system (DBMS) to store and manage data, have provided a flexible, efficient method of both supporting small projects and monitoring the overall activity of the Diabetes Center Unit (DCU). Simplicity of access to data, efficiency in providing data for unanticipated requests, ease of manipulation of relations, security and "logical data independence" were important factors in choosing a relational DBMS. The ability to interface with an interactive statistical program and a graphics program is a major advantage of this system. Our database currently provides support for the operation and analysis of several ongoing research projects.

  11. The influence of the Bible geographic objects peculiarities on the concept of the spatiotemporal geoinformation system

    NASA Astrophysics Data System (ADS)

    Linsebarth, A.; Moscicka, A.

    2010-01-01

    The article describes the influence of the peculiarities of Bible geographic objects on a spatiotemporal geoinformation system of Bible events. In the proposed concept of this system, special attention is given to Bible geographic objects and the interrelations between the names of these objects and their locations in geospace. In the Bible, both in the Old and New Testament, there are hundreds of geographical names, but selecting these names from the Bible text is not easy: the same names are applied to persons and to geographic objects. The next problem is the classification of a geographical object, because in several cases the same name is used for towns, mountains, hills, valleys, etc. Another serious problem relates to changes of names over time. The interrelation between an object's name and its location is also complicated: a geographic object with the same name may be located in various places, which must be properly correlated with the Bible text. The above-mentioned peculiarities of Bible geographic objects influenced the concept of the proposed system, which consists of three databases: reference, geographic object, and subject/thematic. The crucial component of this system is the proper architecture of the geographic object database, of which the paper gives a very detailed description. The interrelation between the databases allows Bible readers to connect the Bible text with the geography of the terrain on which the Bible events occurred and, additionally, to access other geographical and historical information related to the geographic objects.

  12. Reef Ecosystem Services and Decision Support Database

    EPA Science Inventory

    This scientific and management information database utilizes systems thinking to describe the linkages between decisions, human activities, and provisioning of reef ecosystem goods and services. This database provides: (1) Hierarchy of related topics - Click on topics to navigat...

  13. The Use of a Relational Database in Qualitative Research on Educational Computing.

    ERIC Educational Resources Information Center

    Winer, Laura R.; Carriere, Mario

    1990-01-01

    Discusses the use of a relational database as a data management and analysis tool for nonexperimental qualitative research, and describes the use of the Reflex Plus database in the Vitrine 2001 project in Quebec to study computer-based learning environments. Information systems are also discussed, and the use of a conceptual model is explained.…

  14. Enhanced DIII-D Data Management Through a Relational Database

    NASA Astrophysics Data System (ADS)

    Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.

    2000-10-01

    A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities for DIII-D discharges are collected and stored in the database automatically. Metadata about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. The database may be accessed through programming languages such as C, Java, and IDL, or through ODBC-compliant applications such as Excel and Access. A database-driven web page also provides a convenient means of viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.

  15. Database on Demand: insight how to build your own DBaaS

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications run on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that have traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; at present, three major RDBMS (relational database management system) vendors are offered. In this article we show the actual status of the service after almost three years of operations, some insight into our redesigned software engineering, and the near-future evolution.

  16. FJET Database Project: Extract, Transform, and Load

    NASA Technical Reports Server (NTRS)

    Samms, Kevin O.

    2015-01-01

    The Data Mining & Knowledge Management team at Kennedy Space Center is providing data management services to the Frangible Joint Empirical Test (FJET) project at Langley Research Center (LARC). FJET is a project under the NASA Engineering and Safety Center (NESC). The purpose of FJET is to conduct an assessment of mild detonating fuse (MDF) frangible joints (FJs) for human spacecraft separation tasks in support of the NASA Commercial Crew Program. The Data Mining & Knowledge Management team has been tasked with creating and managing a database for the efficient storage and retrieval of FJET test data. This paper details the Extract, Transform, and Load (ETL) process as it relates to gathering FJET test data into a Microsoft SQL Server relational database and making that data available to the data users. Lessons learned, procedures implemented, and programming code samples are discussed to help detail the learning experienced as the Data Mining & Knowledge Management team adapted to changing requirements and new technology while maintaining flexibility of design in various aspects of the data management project.
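
    A minimal sketch of such an extract-transform-load pass is shown below. The file name, column layout, and table name are hypothetical; a real pipeline would execute the generated inserts as a parameterized JDBC batch against SQL Server rather than printing them.

        import java.nio.file.*;
        import java.util.*;

        // Sketch of ETL: read raw test records, normalize them, and stage
        // rows for bulk insert into a relational table.
        public class EtlSketch {
            record TestRow(String jointId, double peakPressure) {}

            public static void main(String[] args) throws Exception {
                // Extract: one CSV line per measurement, e.g. "FJ-07, 1532.8".
                List<String> raw = Files.readAllLines(Path.of("fjet_raw.csv"));

                // Transform: parse, trim, and validate each record.
                List<TestRow> rows = raw.stream()
                        .map(l -> l.split(","))
                        .filter(f -> f.length == 2)
                        .map(f -> new TestRow(f[0].trim(), Double.parseDouble(f[1].trim())))
                        .toList();

                // Load: emit insert statements for the staging table.
                rows.forEach(r -> System.out.printf(
                        "INSERT INTO fjet_results (joint_id, peak_pressure) VALUES ('%s', %f)%n",
                        r.jointId(), r.peakPressure()));
            }
        }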

  17. Database documentation of marine mammal stranding and mortality: current status review and future prospects.

    PubMed

    Chan, Derek K P; Tsui, Henry C L; Kot, Brian C W

    2017-11-21

    Databases are systematic tools to archive and manage information related to marine mammal stranding and mortality events. Stranding response networks, governmental authorities and non-governmental organizations have established regional or national stranding networks and have developed unique standard stranding response and necropsy protocols to document and track stranded marine mammal demographics, signalment and health data. The objectives of this study were to (1) describe and review the current status of marine mammal stranding and mortality databases worldwide, including the year established, types of database and their goals; and (2) summarize the geographic range included in the database, the number of cases recorded, accessibility, filter and display methods. Peer-reviewed literature was searched, focussing on published databases of live and dead marine mammal strandings and mortality and information released from stranding response organizations (i.e. online updates, journal articles and annual stranding reports). Databases that were not published in the primary literature or recognized by government agencies were excluded. Based on these criteria, 10 marine mammal stranding and mortality databases were identified, and strandings and necropsy data found in these databases were evaluated. We discuss the results, limitations and future prospects of database development. Future prospects include the development and application of virtopsy, a new necropsy investigation tool. A centralized web-accessed database of all available postmortem multimedia from stranded marine mammals may eventually support marine conservation and policy decisions, which will allow the use of marine animals as sentinels of ecosystem health, working towards a 'One Ocean-One Health' ideal.

  18. Information for Institutional Renewal.

    ERIC Educational Resources Information Center

    Spencer, Richard L.

    1979-01-01

    Discusses a planning, management, and evaluation system, an objective-based planning process, research databases, analytical reports, and transactional data as state-of-the-art tools available to generate data which link research directly to planning for institutional renewal. (RC)

  19. An Improved Database System for Program Assessment

    ERIC Educational Resources Information Center

    Haga, Wayne; Morris, Gerard; Morrell, Joseph S.

    2011-01-01

    This research paper presents a database management system for tracking course assessment data and reporting related outcomes for program assessment. It improves on a database system previously presented by the authors and in use for two years. The database system presented is specific to assessment for ABET (Accreditation Board for Engineering and…

  20. FIREMON Database

    Treesearch

    John F. Caratti

    2006-01-01

    The FIREMON database software allows users to enter, store, analyze, and summarize plot data, photos, and related documents. The FIREMON database software consists of a Java application and a Microsoft® Access database. The Java application provides the user interface to FIREMON data through data entry forms, data summary reports, and other data management tools...

  1. A Summary of Pavement and Material-Related Databases within the Texas Department of Transportation

    DOT National Transportation Integrated Search

    1999-09-01

    This report summarizes important content and operational details about five different materials and pavement databases currently used by the Texas Department of Transportation (TxDOT). These databases include the Pavement Management Information Syste...

  2. A spatial-temporal system for dynamic cadastral management.

    PubMed

    Nan, Liu; Renyi, Liu; Guangliang, Zhu; Jiong, Xie

    2006-03-01

    A practical spatio-temporal database (STDB) technique for dynamic urban land management is presented. One of the STDB models, the expanded Base State with Amendments (BSA) model, is selected as the basis for developing the dynamic cadastral management technique. Two approaches, Section Fast Indexing (SFI) and Storage Factors of Variable Granularity (SFVG), are used to improve the efficiency of the BSA model. Both spatial graphic data and attribute data are stored, through a succinct engine, in a standard relational database management system (RDBMS) for the actual implementation of the BSA model. The spatio-temporal database is divided into three interdependent sub-databases: the present DB, the history DB and the procedures-tracing DB. The efficiency of database operation is improved by the database connection in the bottom layer of Microsoft SQL Server. The spatio-temporal system can be provided at low cost while satisfying the basic needs of urban land management in China. The approaches presented in this paper may also be of significance to countries where land patterns change frequently or to agencies where financial resources are limited.
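
    The BSA model reconstructs the state of a cadastral object at time t by replaying amendments recorded after the base state. The sketch below illustrates that replay under invented field names; the paper's SFI and SFVG techniques exist precisely to speed up this step on large histories.

        import java.util.*;

        // Sketch of Base State with Amendments: state(t) = base state plus all
        // amendments with timestamp <= t, applied in time order.
        public class BsaSketch {
            record Amendment(int time, String field, String newValue) {}

            public static void main(String[] args) {
                Map<String, String> baseState = new HashMap<>(
                        Map.of("owner", "Li", "landUse", "farmland"));   // state at t = 0
                List<Amendment> amendments = List.of(
                        new Amendment(3, "landUse", "residential"),
                        new Amendment(7, "owner", "Wang"));

                int queryTime = 5;
                Map<String, String> state = new HashMap<>(baseState);
                amendments.stream()
                        .filter(a -> a.time() <= queryTime)
                        .sorted(Comparator.comparingInt(Amendment::time))
                        .forEach(a -> state.put(a.field(), a.newValue()));

                // The t=3 land-use change is applied; the t=7 ownership change is not.
                System.out.println("state at t=" + queryTime + ": " + state);
            }
        }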

  3. Spatial Data Integration Using Ontology-Based Approach

    NASA Astrophysics Data System (ADS)

    Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.

    2015-12-01

    In today's world, the need for spatial data has become so crucial that many organizations have begun to produce spatial data themselves. In some circumstances, obtaining integrated data in real time requires a sustainable mechanism for real-time integration; a case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the problematic challenges in this situation is the high degree of heterogeneity between different organizations' data. To solve this issue, we introduce an ontology-based method to provide sharing and integration capabilities for existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps. The first step is the identification of the objects in a relational database; the semantic relationships between them are then modelled and, subsequently, the ontology of each database is created. In the second step, the relative ontology is inserted into the database, and the relationships of each ontology class are inserted into newly created columns in the database tables. The last step consists of a platform based on service-oriented architecture, which allows integration of data using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy; the data remain unchanged, so existing legacy applications can still be used.
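
    The first step, re-expressing relational rows as ontology statements, can be sketched by emitting subject-predicate-object triples from a row, with foreign keys becoming object links. The namespace and table below are invented for illustration and are not the authors' actual mapping.

        import java.util.*;

        // Sketch of mapping a relational row to triples: columns become
        // predicates; foreign-key values become links to other objects.
        public class RowToTripleSketch {
            static final String NS = "http://example.org/gis#";

            public static void main(String[] args) {
                // A relational row: roads(id=42, name='Main St', crosses=river 7).
                Map<String, String> row = Map.of("name", "Main St", "crosses", "river/7");

                String subject = "<" + NS + "road/42>";
                row.forEach((col, val) -> {
                    String object = val.contains("/") ? "<" + NS + val + ">"   // object link
                                                      : "\"" + val + "\"";     // literal
                    System.out.println(subject + " <" + NS + col + "> " + object + " .");
                });
            }
        }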

  4. CruiseViewer: SIOExplorer Graphical Interface to Metadata and Archives.

    NASA Astrophysics Data System (ADS)

    Sutton, D. W.; Helly, J. J.; Miller, S. P.; Chase, A.; Clark, D.

    2002-12-01

    We are introducing "CruiseViewer" as a prototype graphical interface for the SIOExplorer digital library project, part of the overall NSF National Science Digital Library (NSDL) effort. When complete, CruiseViewer will provide access to nearly 800 cruises, as well as 100 years of documents and images from the archives of the Scripps Institution of Oceanography (SIO). The project emphasizes data object accessibility, a rich metadata format, efficient uploading methods and interoperability with other digital libraries. The primary function of CruiseViewer is to provide a human interface to the metadata database and to storage systems filled with archival data. The system schema is based on the concept of an "arbitrary digital object" (ADO), arbitrary in the sense that if the object can be stored on a computer system then SIOExplorer can manage it. Common examples are a multibeam swath bathymetry file, a .pdf cruise report, or a tar file containing all the processing scripts used on a cruise. We require a metadata file for every ADO in an ASCII "metadata interchange format" (MIF), which has proven to be highly useful for interoperability and extensibility. Bulk ADO storage is managed using the Storage Resource Broker (SRB), data handling middleware developed at the San Diego Supercomputer Center that centralizes management of, and access to, distributed storage devices. MIF metadata are harvested from several sources and housed in a relational (Oracle) database. For CruiseViewer, cgi scripts resident on an Apache server are the primary communication and service request handling tools. Along with the CruiseViewer Java application, users can query, access and download objects via a separate method that operates through standard web browsers, http://sioexplorer.ucsd.edu. Both provide the functionality to query and view object metadata, and to select and download ADOs. For the CruiseViewer application, Java 2D is used to add a geo-referencing feature that allows users to select basemap images and have vector shapes representing query results mapped over the basemap in the image panel. The two methods together address a wide range of user access needs and will allow for widespread use of SIOExplorer.

  5. Application research on temporal GIS in the transportation information management system

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Qin, Qianqing; Wang, Chao

    2006-10-01

    The application, development and key issues of applying spatio-temporal GIS to traffic information management systems are discussed in this paper, by introducing the development of spatio-temporal databases, current models of spatio-temporal data, and the traits of traffic information management systems. The paper proposes a method of organizing spatio-temporal data that takes road object changes into consideration, and describes its data structure in three aspects: the structure of spatio-temporal objects, the method of organizing spatio-temporal data, and the means of storing spatio-temporal data. The aim is to manage the types of spatio-temporal data involved in a traffic system (road information, river information, railway information, social and economic data, etc.) uniformly, efficiently and with low redundancy.

  6. Prototyping Visual Database Interface by Object-Oriented Language

    DTIC Science & Technology

    1988-06-01

    approach is to use object-oriented programming. Object-oriented languages are characterized by three criteria [Ref. 4:p. 1.2.1]: encapsulation of... made it a sub-class of our DMWindow.Cls, which is discussed later in this chapter. This extension to the application had to be integrated with our... abnormal behaviors similar to Korth's discussion of pitfalls in relational database design. Even extensions like GEM [Ref. 8] that are powerful and...

  7. COMET Multimedia modules and objects in the digital library system

    NASA Astrophysics Data System (ADS)

    Spangler, T. C.; Lamos, J. P.

    2003-12-01

    Over the past ten years of developing Web- and CD-ROM-based training materials, the Cooperative Program for Operational Meteorology, Education and Training (COMET) has created a unique archive of almost 10,000 multimedia objects and some 50 Web-based interactive multimedia modules on various aspects of weather and weather forecasting. These objects and modules, containing illustrations, photographs, animations, video sequences, and audio files, are potentially a valuable resource for university faculty and students, forecasters, emergency managers, public school educators, and other individuals and groups needing such materials for educational use. The COMET modules are available on the COMET educational web site http://www.meted.ucar.edu, and the COMET Multimedia Database (MMDB) makes a collection of the multimedia objects available in a searchable online database for viewing and download over the Internet. Some 3200 objects are already available at the MMDB Website: http://archive.comet.ucar.edu/moria/

  8. Reuseable Objects Software Environment (ROSE): Introduction to Air Force Software Reuse Workshop

    NASA Technical Reports Server (NTRS)

    Cottrell, William L.

    1994-01-01

    The Reusable Objects Software Environment (ROSE) is a common, consistent, consolidated implementation of software functionality using modern object oriented software engineering including designed-in reuse and adaptable requirements. ROSE is designed to minimize abstraction and reduce complexity. A planning model for the reverse engineering of selected objects through object oriented analysis is depicted. Dynamic and functional modeling are used to develop a system design, the object design, the language, and a database management system. The return on investment for a ROSE pilot program and timelines are charted.

  9. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    NASA Astrophysics Data System (ADS)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for the management and processing of such datasets, using binary large objects (BLOBs) in database systems versus files in the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as for bandwidth and response-time performance. This requires partitioning larger files into sets of smaller files, with the concomitant requirement of managing large numbers of files. Storing these sub-files as BLOBs in a shared-nothing database implemented across a cluster has the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these BLOBs and would run in parallel across the set of nodes in the cluster. On the other hand, such an implementation carries storage overheads and constraints, and creates software licensing dependencies. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS provides the file management capability, while the subsetting and processing routines are implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin, hash, or range partitioning methods. Each has different characteristics in terms of the spatial locality of the data and the resulting degree of declustering of computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection or dataset, creating "hotspots" in the data. We will evaluate the ability of the different approaches to deal effectively with such hotspots, and alternative strategies for mitigating them.
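
    The three partitioning strategies compared above can be contrasted in a few lines. The sketch below uses invented chunk keys and node counts, and only illustrates why range partitioning preserves spatial locality (and hence hotspots) while round-robin and hash partitioning spread load.

        import java.util.*;

        // Sketch of chunk-to-node assignment under the three strategies.
        public class PartitionSketch {
            static int roundRobin(int chunkSeq, int nodes) { return chunkSeq % nodes; }

            static int hash(String key, int nodes) {
                return Math.floorMod(key.hashCode(), nodes);   // no spatial locality
            }

            static int range(double coordinate, double min, double max, int nodes) {
                // Preserves spatial locality, so popular regions become hotspots.
                int n = (int) ((coordinate - min) / (max - min) * nodes);
                return Math.min(n, nodes - 1);
            }

            public static void main(String[] args) {
                int nodes = 4;
                for (int seq = 0; seq < 6; seq++)
                    System.out.println("chunk " + seq + " -> node " + roundRobin(seq, nodes));
                System.out.println("tile at lat 34.1 -> node " + range(34.1, -90, 90, nodes));
                System.out.println("tile 'srtm_34_07' -> node " + hash("srtm_34_07", nodes));
            }
        }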

  10. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Dezaki, Kyoko; Saeki, Makoto

    Rapid progress in information technology has increased the need to strengthen documentation activities in industry. In response, Tokin Corporation has been engaged in constructing databases for patent information, technical reports, and other documents accumulated inside the company. Two results have been obtained: one is TOPICS, an in-house patent information management system; the other is TOMATIS, a management and technical information system built with personal computers and general-purpose relational database software. These systems aim at compiling databases of patent and technological management information generated internally and externally, at low labor effort and low cost, and at providing comprehensive information company-wide. This paper introduces the outline of these systems and how they are actually used.

  11. The ATLAS conditions database architecture for the Muon spectrometer

    NASA Astrophysics Data System (ADS)

    Verducci, Monica; ATLAS Muon Collaboration

    2010-04-01

    The Muon System, facing challenging requirements for conditions data storage, has started to use extensively the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flags needed for debugging detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database: objects stored or referenced in COOL have an associated start and end time between which they are valid. The data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve the object(s) associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with emphasis on the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.
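
    The interval-of-validity idea can be illustrated with a minimal Python sketch (an assumption of ours, not the actual COOL API; the folder content and timestamps are hypothetical): each stored payload carries a [start, end) validity range, and retrieval returns the payload valid at a requested time.

        import bisect

        class IOVFolder:
            """Stores payloads with [start, end) validity intervals."""

            def __init__(self):
                self._starts = []   # sorted validity start times
                self._entries = []  # (start, end, payload), parallel list

            def store(self, start, end, payload):
                i = bisect.bisect(self._starts, start)
                self._starts.insert(i, start)
                self._entries.insert(i, (start, end, payload))

            def retrieve(self, t):
                # Find the last interval starting at or before t.
                i = bisect.bisect(self._starts, t) - 1
                if i >= 0:
                    start, end, payload = self._entries[i]
                    if start <= t < end:
                        return payload
                return None  # no valid entry at time t

        alignments = IOVFolder()
        alignments.store(100, 200, {"chamber_offset_mm": 0.12})
        alignments.store(200, 300, {"chamber_offset_mm": 0.15})
        print(alignments.retrieve(250))  # {'chamber_offset_mm': 0.15}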

  12. New perspectives in toxicological information management, and the role of ISSTOX databases in assessing chemical mutagenicity and carcinogenicity.

    PubMed

    Benigni, Romualdo; Battistelli, Chiara Laura; Bossa, Cecilia; Tcheremenskaia, Olga; Crettaz, Pierre

    2013-07-01

    Currently, the public has access to a variety of databases containing mutagenicity and carcinogenicity data. These resources are crucial for the toxicologists and regulators involved in the risk assessment of chemicals, which necessitates access to all the relevant literature and the capability to search across toxicity databases using both biological and chemical criteria. Towards the larger goal of screening chemicals for a wide range of toxicity end points of potential interest, publicly available resources across a large spectrum of biological and chemical data space must be effectively harnessed with current and evolving information technologies (i.e., systematised, integrated and mined) if long-term screening and prediction objectives are to be achieved. A key to rapid progress in the field of chemical toxicity databases is combining information technology with the chemical structure as the identifier of the molecules. This permits an enormous range of operations (e.g., retrieving chemicals or chemical classes, describing the content of databases, finding similar chemicals, crossing biological and chemical interrogations, etc.) that more classical databases cannot support. This article describes progress in the technology of toxicity databases, including the concepts of the Chemical Relational Database and Toxicological Standardized Controlled Vocabularies (Ontology). It then describes the ISSTOX cluster of toxicological databases at the Istituto Superiore di Sanità. It consists of freely available databases characterised by the use of modern information technologies and by curation of the quality of the biological data. Finally, this article provides examples of analyses and results made possible by ISSTOX.

  13. The Novel Object and Unusual Name (NOUN) Database: A collection of novel images for use in experimental research.

    PubMed

    Horst, Jessica S; Hout, Michael C

    2016-12-01

    Many experimental research designs require images of novel objects. Here we introduce the Novel Object and Unusual Name (NOUN) Database. This database contains 64 primary novel object images and additional novel exemplars for ten basic- and nine global-level object categories. The objects' novelty was confirmed by both self-report and a lack of consensus on questions that required participants to name and identify the objects. We also found that object novelty correlated with qualifying naming responses pertaining to the objects' colors. The results from a similarity sorting task (and a subsequent multidimensional scaling analysis of the similarity ratings) demonstrated that the objects are complex and distinct entities that vary along several featural dimensions beyond simply shape and color. A final experiment confirmed that the additional item exemplars comprise both sub- and superordinate categories. These images may be useful in a variety of settings, particularly for developmental psychology and other research on language, categorization, perception, visual memory, and related domains.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buche, D. L.; Perry, S.

    This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects.

  15. Exploring the feasibility of traditional image querying tasks for industrial radiographs

    NASA Astrophysics Data System (ADS)

    Bray, Iliana E.; Tsai, Stephany J.; Jimenez, Edward S.

    2015-08-01

    Although there have been great strides in object recognition with optical images (photographs), there has been comparatively little research into object recognition for X-ray radiographs. Our exploratory work contributes to this area by creating an object recognition system designed to recognize components from a related database of radiographs. Object recognition for radiographs must be approached differently than for optical images, because radiographs have much less color-based information to distinguish objects, and they exhibit transmission overlap that alters perceived object shapes. The dataset used in this work contained more than 55,000 intermixed radiographs and photographs, all in a compressed JPEG form and with multiple ways of describing pixel information. For this work, a robust and efficient system is needed to combat problems presented by properties of the X-ray imaging modality, the large size of the given database, and the quality of the images contained in said database. We have explored various pre-processing techniques to clean the cluttered and low-quality images in the database, and we have developed our object recognition system by combining multiple object detection and feature extraction methods. We present the preliminary results of the still-evolving hybrid object recognition system.

  16. A Comparison of Global Indexing Schemes to Facilitate Earth Science Data Management

    NASA Astrophysics Data System (ADS)

    Griessbaum, N.; Frew, J.; Rilee, M. L.; Kuo, K. S.

    2017-12-01

    Recent advances in database technology have led to systems optimized for managing petabyte-scale multidimensional arrays. These array databases are a good fit for subsets of the Earth's surface that can be projected into a rectangular coordinate system with acceptable geometric fidelity. However, for global analyses, array databases must address the same distortions and discontinuities that apply to map projections in general. The array database SciDB supports enormous databases spread across thousands of computing nodes. Additionally, the following SciDB characteristics are particularly germane to the coordinate system problem: SciDB efficiently stores and manipulates sparse (i.e., mostly empty) arrays; SciDB arrays have 64-bit indexes; and SciDB supports user-defined data types, functions, and operators. We have implemented two geospatial indexing schemes in SciDB. The simplest uses two array dimensions to represent longitude and latitude. For representation as 64-bit integers, the coordinates are multiplied by a scale factor large enough to yield an appropriate Earth-surface resolution (e.g., a scale factor of 100,000 yields a resolution of approximately 1 m at the equator). Aside from the longitudinal discontinuity, the principal disadvantage of this scheme is its fixed scale factor. The second scheme uses a single array dimension to represent the bit-codes for locations in a hierarchical triangular mesh (HTM) coordinate system. An HTM maps the Earth's surface onto an octahedron, and then recursively subdivides each triangular face to the desired resolution. Earth-surface locations are represented as the concatenation of an octahedron face code and a quadtree code within the face. Unlike our integerized lat-lon scheme, the HTM allows objects of different sizes (e.g., pixels with differing resolutions) to be represented in the same indexing scheme. We present an evaluation of the relative utility of these two schemes for managing and analyzing MODIS swath data.
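
    The integerized latitude/longitude scheme can be sketched in a few lines of Python (our illustration, not SciDB code; the scale factor follows the example in the abstract):

        SCALE = 100_000  # ~1 m at the equator, per the example above

        def to_index(lat_deg: float, lon_deg: float) -> tuple[int, int]:
            # Map degrees to 64-bit-safe integer array indexes.
            return round(lat_deg * SCALE), round(lon_deg * SCALE)

        def from_index(i_lat: int, i_lon: int) -> tuple[float, float]:
            # Recover approximate coordinates; precision is fixed by SCALE,
            # which is the scheme's principal limitation.
            return i_lat / SCALE, i_lon / SCALE

        i_lat, i_lon = to_index(34.052235, -118.243683)
        print(i_lat, i_lon)              # 3405224 -11824368
        print(from_index(i_lat, i_lon))  # (34.05224, -118.24368)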

  17. Network Configuration of Oracle and Database Programming Using SQL

    NASA Technical Reports Server (NTRS)

    Davis, Melton; Abdurrashid, Jibril; Diaz, Philip; Harris, W. C.

    2000-01-01

    A database can be defined as a collection of information organized in such a way that it can be retrieved and used. A database management system (DBMS) can further be defined as the tool that enables us to manage and interact with the database. The Oracle 8 Server is a state-of-the-art information management environment. It is a repository for very large amounts of data, and gives users rapid access to that data. The Oracle 8 Server allows for sharing of data between applications; the information is stored in one place and used by many systems. My research will focus primarily on SQL (Structured Query Language) programming. SQL is the way you define and manipulate data in Oracle's relational database. SQL is the industry standard adopted by all database vendors. When programming with SQL, you work on sets of data (i.e., information is not processed one record at a time).
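
    The set-based character of SQL can be illustrated with a small, hypothetical sketch; SQLite stands in for the Oracle 8 Server purely so the example is self-contained. One UPDATE statement operates on every qualifying row at once, with no record-by-record loop:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
        conn.executemany(
            "INSERT INTO employees VALUES (?, ?, ?)",
            [("Ada", "ENG", 50000), ("Grace", "ENG", 52000), ("Edgar", "OPS", 48000)],
        )

        # Set-based: every ENG salary is raised by one statement, with no
        # explicit loop over records.
        conn.execute("UPDATE employees SET salary = salary * 1.05 WHERE dept = 'ENG'")

        for row in conn.execute("SELECT name, salary FROM employees ORDER BY name"):
            print(row)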

  18. Schema Versioning for Multitemporal Relational Databases.

    ERIC Educational Resources Information Center

    De Castro, Cristina; Grandi, Fabio; Scalas, Maria Rita

    1997-01-01

    Investigates new design options for extended schema versioning support for multitemporal relational databases. Discusses the improved functionalities they may provide. Outlines options and basic motivations for the new design solutions, as well as techniques for the management of the proposed schema versioning solutions; includes algorithms and…

  19. Scale-Independent Relational Query Processing

    DTIC Science & Technology

    2013-10-04

    source options are also available, including PostgreSQL, MySQL, and SQLite. These modern relational databases are generally very complex software systems... and Their Application to Data Stream Management. IGI Global, 2010. [68] George Reese. Database Programming with JDBC and Java, Second Edition. Ed. by

  20. Leveraging Relational Technology through Industry Partnerships.

    ERIC Educational Resources Information Center

    Brush, Leonard M.; Schaller, Anthony J.

    1988-01-01

    Carnegie Mellon University has leveraged its technological expertise with database management systems (DBMS) into joint technological and developmental partnerships with DBMS and application software vendors. Carnegie's relational database strategy, the strategy of partnerships and how they were formed, and how the partnerships are doing are…

  1. Using semantic data modeling techniques to organize an object-oriented database for extending the mass storage model

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Short, Nicholas M., Jr.; Roelofs, Larry H.; Dorfman, Erik

    1991-01-01

    A methodology for optimizing organization of data obtained by NASA earth and space missions is discussed. The methodology uses a concept based on semantic data modeling techniques implemented in a hierarchical storage model. The modeling is used to organize objects in mass storage devices, relational database systems, and object-oriented databases. The semantic data modeling at the metadata record level is examined, including the simulation of a knowledge base and semantic metadata storage issues. The semantic data model hierarchy and its application for efficient data storage is addressed, as is the mapping of the application structure to the mass storage.

  2. A Functional Framework for Database Management Systems.

    DTIC Science & Technology

    1980-02-01

    Functional Approach 13 7.2. Objects in a DBMS 14 7.2.1. External Objects 15 7.2.2. Conceptual Objects 15 7.2.3. Internal Objects 15 7.2.4. External... standpoint of their definitional and conceptual goals. 2. To make it possible to define and specify the needs as the first phase of the design process... methods. This aim is analogous to the one in which programming language technology has been captured and supported through the conceptual language

  3. ERMes: Open Source Simplicity for Your E-Resource Management

    ERIC Educational Resources Information Center

    Doering, William; Chilton, Galadriel

    2009-01-01

    ERMes, the latest version of electronic resource management system (ERM), is a relational database; content in different tables connects to, and works with, content in other tables. ERMes requires Access 2007 (Windows) or Access 2008 (Mac) to operate as the database utilizes functionality not available in previous versions of Microsoft Access. The…

  4. Intelligent data management

    NASA Technical Reports Server (NTRS)

    Campbell, William J.

    1985-01-01

    Intelligent data management is the concept of interfacing a user to a database management system with a value-added service that allows a full range of data management operations at a high level of abstraction using written human language. The development of such a system will be based on expert systems and related artificial intelligence technologies, and will allow the capture of procedural and relational knowledge about data management operations and the support of a user with such knowledge in an on-line, interactive manner. Such a system will have the following capabilities: (1) the ability to construct a model of the user's view of the database, based on the query syntax; (2) the ability to transform English queries and commands into database instructions and processes; (3) the ability to use heuristic knowledge to rapidly prune the data space in search processes; and (4) the ability to use an on-line explanation system to allow the user to understand what the system is doing and why it is doing it. Additional information is given in outline form.

  5. Military Personnel: DOD Has Processes for Operating and Managing Its Sexual Assault Incident Database

    DTIC Science & Technology

    2017-01-01

    each change and its implementation status, as well as supporting the audit of products to verify conformance to requirements. Through these change... management process for modifying DSAID aligns with information technology and project management industry standards. GAO reviewed DOD documents, and... Acknowledgments 32 Related GAO Products 33 Tables Table 1: Roles and Access Rights for Users of the Defense Sexual Assault Incident Database (DSAID)

  6. Commanding and Controlling Satellite Clusters (IEEE Intelligent Systems, November/December 2000)

    DTIC Science & Technology

    2000-01-01

    real-time operating system, a message-passing OS well suited for distributed... ground Flight processors ObjectAgent RTOS SCL RTOS RDMS Space command language Real-time operating system Relational database management system TS-21 RDMS... engineer with Princeton Satellite Systems. She is working with others to develop ObjectAgent software to run on the OSE Real-Time Operating System.

  7. Just in time: technology to disseminate curriculum and manage educational requirements with mobile technology.

    PubMed

    Ferenchick, Gary; Fetters, Moses; Carse, A Mervyn

    2008-01-01

    Learning objectives intended to guide clinical education may be of limited usefulness if they are unavailable to students when interacting with patients. We developed, implemented, and evaluated a Web-based process to disseminate the Clerkship Directors of Internal Medicine curricular objectives to students via handheld computers and for students to upload patient logs to a central database. We delivered this program to all students in our geographically dispersed system, with minimal technological problems. The total number of "hits" on curricular objectives was 8,932 (averaging 149 per student or approximately 2.7 times daily). The average number of "hits" per problem was 470, ranging from 18 for smoking cessation to 1,784 for chest pain. The total number of patient problems logged by students was 9,579, and 91% of students met our prespecified criteria for numbers and types of patients. Dissemination and use of curricular learning objectives and related tools is enhanced with mobile technology.

  8. Bio-optical data integration based on a 4 D database system approach

    NASA Astrophysics Data System (ADS)

    Imai, N. N.; Shimabukuro, M. H.; Carmo, A. F. C.; Alcantara, E. H.; Rodrigues, T. W. P.; Watanabe, F. S. Y.

    2015-04-01

    Bio-optical characterization of water bodies requires spatio-temporal data about inherent optical properties and apparent optical properties, which allow the comprehension of the underwater light field and support the development of models for monitoring water quality. Measurements are taken to represent optical properties along a column of water, so the spectral data must be related to depth. However, the spatial positions of measurement may differ because collecting instruments vary; moreover, the records may not refer to the same wavelengths. An additional difficulty is that distinct instruments store data in different formats. A data integration approach is needed to make these large, multi-source data sets suitable for analysis, so that semi-empirical model evaluation, preceded by preliminary quality-control tasks, becomes possible, even automatically. In this work, a solution for the stated scenario is presented, based on a spatial (geographic) database approach with the adoption of an object-relational Database Management System (DBMS), able to represent all data collected in the field together with data obtained by laboratory analysis and remote sensing images taken at the time of field data collection. This data integration approach leads to a 4D representation, since its coordinate system includes 3D spatial coordinates (planimetric and depth) and the time when each measurement was taken. The PostgreSQL DBMS, extended by the PostGIS module to manage spatial/geospatial data, was adopted. A prototype was developed that provides the main tools an analyst needs to prepare the data sets for analysis.
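
    A hedged sketch of the 4D representation described above (hypothetical table and column names; it assumes a reachable PostgreSQL server with the PostGIS extension and the psycopg2 package): each measurement stores a 3D position (longitude, latitude, depth) plus the acquisition time.

        import psycopg2  # assumes PostgreSQL + PostGIS are available

        conn = psycopg2.connect("dbname=biooptics user=analyst")  # hypothetical
        cur = conn.cursor()
        cur.execute("""
            CREATE TABLE IF NOT EXISTS optical_measurement (
                id            SERIAL PRIMARY KEY,
                instrument    TEXT NOT NULL,
                wavelength_nm REAL,
                value         REAL,
                taken_at      TIMESTAMPTZ NOT NULL,           -- 4th dimension
                position      GEOMETRY(POINTZ, 4326) NOT NULL -- lon, lat, depth
            )
        """)
        cur.execute(
            "INSERT INTO optical_measurement "
            "(instrument, wavelength_nm, value, taken_at, position) "
            "VALUES (%s, %s, %s, %s, ST_SetSRID(ST_MakePoint(%s, %s, %s), 4326))",
            ("radiometer-A", 443.0, 0.0123, "2014-05-10 10:32:00+00",
             -48.43, -22.48, 2.5),
        )
        conn.commit()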

  9. TRANSPORTATION RESEARCH IMPLEMENTATION MANAGEMENT : DEVELOPMENT OF PERFORMANCE BASED PROCESSES, METRICS, AND TOOLS

    DOT National Transportation Integrated Search

    2018-02-02

    The objective of this study is to develop an evidence-based research implementation database and tool to support research implementation at the Georgia Department of Transportation (GDOT). A review was conducted drawing from the (1) implementati...

  10. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database systems for storing, recovering, querying, and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of increasing complexity performed on them. Similar appropriate results available in the literature have also been considered. Both relational and NoSQL database systems show almost linear growth in query execution time; however, the slopes differ greatly, the relational slope being much steeper than the other two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases generally perform better than native XML NoSQL databases. EHR extract visualization and editing are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.
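
    The kind of measurement reported above can be sketched as follows (our illustration, not the study's benchmark code; SQLite stands in for the engines actually compared): the same query is timed against databases of increasing size to observe how response time grows.

        import sqlite3
        import time

        def timed_query(n_rows: int) -> float:
            # Build an in-memory table of synthetic "extracts", then time
            # one full-scan query against it.
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE extracts (id INTEGER PRIMARY KEY, xml TEXT)")
            conn.executemany(
                "INSERT INTO extracts VALUES (?, ?)",
                ((i, "<record id='%d'/>" % i) for i in range(n_rows)),
            )
            t0 = time.perf_counter()
            conn.execute("SELECT COUNT(*) FROM extracts WHERE xml LIKE '%7%'").fetchone()
            return time.perf_counter() - t0

        for size in (10_000, 100_000, 1_000_000):
            print(size, "rows:", round(timed_query(size), 4), "s")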

  11. Using Pattern Recognition and Discriminance Analysis to Predict Critical Events in Large Signal Databases

    NASA Astrophysics Data System (ADS)

    Feller, Jens; Feller, Sebastian; Mauersberg, Bernhard; Mergenthaler, Wolfgang

    2009-09-01

    Many applications in plant management require close monitoring of equipment performance, in particular with the objective to prevent certain critical events. At each point in time, the information available to classify the criticality of the process, is represented through the historic signal database as well as the actual measurement. This paper presents an approach to detect and predict critical events, based on pattern recognition and discriminance analysis.

  12. A Database-Based and Web-Based Meta-CASE System

    NASA Astrophysics Data System (ADS)

    Eessaar, Erki; Sgirka, Rünno

    Each Computer Aided Software Engineering (CASE) system provides support to a software process or to specific tasks or activities that are part of a software process. Each meta-CASE system allows us to create new CASE systems. The creators of a new CASE system have to specify the abstract syntax of the language that is used in the system, its functionality, and the non-functional properties of the new system. Many meta-CASE systems record their data directly in files. In this paper, we introduce a meta-CASE system whose enabling technology is an object-relational database management system (ORDBMS). The system allows users to manage specifications of languages and to create models by using these languages. The system has a web-based, form-based user interface. We have created a proof-of-concept prototype of the system by using the PostgreSQL ORDBMS and the PHP scripting language.

  13. A practice-based information system for multi-disciplinary care of chronically ill patients: what information do we need? The Community Care Coordination Network Database Group.

    PubMed Central

    Moran, W. P.; Messick, C.; Guerette, P.; Anderson, R.; Bradham, D.; Wofford, J. L.; Velez, R.

    1994-01-01

    Primary care physicians provide longitudinal care for chronically ill individuals in concert with many other community-based disciplines. The care management of these individuals requires data not traditionally collected during the care of well, or acutely ill individuals. These data not only concern the patient, in the form of patient functional status, mental status and affect, but also pertain to the caregiver, home environment, and the formal community health and social service system. The goal of the Community Care Coordination Network is to build a primary care-based information system to share patient data and communicate patient related information among the community-based multi-disciplinary teams. One objective of the Community Care Coordination Network is to create a Community Care Database for chronically ill individuals by identifying those data elements necessary for efficient multi-disciplinary care. PMID:7949995

  14. Richardson Instructional Management System (RIMS). How to Blend a Computerized Objectives-Referenced Testing System, Distributive Data Processing, and Systemwide Evaluation.

    ERIC Educational Resources Information Center

    Riegel, N. Blyth

    Recent changes in the structure of curriculum and the instructional system in Texas have required a major reorganization of teaching, evaluating, budgeting, and planning activities in the local education agencies, which has created the need for a database. The history of Richardson Instructional Management System (RIMS), its data processing…

  15. The North Central Forest Inventory and Analysis timber product output database--a regional composite approach.

    Treesearch

    Dennis M. May

    1998-01-01

    Discusses a regional composite approach to managing timber product output data in a relational database. Describes the development and structure of the regional composite database and demonstrates its use in addressing everyday timber product output information needs.

  16. The Implementation of Medical Informatics in the National Competence Based Catalogue of Learning Objectives for Undergraduate Medical Education (NKLM).

    PubMed

    Behrends, Marianne; Steffens, Sandra; Marschollek, Michael

    2017-01-01

    The National Competence Based Catalogue of Learning Objectives for Undergraduate Medical Education (NKLM) describes medical skills and attitudes without being ordered by subjects or organs. Thus, the NKLM enables systematic curriculum mapping and supports curricular transparency. In this paper we describe where learning objectives related to Medical Informatics (MI) in Hannover coincide with other subjects and where they are taught exclusively in MI. An instance of the web-based MERLIN database was used for the mapping process. In total, 52 learning objectives overlapping with 38 other subjects could be allocated to MI. No overlap exists for six learning objectives describing explicitly topics of information technology or data management for scientific research. Most of the overlap was found for learning objectives relating to documentation and aspects of data privacy. The identification of numerous shared learning objectives with other subjects does not mean that other subjects teach the same content as MI. Rather, identifying common learning objectives opens up the possibility of teaching collaborations, which could lead to an important exchange and, hopefully, an improvement in medical education. Mapping of a whole medical curriculum offers the opportunity to identify common ground between MI and other medical subjects. Furthermore, in regard to MI, the interaction with other medical subjects can strengthen its role in medical education.

  17. Using statistical process control to make data-based clinical decisions.

    PubMed

    Pfadt, A; Wheeler, D J

    1995-01-01

    Applied behavior analysis is based on an investigation of variability due to interrelationships among antecedents, behavior, and consequences. This permits testable hypotheses about the causes of behavior as well as for the course of treatment to be evaluated empirically. Such information provides corrective feedback for making data-based clinical decisions. This paper considers how a different approach to the analysis of variability based on the writings of Walter Shewhart and W. Edwards Deming in the area of industrial quality control helps to achieve similar objectives. Statistical process control (SPC) was developed to implement a process of continual product improvement while achieving compliance with production standards and other requirements for promoting customer satisfaction. SPC involves the use of simple statistical tools, such as histograms and control charts, as well as problem-solving techniques, such as flow charts, cause-and-effect diagrams, and Pareto charts, to implement Deming's management philosophy. These data-analytic procedures can be incorporated into a human service organization to help to achieve its stated objectives in a manner that leads to continuous improvement in the functioning of the clients who are its customers. Examples are provided to illustrate how SPC procedures can be used to analyze behavioral data. Issues related to the application of these tools for making data-based clinical decisions and for creating an organizational climate that promotes their routine use in applied settings are also considered.

  18. ARCADIA: a system for the integration of angiocardiographic data and images by an object-oriented DBMS.

    PubMed

    Pinciroli, F; Combi, C; Pozzi, G

    1995-02-01

    Use of database techniques to store medical records has been going on for more than 40 years. Some aspects still remain unresolved, e.g., the management of textual data and image data within a single system. Object-orientation techniques applied to a database management system (DBMS) allow the definition of suitable data structures (e.g., to store digital images); some facilities allow the use of predefined structures when defining new ones. Currently available object-oriented DBMSs, however, still need improvements in both schema update and query facilities. This paper describes a prototype of a medical record that includes some multimedia features, managing both textual and image data. The prototype considers data from the medical records of patients subjected to percutaneous transluminal coronary angioplasty. We developed it on a Sun workstation with the Unix operating system and ONTOS as an object-oriented DBMS.

  19. The Development of a Korean Drug Dosing Database

    PubMed Central

    Kim, Sun Ah; Kim, Jung Hoon; Jang, Yoo Jin; Jeon, Man Ho; Hwang, Joong Un; Jeong, Young Mi; Choi, Kyung Suk; Lee, Iyn Hyang; Jeon, Jin Ok; Lee, Eun Sook; Lee, Eun Kyung; Kim, Hong Bin; Chin, Ho Jun; Ha, Ji Hye; Kim, Young Hoon

    2011-01-01

    Objectives This report describes the development process of a drug dosing database for ethical drugs approved by the Korea Food & Drug Administration (KFDA). The goal of this study was to develop a computerized system that supports physicians' prescribing decisions, particularly in regards to medication dosing. Methods The advisory committee, comprised of doctors, pharmacists, and nurses from the Seoul National University Bundang Hospital, pharmacists familiar with drug databases, KFDA officials, and software developers from the BIT Computer Co. Ltd., analyzed approved KFDA drug dosing information, defined the fields and properties of the information structure, and designed a management program used to enter dosing information. The management program was developed as a web-based system that allows multiple researchers to input drug dosing information in an organized manner. The whole process was improved by adding additional input fields and eliminating unnecessary existing fields used when the dosing information was entered, resulting in an improved field structure. Results A total of 16,994 drugs sold in the Korean market in July 2009, excluding those meeting the exclusion criteria (e.g., radioactive drugs, X-ray contrast media), were entered into the database with their usage and dosing information. Conclusions The drug dosing database was successfully developed, and the dosing information for new drugs can be continually maintained through the management mode. This database will be used to develop drug utilization review standards and to provide appropriate dosing information. PMID:22259729

  20. Evidence generation from healthcare databases: recommendations for managing change.

    PubMed

    Bourke, Alison; Bate, Andrew; Sauer, Brian C; Brown, Jeffrey S; Hall, Gillian C

    2016-07-01

    There is an increasing reliance on databases of healthcare records for pharmacoepidemiology and other medical research, and such resources are often accessed over a long period of time, so it is vital to consider the impact of changes in data, access methodology, and the environment. The authors discuss change in communication and management, and provide a checklist of issues to consider for both database providers and users. The scope of the paper is database research, and changes are considered in relation to the three main components of database research: the data content itself, how it is accessed, and the support and tools needed to use the database. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Flight Deck Interval Management Display. [Elements, Information and Annunciations Database User Guide

    NASA Technical Reports Server (NTRS)

    Lancaster, Jeff; Dillard, Michael; Alves, Erin; Olofinboba, Olu

    2014-01-01

    The User Guide details the Access Database provided with the Flight Deck Interval Management (FIM) Display Elements, Information, & Annunciations program. The goal of this User Guide is to support ease of use and the ability to quickly retrieve and select items of interest from the Database. The Database includes FIM Concepts identified in a literature review preceding the publication of this document. Only items that are directly related to FIM (e.g., spacing indicators), which change or enable FIM (e.g., menu with control buttons), or which are affected by FIM (e.g., altitude reading) are included in the database. The guide has been expanded from previous versions to cover database structure, content, and search features with voiced explanations.

  2. Multiresource inventories incorporating GIS, GPS, and database management systems

    Treesearch

    Loukas G. Arvanitis; Balaji Ramachandran; Daniel P. Brackett; Hesham Abd-El Rasol; Xuesong Du

    2000-01-01

    Large-scale natural resource inventories generate enormous data sets. Their effective handling requires a sophisticated database management system. Such a system must be robust enough to efficiently store large amounts of data and flexible enough to allow users to manipulate a wide variety of information. In a pilot project, related to a multiresource inventory of the...

  3. Legacy2Drupal - Conversion of an existing oceanographic relational database to a semantically enabled Drupal content management system

    NASA Astrophysics Data System (ADS)

    Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.

    2009-12-01

    Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal6-powered front-end; 2) a standard SQL port (used to provide a Mapserver interface to the metadata and data); and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive semantically-enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public-domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically-enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.

  4. The relational database model and multiple multicenter clinical trials.

    PubMed

    Blumenstein, B A

    1989-12-01

    The Southwest Oncology Group (SWOG) chose to use a relational database management system (RDBMS) for the management of data from multiple clinical trials because of the underlying relational model's inherent flexibility and the natural way multiple entity types (patients, studies, and participants) can be accommodated. The tradeoffs of using the relational model as compared to the hierarchical model include added computing cycles due to deferred data linkages and added procedural complexity due to the necessity of implementing protections against referential integrity violations. The SWOG uses its RDBMS as a platform on which to build data operations software. This data operations software, which is written in a compiled computer language, allows multiple users to simultaneously update the database and is interactive with respect to the detection of conditions requiring action and the presentation of options for dealing with those conditions. The relational model facilitates the development and maintenance of data operations software.
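
    The referential-integrity protection mentioned above can be illustrated with a minimal, hypothetical Python sketch (SQLite stands in for the group's RDBMS; table and column names are ours): a patient row may not reference a study that does not exist.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")  # enforcement is opt-in in SQLite
        conn.execute("CREATE TABLE study (study_id TEXT PRIMARY KEY, title TEXT)")
        conn.execute("""
            CREATE TABLE patient (
                patient_id INTEGER PRIMARY KEY,
                study_id   TEXT NOT NULL REFERENCES study(study_id)
            )
        """)
        conn.execute("INSERT INTO study VALUES ('STUDY-001', 'Example trial')")
        conn.execute("INSERT INTO patient VALUES (1, 'STUDY-001')")  # accepted

        try:
            conn.execute("INSERT INTO patient VALUES (2, 'NO-SUCH-STUDY')")
        except sqlite3.IntegrityError as exc:
            print("rejected:", exc)  # FOREIGN KEY constraint failed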

  5. Automated Hierarchical to CODASYL (Conference on Data Systems Languages) Database Interface Schema Translator.

    DTIC Science & Technology

    1983-12-16

    management system (DBMS) is to record and maintain information used by an organization in the organization's decision-making process. Some advantages of a... independence. Database Management Systems are classified into three major models: relational, network, and hierarchical. Each model uses a software... feeling impedes the overall effectiveness of the Acquisition Management Information System (AMIS), which currently uses S2k. The size of the AMIS

  6. Documentation of a spatial data-base management system for monitoring pesticide application in Washington

    USGS Publications Warehouse

    Schurr, K.M.; Cox, S.E.

    1994-01-01

    The Pesticide-Application Data-Base Management System was created as a demonstration project and was tested with data submitted to the Washington State Department of Agriculture by pesticide applicators from a small geographic area. These data were entered into the Department's relational data-base system and uploaded into the system's ARC/INFO files. Locations for pesticide applications are assigned within the Public Land Survey System grids, and ARC/INFO programs in the Pesticide-Application Data-Base Management System can subdivide each survey section into sixteen idealized quarter-quarter sections for display map grids. The system provides data retrieval and geographic information system plotting capabilities from a menu of seven basic retrieval options. Additionally, ARC/INFO coverages can be created from the retrieved data when required for particular applications. The Pesticide-Application Data-Base Management System, or the general principles used in the system, could be adapted to other applications or to other states.

  7. A geospatial database model for the management of remote sensing datasets at multiple spectral, spatial, and temporal scales

    NASA Astrophysics Data System (ADS)

    Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George

    2017-10-01

    In this study, the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata are presented. To develop the multi-source geospatial database, we have used a Relational Database Management System (RDBMS) on a Structured Query Language (SQL) server, which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations between different types of information were linked (e.g., location, date, and instrument). Airborne data at different processing levels (digital numbers through geocorrected reflectance) were implemented in the geospatial database, where the datasets are linked spatially and temporally. An example dataset consisting of airborne hyperspectral imagery, collected for inter- and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas, is presented. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.

  8. Petition Requesting the Administrator Object to Operating Permits Issued to Consolidated Environmental Management, Inc./Nucor Steel Louisiana

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  9. Petition Requesting the Administrator Object to Operating Permit Issued to Consolidated Environmental Management, Inc./Nucor Steel

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  10. Order Responding to Three Petitions Requesting the EPA Object to Operating Permits for Consolidated Environmental Management, Inc. - Nucor Steel Louisiana

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  11. Petition to Object to the Proposed Operating Permit for Consolidated Environmental Management, Inc., Nucor Steel Facility in Romeville, Louisiana

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  12. Petition Requesting the Administrator Object to Title V Operating Permit for Consolidated Environmental Management, Inc. - Nucor Steel Louisiana

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  13. An ECG storage and retrieval system embedded in client server HIS utilizing object-oriented DB.

    PubMed

    Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S

    1996-02-01

    In the University of Tokyo Hospital, the improved client-server HIS has been applied to clinical practice: physicians can directly order prescriptions, laboratory examinations, ECG examinations, radiographic examinations, etc., and read the results of these examinations, except medical signal waves, schemata, and images, on UNIX workstations. Recently, we designed and developed an ECG storage and retrieval system embedded in the client-server HIS utilizing an object-oriented database, taking the first step in dealing with digitized signal, schema, and image data and showing waves, graphics, and images directly to physicians through the client-server HIS. The system was developed based on object-oriented analysis and design, and implemented with an object-oriented database management system (OODBMS) and the C++ programming language. In this paper, we describe the ECG data model, the functions of the storage and retrieval system, the features of the user interface, and the results of its implementation in the HIS.
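
    A hedged sketch of the object-oriented modelling approach described above (hypothetical Python classes, not the hospital's actual schema): an ECG record bundles waveform samples and ordering metadata into a single object, which an OODBMS could persist directly.

        from dataclasses import dataclass, field

        @dataclass
        class ECGLead:
            name: str                  # e.g. 'II' or 'V5'
            sampling_hz: int
            samples: list[float] = field(default_factory=list)

        @dataclass
        class ECGRecord:
            patient_id: str
            ordered_by: str
            taken_at: str              # ISO timestamp
            leads: list[ECGLead] = field(default_factory=list)

        record = ECGRecord("P-0042", "Dr. Wang", "1996-02-01T09:30:00",
                           [ECGLead("II", 500, [0.01, 0.02, 0.15])])
        print(record.leads[0].name, len(record.leads[0].samples))  # II 3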

  14. A Database of Systems Management Cases

    DTIC Science & Technology

    1990-09-01

    of weapons systems that effectively meet threats and national strategic objectives (28). Vanguard was the responsibility of HQ AFSC/XRP, the Long... control of the process located in XRP. These areas were strategic offense; strategic defense; tactical; and command, control, and communication (C3)

  15. Data model and relational database design for the New Jersey Water-Transfer Data System (NJWaTr)

    USGS Publications Warehouse

    Tessler, Steven

    2003-01-01

    The New Jersey Water-Transfer Data System (NJWaTr) is a database design for the storage and retrieval of water-use data. NJWaTr can manage data encompassing many facets of water use, including (1) the tracking of various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the storage of descriptions, classifications and locations of places and organizations involved in water-use activities; (3) the storage of details about measured or estimated volumes of water associated with water-use activities; and (4) the storage of information about data sources and water resources associated with water use. In NJWaTr, each water transfer occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NJWaTr model are site, conveyance, transfer/volume, location, and owner. Other important entities include water resource (used for withdrawals and returns), data source, permit, and alias. Multiple water-exchange estimates based on different methods or data sources can be stored for individual transfers. Storage of user-defined details is accommodated for several of the main entities. Many tables contain classification terms to facilitate the detailed description of data items and can be used for routine or custom data summarization. NJWaTr accommodates single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database. Data stored in the NJWaTr structure can be retrieved in user-defined combinations to serve visualization and analytical applications. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
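
    The core NJWaTr idea, each transfer moving water in one direction between two sites so that sites and conveyances form a directed network, can be sketched with a simplified, hypothetical schema (our column set, not the published design; SQLite used for self-containment):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")
        conn.executescript("""
            CREATE TABLE site (
                site_id INTEGER PRIMARY KEY,
                name    TEXT NOT NULL
            );
            CREATE TABLE conveyance (
                conveyance_id INTEGER PRIMARY KEY,
                from_site     INTEGER NOT NULL REFERENCES site(site_id),
                to_site       INTEGER NOT NULL REFERENCES site(site_id)
            );
            CREATE TABLE transfer (
                transfer_id   INTEGER PRIMARY KEY,
                conveyance_id INTEGER NOT NULL REFERENCES conveyance(conveyance_id),
                month         TEXT NOT NULL,   -- e.g. '2003-06'
                volume_mgd    REAL NOT NULL    -- million gallons per day
            );
        """)
        conn.execute("INSERT INTO site VALUES (1, 'Well field')")
        conn.execute("INSERT INTO site VALUES (2, 'Treatment plant')")
        conn.execute("INSERT INTO conveyance VALUES (10, 1, 2)")  # water flows 1 -> 2
        conn.execute("INSERT INTO transfer VALUES (100, 10, '2003-06', 1.75)")
        conn.commit()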

  16. Mass-storage management for distributed image/video archives

    NASA Astrophysics Data System (ADS)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both database structures and mass-storage management. This issue was addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog the image/video coding technique with its related parameters and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server: because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis; it manages file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to meet delivery/visualization requirements and to reduce archiving costs.

  17. Integrating heterogeneous databases in clustered medical care environments using object-oriented technology

    NASA Astrophysics Data System (ADS)

    Thakore, Arun K.; Sauer, Frank

    1994-05-01

    The organization of modern medical care environments into disease-related clusters, such as a cancer center, a diabetes clinic, etc., has the side-effect of introducing multiple heterogeneous databases, often containing similar information, within the same organization. This heterogeneity fosters incompatibility and prevents the effective sharing of data amongst applications at different sites. Although integration of heterogeneous databases is now feasible, in the medical arena this is often an ad hoc process, not founded on proven database technology or formal methods. In this paper we illustrate the use of a high-level object-oriented semantic association method to model information found in different databases into an integrated conceptual global model that integrates the databases. We provide examples from the medical domain to illustrate an integration approach resulting in a consistent global view, without attacking the autonomy of the underlying databases.

  18. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Barberis, D.

    2016-09-01

    The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, and run and event metadata. The rapid development of "NoSQL" databases (structured storage services) in the last five years has allowed an extended and complementary usage of traditional relational databases and new structured storage tools, in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by modern storage systems. The trend is towards using the best tool for each kind of data: separating, for example, the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of the data storage infrastructure. This paper describes this technology evolution in the ATLAS database infrastructure and presents a few examples of large database applications that benefit from it.

  19. Data management and language enhancement for generalized set theory computer language for operation of large relational databases

    NASA Technical Reports Server (NTRS)

    Finley, Gail T.

    1988-01-01

    This report covers the study of the relational database implementation in the NASCAD computer program system. The existing system is used primarily for computer aided design. Attention is also directed to a hidden-surface algorithm for final drawing output.

  20. An image database management system for conducting CAD research

    NASA Astrophysics Data System (ADS)

    Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.

    2007-03-01

    The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods in which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound, and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying, and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.

  1. Interactive, Automated Management of Icing Data

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.

    2009-01-01

    IceVal DatAssistant is software that provides an automated, interactive solution for the management of data from research on aircraft icing. This software consists primarily of (1) a relational database component used to store ice shape and airfoil coordinates and associated data on operational and environmental test conditions and (2) a graphically oriented database access utility, used to upload, download, process, and/or display data selected by the user. The relational database component consists of a Microsoft Access 2003 database file with nine tables containing data of different types. Included in the database are the data for all publicly releasable ice tracings with complete and verifiable test conditions from experiments conducted to date in the Glenn Research Center Icing Research Tunnel. Ice shapes from computational simulations with the corresponding conditions, performed utilizing the latest version of the LEWICE ice shape prediction code, are likewise included and are linked to the equivalent experimental runs. The database access component includes ten Microsoft Visual Basic 6.0 (VB) form modules and three VB support modules. Together, these modules enable uploading, downloading, processing, and display of all data contained in the database. This component also affords the capability to perform various database maintenance functions, for example compacting the database or creating a new, fully initialized but empty database file.

  2. Information technologies in public health management: a database on biocides to improve quality of life.

    PubMed

    Roman, C; Scripcariu, L; Diaconescu, Rm; Grigoriu, A

    2012-01-01

    Biocides for prolonging the shelf life of a large variety of materials have been used extensively over the last decades. Worldwide biocide consumption was estimated at about 12.4 billion dollars in 2011 and was expected to increase in 2012. As biocides are substances we come into contact with in our everyday lives, access to this type of information is of paramount importance in order to ensure an appropriate living environment. Consequently, a database where information may be quickly processed, sorted, and easily accessed according to different search criteria is the most desirable solution. The main aim of this work was to design and implement a relational database with complete information about biocides used in public health management to improve the quality of life. The method was the design and implementation of a relational database for biocides using the software "phpMyAdmin". The result is a database which allows for the efficient collection, storage, and management of information, including the chemical properties and applications of a large number of biocides, as well as its adequate dissemination into the public health environment. The information contained in the database presented herein promotes the adequate use of biocides by means of information technologies, which in consequence may help achieve important improvements in our quality of life.

  3. ADA Implementation Issues as Discovered through a Literature Survey of Applications Outside the United States

    DTIC Science & Technology

    1992-03-01

    compile time, ensuring that operations conducted are appropriate for the object type. Each implementation requires a database known as the program...Finnish bank being developed by Nokia • Oil drilling control system managed by Sedco-Forex • Vigile - an industrial installation supervisor project by...user interface and Oracle database backend control. The software is being developed in Ada under DOD-STD-2167 under OS/2. BELGIUM BATS S.A. Project title

  4. Linking Multiple Databases: Term Project Using "Sentences" DBMS.

    ERIC Educational Resources Information Center

    King, Ronald S.; Rainwater, Stephen B.

    This paper describes a methodology for use in teaching an introductory Database Management System (DBMS) course. Students master basic database concepts through the use of a multiple component project implemented in both relational and associative data models. The associative data model is a new approach for designing multi-user, Web-enabled…

  5. DataSpread: Unifying Databases and Spreadsheets.

    PubMed

    Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya

    2015-08-01

    Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, addressing of cells and tuples, and the current "pane" (which exists in spreadsheets but not in traditional databases), and support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of DataSpread, and will give attendees a sense of the enormous data exploration capabilities offered by unifying spreadsheets and databases.
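
    A toy sketch of the cell-tuple reconciliation problem mentioned above: each spreadsheet cell can be stored as a (sheet, row, column, value) tuple so that both SQL filters and A1-style lookups work against the same table. This is illustrative only and not DataSpread's actual schema.

        # Spreadsheet cells as relational tuples, queryable both ways.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE cells (sheet TEXT, row INT, col INT, value TEXT)")
        db.executemany("INSERT INTO cells VALUES (?,?,?,?)",
                       [("s1", 1, 1, "name"), ("s1", 1, 2, "score"),
                        ("s1", 2, 1, "ada"),  ("s1", 2, 2, "91")])

        def a1(addr):
            """Convert an A1-style address like 'B2' to (row, col); single letters only."""
            col = ord(addr[0].upper()) - ord("A") + 1
            return int(addr[1:]), col

        row, col = a1("B2")
        print(db.execute("SELECT value FROM cells WHERE sheet='s1' AND row=? AND col=?",
                         (row, col)).fetchone()[0])   # -> 91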

  7. Managing hydrological measurements for small and intermediate projects: RObsDat

    NASA Astrophysics Data System (ADS)

    Reusser, Dominik E.

    2014-05-01

    Hydrological measurements need good management for the data not to be lost. Multiple, often overlapping files from various loggers with heterogeneous formats need to be merged. Data need to be validated and cleaned and subsequently converted to the format required by the hydrological target application. Preferably, all these steps should be easily traceable. RObsDat is an R package designed to support such data management. It comes with a command-line user interface to help hydrologists enter and adjust their data in a database following the Observations Data Model (ODM) standard by CUAHSI. RObsDat helps in the setup of the database within one of the free database engines MySQL, PostgreSQL or SQLite. It imports the controlled water vocabulary from the CUAHSI web service and provides a smart interface between the hydrologist and the database: already existing data entries are detected and duplicates avoided. The data import function converts different data table designs to make import simple. Cleaning and modifications of data are handled with a simple version control system. Variable and location names are treated in a user-friendly way, accepting and processing multiple versions. A new development is the use of spacetime objects for subsequent processing.

  8. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    NASA Astrophysics Data System (ADS)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision highly depends on both the accuracy of the underlying database and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation can't be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.

  9. SPIRE Data-Base Management System

    NASA Technical Reports Server (NTRS)

    Fuechsel, C. F.

    1984-01-01

    Spacelab Payload Integration and Rocket Experiment (SPIRE) data-base management system (DBMS) based on relational model of data bases. Data bases typically used for engineering and mission analysis tasks and, unlike most commercially available systems, allow data items and data structures stored in forms suitable for direct analytical computation. SPIRE DBMS designed to support data requests from interactive users as well as applications programs.

  10. The relationship between doctors' and nurses' own weight status and their weight management practices: a systematic review.

    PubMed

    Zhu, D Q; Norman, I J; While, A E

    2011-06-01

    It has been established that health professionals' smoking and physical activity influence their related health-promoting behaviours, but it is unclear whether health professionals' weight status also influences their related professional practices. A systematic review was conducted to understand the relationship between personal weight status and weight management practices. Nine eligible studies were identified from a search of the Cochrane Library, MEDLINE, EMBASE, PsycINFO, CINAHL and Chinese databases. All included studies were cross-sectional surveys employing self-reported questionnaires. Weight management practice variables studied were classified under six practice indicators, developed from weight management guidelines. Syntheses of the findings from the selected studies suggest that normal-weight doctors and nurses were more likely than those who were overweight to use strategies to prevent obesity in patients, and also to provide overweight or obese patients with general advice to achieve weight loss. Doctors' and nurses' own weight status was not found to be significantly related to their referral and assessment of overweight or obese patients, and associations with their relevant knowledge/skills and specific treatment behaviours were inconsistent. Additionally, among female primary care providers, relevant knowledge and training, self-efficacy and a clear professional identity emerged as positive predictors of weight management practices. This review's findings will need to be confirmed by prospective, theoretically driven studies which employ objective measures of weight status and weight management practices and involve multivariate analyses to identify the relative contribution of weight status to weight management. © 2011 The Authors. Obesity Reviews © 2011 International Association for the Study of Obesity.

  11. The risky business of being an entomologist: A systematic review.

    PubMed

    Stanhope, Jessica; Carver, Scott; Weinstein, Philip

    2015-07-01

    Adverse work-related health outcomes are a significant problem worldwide. Entomologists, including arthropod breeders, are a unique occupational group exposed to potentially harmful arthropods, pesticides, and other more generic hazards. These exposures may place them at risk of a range of adverse work-related health outcomes. The review aimed to determine what adverse work-related health outcomes entomologists have experienced, the incidence/prevalence of these outcomes, what occupational management strategies have been employed by entomologists, and the effectiveness of those strategies. A systematic search of eight databases was undertaken to identify studies informing the review objectives. Data pertaining to country, year, design, work exposure, adverse work-related health outcomes, incidence/prevalence of these outcomes, and occupational management strategies were extracted and reported descriptively. Results showed that entomologists experienced work-related allergies, venom reactions, infections, infestations and delusional parasitosis. These related to exposure to insects, arachnids, chilopods and entognathans, and to non-arthropod exposures, e.g. arthropod feed. Few studies reported the incidence/prevalence of such conditions, or the work-related management strategies utilised by entomologists. No studies specifically investigated the effectiveness of potential management strategies for entomologists as a population. Indeed, critical appraisal indicated poor research quality in this area, which is a significant research gap. Entomologists are a diverse, unique occupational group at risk of a range of adverse work-related health outcomes. This study represents the first systematic review of their work-related health risks. Future studies investigating the prevalence of adverse work-related health outcomes for entomologists, and the effectiveness of management strategies, are warranted to decrease the disease burden of this otherwise understudied group. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Leveraging Cognitive Context for Object Recognition

    DTIC Science & Technology

    2014-06-01

    Context is most often viewed as a static concept, learned from large image databases. We build upon this concept by exploring cognitive context, demonstrating how rich dynamic context provided by...context that people rely upon as they perceive the world. Context in ACT-R/E takes the form of associations between related concepts that are learned...and accuracy of object recognition.

  13. [Bibliographic resources on chemical risk management and prevention].

    PubMed

    Calera Rubio, Alfonso A; Juan Quilis, Verónica; López Samaniego, Luz M; Caballero Pérez, Pablo; Ronda Pérez, Elena

    2005-01-01

    The documentation produced by public and private institutions in relation to chemical risk constitutes an essential tool for prevention. The objective of this research was to locate and review documents related to the management of chemical risk prevention aimed at small and medium-sized enterprises (PYMES) in Spain from 1995 to 2004. The bibliographical materials were selected by consulting automated databases and Web pages. 812 documents were identified, most of them grey literature. The most frequent theme was safety, and the most frequent objective of the papers was prevention. Most of the documents are addressed to the technical sector. The results suggest that, although there is a great diversity of documents in Spain dedicated to the prevention of chemical risk, it seems advisable: 1) to increase their diffusion, 2) to pay attention to the communication of risks, and 3) to investigate and translate research into good practice.

  14. National information network and database system of hazardous waste management in China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Hongchang

    1996-12-31

    Industries in China generate large volumes of hazardous waste, which makes it essential for the nation to pay more attention to hazardous waste management. National laws and regulations, waste surveys, and manifest tracking and permission systems have been initiated. Some centralized hazardous waste disposal facilities are under construction. China's National Environmental Protection Agency (NEPA) has also obtained valuable information on hazardous waste management from developed countries. To effectively share this information with local environmental protection bureaus, NEPA developed a national information network and database system for hazardous waste management. This information network will have such functions as information collection, inquiry, and connection. The long-term objective is to establish and develop a national and local hazardous waste management information network. This network will significantly help decision makers and researchers because it will be easy to obtain information (e.g., experiences of developed countries in hazardous waste management) to enhance hazardous waste management in China. The information network consists of five parts: technology consulting, import-export management, regulation inquiry, waste survey, and literature inquiry.

  15. The methodology of database design in organization management systems

    NASA Astrophysics Data System (ADS)

    Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.

    2017-01-01

    The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to design, the conceptual information model and the main principles of developing relational databases are provided, and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes how the results of analyzing users' information needs are applied and gives the rationale for the use of classifiers.
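
    As a small, hypothetical illustration of the design flow described above, the sketch below turns a fragment of a conceptual model (an entity, a relationship, and a classifier of controlled values) into normalized relational tables and answers a user's information need with a join; all names are invented.

        # Conceptual model fragment mapped to normalized relational tables.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE dept_classifier (dept_code TEXT PRIMARY KEY, dept_name TEXT);
        CREATE TABLE employee (
            emp_id    INTEGER PRIMARY KEY,
            name      TEXT NOT NULL,
            dept_code TEXT NOT NULL REFERENCES dept_classifier(dept_code));
        INSERT INTO dept_classifier VALUES ('HR', 'Human Resources'), ('IT', 'Informatics');
        INSERT INTO employee VALUES (1, 'Ivanova', 'IT');
        """)
        # A user information need ("who works in informatics?") becomes a join:
        print(db.execute("""SELECT e.name FROM employee e
                            JOIN dept_classifier d USING (dept_code)
                            WHERE d.dept_name = 'Informatics'""").fetchall())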

  16. Construction of a Linux based chemical and biological information system.

    PubMed

    Molnár, László; Vágó, István; Fehér, András

    2003-01-01

    A chemical and biological information system with a Web-based easy-to-use interface and corresponding databases has been developed. The constructed system incorporates all chemical, numerical and textual data related to the chemical compounds, including numerical biological screen results. Users can search the database by traditional textual/numerical and/or substructure or similarity queries through the web interface. To build our chemical database management system, we utilized existing IT components such as ORACLE or Tripos SYBYL for database management and Zope application server for the web interface. We chose Linux as the main platform, however, almost every component can be used under various operating systems.

  17. A database paradigm for the management of DICOM-RT structure sets using a geographic information system

    NASA Astrophysics Data System (ADS)

    Shao, Weber; Kupelian, Patrick A.; Wang, Jason; Low, Daniel A.; Ruan, Dan

    2014-03-01

    We devise a paradigm for representing the DICOM-RT structure sets in a database management system in such a way that secondary calculations of geometric information can be performed quickly from the existing contour definitions. The implementation of this paradigm is achieved using the PostgreSQL database system and the PostGIS extension, a geographic information system commonly used for encoding geographical map data. The proposed paradigm eliminates the overhead of retrieving large data records from the database, as well as the need to implement various numerical and data parsing routines, when additional information related to the geometry of the anatomy is desired.
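
    A hedged sketch of this paradigm: a contour stored as a PostGIS polygon whose area is computed in-database. It assumes a reachable PostgreSQL instance with the PostGIS extension installed; the connection parameters and table layout are hypothetical, not the authors' actual schema.

        # Store an RT contour as a geometry and let PostGIS do the geometry math.
        import psycopg2

        conn = psycopg2.connect("dbname=rtdb user=rt")   # assumption: local PostGIS db
        cur = conn.cursor()
        cur.execute("CREATE TABLE IF NOT EXISTS contour "
                    "(roi TEXT, slice_z REAL, geom geometry(Polygon))")
        cur.execute("INSERT INTO contour VALUES (%s, %s, ST_GeomFromText(%s))",
                    ("PTV", 12.5, "POLYGON((0 0, 40 0, 40 30, 0 30, 0 0))"))
        # Secondary geometric calculation done in-database, no DICOM-RT re-parsing:
        cur.execute("SELECT roi, ST_Area(geom) FROM contour")
        print(cur.fetchall())                            # -> [('PTV', 1200.0)]
        conn.commit()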

  18. Information system of mineral deposits in Slovenia

    NASA Astrophysics Data System (ADS)

    Hribernik, K.; Rokavec, D.; Šinigioj, J.; Šolar, S.

    2010-03-01

    At the Geological Survey of Slovenia the need for a complex overview and control of the deposits of available non-metallic mineral raw materials and of their exploitation became urgent. In the framework of the Geologic Information System we established the Database of non-metallic mineral deposits, comprising all important data on deposits and concessionaires. The relational database was built with the program package MS Access; in 2008 we plan to transfer it to SQL Server. The register contains 272 deposits and 200 concessionaires. The mineral resources information system of Slovenia, started back in 2002, consists of two integrated parts: the aforementioned relational database of mineral deposits, which relates information in tabular form so that the rules of relational algebra can be applied, and a geographic information system (GIS), which relates the spatial information of the deposits. The complex relationships between objects and the concepts of normalized data structures lead to a practical, informative and useful data model, transparent to the user, and to better decision-making by allowing future scenarios to be developed and inspected. The computerized storage and display system is, as already said, developed and managed under the support of the Geological Survey of Slovenia, which conducts research on the occurrence, quality, quantity, and availability of mineral resources in order to help the nation make informed decisions using earth-science information. Information about each deposit is stored in a record of approximately one hundred data fields. A numeric record number uniquely identifies each site. The data fields are grouped under principal categories. Each record comprises elementary data on the deposit (name, type, location, prospect, rock), administrative data (concessionaire, number of the decree in the official gazette, object of the decree, number of the contract and its duration) and data on the mineral resource (produced amount and size of the exploration area). The data can also be searched, sorted and printed using any of these fields. New records are added annually, and existing records are updated or upgraded. The relational database is connected with scanned exploration/exploitation areas of deposits, defined on the basis of digital orthophotos. A register of those areas is indispensable for spatial planning and for municipal and regional spatial strategy development. The database is also part of an internet application for quick search and review of the data, and part of the web page of mineral resources of Slovenia. The technology chosen for the internet application is ESRI's ArcIMS Internet Map Server. ArcIMS allows users to readily display, analyze, and interpret spatial data from the desktop using a Web browser connected to the Internet. We believe that there is an opportunity for cooperation within this activity: we can offer a single location where users can browse relatively simply for geoscience-related digital data sets.

  19. Web-based Electronic Sharing and RE-allocation of Assets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leverett, Dave; Miller, Robert A.; Berlin, Gary J.

    2002-09-09

    The Electronic Asset Sharing Program is a web-based application that provides the capability for complex-wide sharing and reallocation of assets that are excess, under-utilized, or un-utilized. Through a web-based front-end and a supporting hash database with a search engine, users can search for assets that they need, search for assets needed by others, enter assets they need, and enter assets they have available for reallocation. In addition, entire listings of available assets and needed assets can be viewed. The application is written in Java; the hash database and search engine are in Object-oriented Java Database Management (OJDBM). The application will be hosted on an SRS-managed server outside the firewall and access will be controlled via a protected realm. An example of the application can be viewed at the following (temporary) URL: http://idgdev.srs.gov/servlet/srs.weshare.WeShare

  20. Starbase Data Tables: An ASCII Relational Database for Unix

    NASA Astrophysics Data System (ADS)

    Roll, John

    2011-11-01

    Database management is an increasingly important part of astronomical data analysis. Astronomers need easy and convenient ways of storing, editing, filtering, and retrieving data about data. Commercial databases do not provide good solutions for many of the everyday and informal types of database access astronomers need. The Starbase database system with simple data file formatting rules and command line data operators has been created to answer this need. The system includes a complete set of relational and set operators, fast search/index and sorting operators, and many formatting and I/O operators. Special features are included to enhance the usefulness of the database when manipulating astronomical data. The software runs under UNIX, MSDOS and IRAF.
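
    The flavor of such command-line relational filtering can be approximated in a few lines of Python, assuming a simple tab-separated table with a header row (the real Starbase file format and operator set may differ, and the column name here is hypothetical):

        # Row-selection filter over a tab-separated table read from stdin.
        import csv, sys

        def select_rows(stream, column, minimum):
            """Keep rows whose named numeric column exceeds a threshold."""
            reader = csv.DictReader(stream, delimiter="\t")
            return [row for row in reader if float(row[column]) > minimum]

        if __name__ == "__main__":
            for row in select_rows(sys.stdin, column="mag", minimum=10.0):
                print(row)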

  1. A geodata warehouse: Using denormalisation techniques as a tool for delivering spatially enabled integrated geological information to geologists

    NASA Astrophysics Data System (ADS)

    Kingdon, Andrew; Nayembil, Martin L.; Richardson, Anne E.; Smith, A. Graham

    2016-11-01

    New requirements to understand geological properties in three dimensions have led to the development of PropBase, a data structure and delivery tools to meet this need. At the BGS, relational database management systems (RDBMS) have facilitated effective data management using normalised, subject-based database designs with business rules in a centralised, vocabulary-controlled architecture. These have delivered effective data storage in a secure environment. However, isolated subject-oriented designs prevented efficient cross-domain querying of datasets. Additionally, the tools provided often did not enable effective data discovery, as they struggled to resolve the complex underlying normalised structures, resulting in poor data access speeds. Users developed bespoke access tools to structures they did not fully understand, sometimes obtaining incorrect results. Therefore, BGS has developed PropBase, a generic denormalised data structure within an RDBMS to store property data, to facilitate rapid and standardised data discovery and access, incorporating 2D and 3D physical and chemical property data with associated metadata. This includes scripts to populate and synchronise the layer with its data sources through structured input and transcription standards. A core component of the architecture is an optimised query object to deliver geoscience information from a structure equivalent to a data warehouse. This enables optimised query performance, delivering data in multiple standardised formats through a web discovery tool. Semantic interoperability is enforced through vocabularies combined from all data sources, facilitating searching of related terms. PropBase holds 28.1 million spatially enabled property data points from 10 source databases, incorporating over 50 property data types, with a vocabulary set that includes 557 property terms. By enabling property data searches across multiple databases, PropBase has facilitated new scientific research previously considered impractical. PropBase is easily extended to incorporate 4D data (time series) and is providing a baseline for new "big data" monitoring projects.
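
    The denormalised "property point" idea can be sketched as a single wide table keyed by location and a controlled-vocabulary property term, so that one query spans data that previously lived in separate subject databases. The names below are invented and are not BGS's actual PropBase schema.

        # One wide, query-optimised table replacing per-source bespoke access tools.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE propbase_point (
            src TEXT, x REAL, y REAL, z REAL,
            property TEXT,      -- controlled vocabulary term
            value REAL, units TEXT)""")
        db.executemany("INSERT INTO propbase_point VALUES (?,?,?,?,?,?,?)", [
            ("borehole_db", 451200.0, 208300.0, -12.0, "porosity", 0.21, "fraction"),
            ("geochem_db",  451210.0, 208290.0,  -3.5, "CaO",      5.4,  "wt%"),
        ])
        # A single cross-domain spatial query over both source datasets:
        print(db.execute("""SELECT src, property, value FROM propbase_point
                            WHERE x BETWEEN 451000 AND 452000""").fetchall())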

  2. MicroUse: The Database on Microcomputer Applications in Libraries and Information Centers.

    ERIC Educational Resources Information Center

    Chen, Ching-chih; Wang, Xiaochu

    1984-01-01

    Describes MicroUse, a microcomputer-based database on microcomputer applications in libraries and information centers which was developed using relational database manager dBASE II. The description includes its system configuration, software utilized, the in-house-developed dBASE programs, multifile structure, basic functions, MicroUse records,…

  3. Report on Approaches to Database Translation. Final Report.

    ERIC Educational Resources Information Center

    Gallagher, Leonard; Salazar, Sandra

    This report describes approaches to database translation (i.e., transferring data and data definitions from a source, either a database management system (DBMS) or a batch file, to a target DBMS), and recommends a method for representing the data structures of newly-proposed network and relational data models in a form suitable for database…

  4. The Database Business: Managing Today--Planning for Tomorrow. Issues and Futures.

    ERIC Educational Resources Information Center

    Aitchison, T. M.; And Others

    1988-01-01

    Current issues and the future of the database business are discussed in five papers. Topics covered include aspects relating to the quality of database production; international ownership in the U.S. information marketplace; an overview of pricing strategies in the electronic information industry; and pricing issues from the viewpoints of online…

  5. Using SQL Databases for Sequence Similarity Searching and Analysis.

    PubMed

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. Copyright © 2017 John Wiley & Sons, Inc.
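
    A minimal sketch of the unit's central idea, with an invented table layout (the real seqdb_demo/search_demo schemas differ): load similarity-search hits into a relational table, then summarise homology relationships with SQL.

        # Similarity-search hits in a relational table, summarised with GROUP BY.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE hit (query TEXT, subject TEXT, taxon TEXT,
                                        evalue REAL, bits REAL)""")
        db.executemany("INSERT INTO hit VALUES (?,?,?,?,?)", [
            ("yqjA", "P0AA63", "E. coli",    1e-80, 250.0),
            ("yqjA", "Q9XYZ1", "H. sapiens", 2e-9,   58.0),
            ("dnaK", "P0A6Y8", "E. coli",    0.0,   900.0),
        ])
        # Best hit per query per taxon: a typical genome-scale homology summary.
        print(db.execute("""SELECT query, taxon, MIN(evalue) FROM hit
                            GROUP BY query, taxon""").fetchall())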

  6. Data Rods: High Speed, Time-Series Analysis of Massive Cryospheric Data Sets Using Object-Oriented Database Methods

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Gallaher, D. W.; Grant, G.; Lv, Q.

    2011-12-01

    Change over time is the central driver of climate change detection. The goal is to diagnose the underlying causes and make projections into the future. In an effort to optimize this process we have developed the Data Rod model, an object-oriented approach that provides the ability to query grid cell changes and their relationships to neighboring grid cells through time. The time series data is organized in time-centric structures called "data rods." A single data rod can be pictured as the multi-spectral data history at one grid cell: a vertical column of data through time. This resolves the long-standing problem of managing time-series data and opens new possibilities for temporal data analysis. This structure enables rapid time-centric analysis at any grid cell across multiple sensors and satellite platforms. Collections of data rods can be spatially and temporally filtered, statistically analyzed, and aggregated for use with pattern matching algorithms. Likewise, individual image pixels can be extracted to generate multi-spectral imagery at any spatial and temporal location. The Data Rods project has created a series of prototype databases to store and analyze massive datasets containing multi-modality remote sensing data. Using object-oriented technology, this method overcomes the operational limitations of traditional relational databases. To demonstrate the speed and efficiency of time-centric analysis using the Data Rods model, we have developed a sea ice detection algorithm. This application determines the concentration of sea ice in a small spatial region across a long temporal window. If performed using traditional analytical techniques, this task would typically require extensive data downloads and spatial filtering. Using Data Rods databases, the exact spatio-temporal data set is immediately available; no extraneous data is downloaded, and all selected data querying occurs transparently on the server side. Moreover, fundamental statistical calculations such as running averages are easily implemented against the time-centric columns of data.
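
    A toy, in-memory sketch of the data-rod layout described above: values are appended per grid cell as images arrive, so a time-series query touches a single rod instead of scanning whole images. The structure and names are illustrative only.

        # Per-grid-cell "rods" of (time, value) pairs, built as images arrive.
        from collections import defaultdict

        rods = defaultdict(list)                 # (row, col) -> [(t, value), ...]

        def ingest(t, image):                    # image: {(row, col): value}
            for cell, value in image.items():
                rods[cell].append((t, value))

        ingest(1, {(0, 0): 0.91, (0, 1): 0.15})
        ingest(2, {(0, 0): 0.88, (0, 1): 0.22})
        ingest(3, {(0, 0): 0.95, (0, 1): 0.18})

        # Time-series analysis at one grid cell, e.g. a running mean of concentration:
        series = [v for _, v in rods[(0, 0)]]
        running = [sum(series[:i + 1]) / (i + 1) for i in range(len(series))]
        print(running)                           # -> [0.91, 0.895, ~0.913]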

  7. McMaster Optimal Aging Portal: an evidence-based database for geriatrics-focused health professionals.

    PubMed

    Barbara, Angela M; Dobbins, Maureen; Brian Haynes, R; Iorio, Alfonso; Lavis, John N; Raina, Parminder; Levinson, Anthony J

    2017-07-11

    The objective of this work was to provide easy access to reliable health information based on good quality research that will help health care professionals to learn what works best for seniors to stay as healthy as possible, manage health conditions and build supportive health systems. This will help meet the demands of our aging population that clinicians provide high quality care for older adults, that public health professionals deliver disease prevention and health promotion strategies across the life span, and that policymakers address the economic and social need to create a robust health system and a healthy society for all ages. The McMaster Optimal Aging Portal's (Portal) professional bibliographic database contains high quality scientific evidence about optimal aging specifically targeted to clinicians, public health professionals and policymakers. The database content comes from three information services: McMaster Premium LiteratUre Service (MacPLUS™), Health Evidence™ and Health Systems Evidence. The Portal is continually updated, freely accessible online, easily searchable, and provides email-based alerts when new records are added. The database is being continually assessed for value, usability and use. A number of improvements are planned, including French language translation of content, increased linkages between related records within the Portal database, and inclusion of additional types of content. While this article focuses on the professional database, the Portal also houses resources for patients, caregivers and the general public, which may also be of interest to geriatric practitioners and researchers.

  8. Spatial Databases

    DTIC Science & Technology

    2007-09-19

    extended object relations such as boundary, interior, open, closed, within, connected, and overlaps, which are invariant under elastic deformation...is required in a geo-spatial semantic web is challenging because the defining properties of geographic entities are very closely related to space. In...Objects under Primitive will be open (i.e., they will not contain their boundary points) and the objects under Complex will be closed. In addition to

  9. The Role of IMAT Solutions for Training Development at the Royal Netherlands Air Force. IMAT Follow-up Research Part 1

    DTIC Science & Technology

    2005-09-01

    e.g. the transformation of a fragment to an instructional fragment. * IMAT Database: A Jasmine® database is used as the central database in IMAT for the...storage of fragments. This is an object-oriented relational database. Jasmine® was, amongst other factors, chosen for its ability to handle multimedia...to the Jasmine® database, which is used in IMAT as the central database. 3.1.1.1 Ontologies In IMAT, the proposed solution on problems with information

  10. WittyFit—Live Your Work Differently: Study Protocol for a Workplace-Delivered Health Promotion

    PubMed Central

    Duclos, Martine; Naughton, Geraldine; Dewavrin, Samuel; Cornet, Thomas; Huguet, Pascal; Chatard, Jean-Claude; Pereira, Bruno

    2017-01-01

    Background Morbidity before retirement has a huge cost, burdening both public health and workplace finances. Multiple factors increase morbidity such as stress at work, sedentary behavior or low physical activity, and poor nutrition practices. Nowadays, the digital world offers infinite opportunities to interact with workers. The WittyFit software was designed to understand holistic issues of workers by promoting individualized behavior changes at the workplace. Objective The shorter term feasibility objective is to demonstrate that effective use of WittyFit will increase well-being and improve health-related behaviors. The mid-term objective is to demonstrate that WittyFit improves economic data of the companies such as productivity and benefits. The ultimate objective is to increase life expectancy of workers. Methods This is an exploratory interventional cohort study in an ecological situation. Three groups of participants will be purposefully sampled: employees, middle managers, and executive managers. Four levels of engagement are planned for employees: commencing with baseline health profiling from validated questionnaires; individualized feedback based on evidence-based medicine; support for behavioral change; and formal evaluation of changes in knowledge, practices, and health outcomes over time. Middle managers will also receive anonymous feedback on problems encountered by employees, and executive top managers will have indicators by division, location, department, age, seniority, gender and occupational position. Managers will be able to introduce specific initiatives in the workplace. WittyFit is based on two databases: behavioral data (WittyFit) and medical data (WittyFit Research). Statistical analyses will incorporate morbidity and well-being data. When a worker leaves a workplace, the company documents one of three major explanations: retirement, relocation to another company, or premature death. Therefore, WittyFit will have the ability to include mortality as an outcome. WittyFit will evolve with the waves of connected objects further increasing its data accuracy. Ethical approval was obtained from the ethics committee of the University Hospital of Clermont-Ferrand, France. Results WittyFit recruitment and enrollment started in January 2016. First publications are expected to be available at the beginning of 2017. Conclusions The name WittyFit came from Witty and Fitness. The concept of WittyFit reflects the concept of health from the World Health Organization: being spiritually and physically healthy. WittyFit is a health-monitoring, health-promoting tool that may improve the health of workers and health of companies. WittyFit will evolve with the waves of connected objects further increasing its data accuracy with objective measures. WittyFit may constitute a powerful epidemiological database. Finally, the WittyFit concept may extend healthy living into the general population. Trial Registration Clinicaltrials.gov: NCT02596737; https://www.clinicaltrials.gov/ct2/show/NCT02596737 (Archived by WebCite at http://www.webcitation.org/6pM5toQ7Y) PMID:28408363

  11. SIOExplorer: Modern IT Methods and Tools for Digital Library Management

    NASA Astrophysics Data System (ADS)

    Sutton, D. W.; Helly, J.; Miller, S.; Chase, A.; Clarck, D.

    2003-12-01

    With more geoscience disciplines becoming data-driven, it is increasingly important to utilize modern techniques for data, information and knowledge management. SIOExplorer is a new digital library project with 2 terabytes of oceanographic data collected over the last 50 years on 700 cruises by the Scripps Institution of Oceanography. It is built using a suite of information technology tools and methods that allow for an efficient and effective digital library management system. The library consists of a number of independent collections, each with corresponding metadata formats. The system architecture allows each collection to be built and uploaded based on a collection-dependent metadata template file (MTF). This file is used to create the hierarchical structure of the collection, create metadata tables in a relational database, and populate object metadata files and the collection as a whole. Collections are comprised of arbitrary digital objects stored at the San Diego Supercomputer Center (SDSC) High Performance Storage System (HPSS) and managed using the Storage Resource Broker (SRB), data-handling middleware developed at SDSC. SIOExplorer interoperates with other collections as a data provider through the Open Archives Initiative (OAI) protocol. The user services for SIOExplorer are accessed from CruiseViewer, a Java application served using Java Web Start from the SIOExplorer home page. CruiseViewer is an advanced tool for data discovery and access. It implements general keyword and interactive geospatial search methods for the collections, georeferencing search results on user-selected basemaps such as global topography or crustal age. User services include metadata viewing, opening of selected MIME-type digital objects (such as images, documents and grid files), and downloading of objects (including the brokering of proprietary hold restrictions).

  12. TEJAS - TELEROBOTICS/EVA JOINT ANALYSIS SYSTEM VERSION 1.0

    NASA Technical Reports Server (NTRS)

    Drews, M. L.

    1994-01-01

    The primary objective of space telerobotics as a research discipline is the augmentation and/or support of extravehicular activity (EVA) with telerobotic activity; this allows increased emplacement of on-orbit assets while providing for their "in situ" management. Development of the requisite telerobot work system requires a well-understood correspondence between EVA and telerobotics that to date has been only partially established. The Telerobotics/EVA Joint Analysis Systems (TEJAS) hypermedia information system uses object-oriented programming to bridge the gap between crew-EVA and telerobotics activities. TEJAS Version 1.0 contains twenty HyperCard stacks that use a visual, customizable interface of icon buttons, pop-up menus, and relational commands to store, link, and standardize related information about the primitives, technologies, tasks, assumptions, and open issues involved in space telerobot or crew EVA tasks. These stacks are meant to be interactive and can be used with any database system running on a Macintosh, including spreadsheets, relational databases, word-processed documents, and hypermedia utilities. The software provides a means for managing volumes of data and for communicating complex ideas, relationships, and processes inherent to task planning. The stack system contains 3MB of data and utilities to aid referencing, discussion, communication, and analysis within the EVA and telerobotics communities. The six baseline analysis stacks (EVATasks, EVAAssume, EVAIssues, TeleTasks, TeleAssume, and TeleIssues) work interactively to manage and relate basic information which you enter about the crew-EVA and telerobot tasks you wish to analyze in depth. Analysis stacks draw on information in the Reference stacks as part of a rapid point-and-click utility for building scripts of specific task primitives or for any EVA or telerobotics task. Any or all of these stacks can be completely incorporated within other hypermedia applications, or they can be referenced as is, without requiring data to be transferred into any other database. TEJAS is simple to use and requires no formal training. Some knowledge of HyperCard is helpful, but not essential. All Help cards printed in the TEJAS User's Guide are part of the TEJAS Help Stack and are available from a pop-up menu any time you are using TEJAS. Specific stacks created in TEJAS can be exchanged between groups, divisions, companies, or centers for complete communication of fundamental information that forms the basis for further analyses. TEJAS runs on any Apple Macintosh personal computer with at least one megabyte of RAM, a hard disk, and HyperCard 1.21, or later version. TEJAS is a copyrighted work with all copyright vested in NASA. HyperCard and Macintosh are registered trademarks of Apple Computer, Inc.

  13. The Binding Database: data management and interface design.

    PubMed

    Chen, Xi; Lin, Yuhmei; Liu, Ming; Gilson, Michael K

    2002-01-01

    The large and growing body of experimental data on biomolecular binding is of enormous value in developing a deeper understanding of molecular biology, in developing new therapeutics, and in various molecular design applications. However, most of these data are found only in the published literature and are therefore difficult to access and use. No existing public database has focused on measured binding affinities while providing query capabilities that include chemical structure and sequence homology searches. We have created Binding DataBase (BindingDB), a public, web-accessible database of measured binding affinities. BindingDB is based upon a relational data specification for describing binding measurements via Isothermal Titration Calorimetry (ITC) and enzyme inhibition. A corresponding XML Document Type Definition (DTD) is used to create and parse intermediate files during the on-line deposition process and will also be used for data interchange, including collection of data from other sources. The on-line query interface, which is constructed with Java Servlet technology, supports standard SQL queries as well as searches for molecules by chemical structure and sequence homology. The on-line deposition interface uses Java Server Pages and JavaBean objects to generate dynamic HTML and to store intermediate results. The resulting data resource provides a range of functionality with brisk response times, and lends itself well to continued development and enhancement.
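
    A minimal sketch, with invented table and column names, of a relational specification for ITC-style binding measurements of the kind BindingDB describes; the real schema is considerably richer.

        # Proteins, ligands, and ITC measurements as related tables.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE protein  (prot_id INTEGER PRIMARY KEY, name TEXT, sequence TEXT);
        CREATE TABLE ligand   (lig_id  INTEGER PRIMARY KEY, name TEXT, smiles TEXT);
        CREATE TABLE itc_meas (
            meas_id INTEGER PRIMARY KEY,
            prot_id INTEGER REFERENCES protein(prot_id),
            lig_id  INTEGER REFERENCES ligand(lig_id),
            kd_nM REAL, dH_kcal REAL, temp_K REAL);
        INSERT INTO protein VALUES (1, 'carbonic anhydrase II', 'MSHHWGYGK...');
        INSERT INTO ligand  VALUES (1, 'acetazolamide', 'CC(=O)Nc1nnc(s1)S(N)(=O)=O');
        INSERT INTO itc_meas VALUES (1, 1, 1, 12.0, -10.2, 298.0);
        """)
        print(db.execute("""SELECT p.name, l.name, m.kd_nM FROM itc_meas m
                            JOIN protein p USING (prot_id)
                            JOIN ligand  l USING (lig_id)""").fetchall())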

  14. Information integration for a sky survey by data warehousing

    NASA Astrophysics Data System (ADS)

    Luo, A.; Zhang, Y.; Zhao, Y.

    The virtualization service of the data system for the sky survey LAMOST is very important for astronomers. The service needs to integrate information from data collections, catalogs and references, and to support simple federation of a set of distributed files and associated metadata. Data warehousing has existed for several years and has demonstrated superiority over traditional relational database management systems by providing novel indexing schemes that support efficient on-line analytical processing (OLAP) of large databases. Relational database systems such as Oracle now support the warehouse capability, including extensions to the SQL language to support OLAP operations, and a number of metadata management tools have been created. The information integration of LAMOST by applying data warehousing aims to effectively provide data and knowledge on-line.
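
    A simple illustration of the OLAP-style roll-up such a warehouse supports, using an invented star-schema fragment; real warehouses add specialised indexing and SQL OLAP extensions (e.g. CUBE/ROLLUP) on top of plain GROUP BY.

        # Roll-up: counts and mean magnitude per (survey field, object class).
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE obs_fact (field TEXT, obj_class TEXT, mag REAL)")
        db.executemany("INSERT INTO obs_fact VALUES (?,?,?)", [
            ("F101", "star",   14.2), ("F101", "star", 15.1),
            ("F101", "galaxy", 17.8), ("F102", "star", 13.9),
        ])
        for row in db.execute("""SELECT field, obj_class, COUNT(*), ROUND(AVG(mag), 2)
                                 FROM obs_fact GROUP BY field, obj_class"""):
            print(row)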

  15. Clinical Databases for Chest Physicians.

    PubMed

    Courtwright, Andrew M; Gabriel, Peter E

    2018-04-01

    A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health conditions or exposures. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture (REDCap). We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  16. A UML Profile for Developing Databases that Conform to the Third Manifesto

    NASA Astrophysics Data System (ADS)

    Eessaar, Erki

    The Third Manifesto (TTM) presents the principles of a relational database language that is free of the deficiencies and ambiguities of SQL. There are database management systems that are created according to TTM. Developers need tools that support the development of databases using these database management systems. UML is a widely used visual modeling language. It provides a built-in extension mechanism that makes it possible to extend UML by creating profiles. In this paper, we introduce a UML profile for designing databases that correspond to the rules of TTM. We created the first version of the profile by translating existing profiles for SQL database design. After that, we extended and improved the profile. We implemented the profile by using the UML CASE system StarUML™. We present an example of using the new profile. In addition, we describe problems that occurred during the profile development.

  17. Heterogeneous distributed databases: A case study

    NASA Technical Reports Server (NTRS)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy, and each supports a different customer base. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.

  18. Importance of Data Management in a Long-term Biological Monitoring Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Sigurd W; Brandt, Craig C; McCracken, Kitty

    2011-01-01

    The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program. An existing relational database was adapted and extended to handle biological data. Data modeling enabled the program's database to process, store, and retrieve its data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was establishing standards for sampling site names, taxonomic identification, flagging, and other components. There are limitations: some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.

  19. Importance of Data Management in a Long-Term Biological Monitoring Program

    NASA Astrophysics Data System (ADS)

    Christensen, Sigurd W.; Brandt, Craig C.; McCracken, Mary K.

    2011-06-01

    The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to meeting this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program when an existing relational database was adapted and extended to handle biological data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was establishing standards for sampling site names, taxonomic identification, flagging, and other components. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. We also discuss some limitations to our implementation: some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.

  20. A lake-centric geospatial database to guide research and inform management decisions in an Arctic watershed in northern Alaska experiencing climate and land-use changes

    USGS Publications Warehouse

    Jones, Benjamin M.; Arp, Christopher D.; Whitman, Matthew S.; Nigro, Debora A.; Nitze, Ingmar; Beaver, John; Gadeke, Anne; Zuck, Callie; Liljedahl, Anna K.; Daanen, Ronald; Torvinen, Eric; Fritz, Stacey; Grosse, Guido

    2017-01-01

    Lakes are dominant and diverse landscape features in the Arctic, but conventional land cover classification schemes typically map them as a single uniform class. Here, we present a detailed lake-centric geospatial database for an Arctic watershed in northern Alaska. We developed a GIS dataset consisting of 4362 lakes that provides information on lake morphometry, hydrologic connectivity, surface area dynamics, surrounding terrestrial ecotypes, and other important conditions describing Arctic lakes. Analyzing the geospatial database relative to fish and bird survey data shows relations to lake depth and hydrologic connectivity, which are being used to guide research and aid in the management of aquatic resources in the National Petroleum Reserve in Alaska. Further development of similar geospatial databases is needed to better understand and plan for the impacts of ongoing climate and land-use changes occurring across lake-rich landscapes in the Arctic.

  1. An object-oriented, technology-adaptive information model

    NASA Technical Reports Server (NTRS)

    Anyiwo, Joshua C.

    1995-01-01

    The primary objective was to develop a computer information system for effectively presenting NASA's technologies to American industries, for appropriate commercialization. To this end a comprehensive information management model, applicable to a wide variety of situations, and immune to computer software/hardware technological gyrations, was developed. The model consists of four main elements: a DATA_STORE, a data PRODUCER/UPDATER_CLIENT and a data PRESENTATION_CLIENT, anchored to a central object-oriented SERVER engine. This server engine facilitates exchanges among the other model elements and safeguards the integrity of the DATA_STORE element. It is designed to support new technologies, as they become available, such as Object Linking and Embedding (OLE), on-demand audio-video data streaming with compression (such as is required for video conferencing), World Wide Web (WWW) and other information services and browsing, fax-back data requests, presentation of information on CD-ROM, and regular in-house database management, regardless of the data model in place. The four components of this information model interact through a system of intelligent message agents which are customized to specific information exchange needs. This model is at the leading edge of modern information management models. It is independent of technological changes and can be implemented in a variety of ways to meet the specific needs of any communications situation. This summer a partial implementation of the model has been achieved. The structure of the DATA_STORE has been fully specified and successfully tested using Microsoft's FoxPro 2.6 database management system. Data PRODUCER/UPDATER and PRESENTATION architectures have been developed and also successfully implemented in FoxPro; and work has started on a full implementation of the SERVER engine. The model has also been successfully applied to a CD-ROM presentation of NASA's technologies in support of Langley Research Center's TAG efforts.

  2. Protein Bioinformatics Databases and Resources

    PubMed Central

    Chen, Chuming; Huang, Hongzhan; Wu, Cathy H.

    2017-01-01

    Many publicly available data repositories and resources have been developed to support protein-related information management, data-driven hypothesis generation and biological knowledge discovery. To help researchers quickly find the appropriate protein-related informatics resources, we present a comprehensive review (with categorization and description) of major protein bioinformatics databases in this chapter. We also discuss the challenges and opportunities for developing next-generation protein bioinformatics databases and resources to support data integration and data analytics in the Big Data era. PMID:28150231

  3. Design and Implementation of an Interface Editor for the Amadeus Multi- Relational Database Front-end System

    DTIC Science & Technology

    1993-03-25

    ...application of Object-Oriented Programming (OOP) and Human-Computer Interface (HCI) design principles. Knowledge gained from each topic has been incorporated into, and applied to, the design of a Form-based interface for database data...

  4. Herbal Medicine for Hot Flushes Induced by Endocrine Therapy in Women with Breast Cancer: A Systematic Review and Meta-Analysis.

    PubMed

    Li, Yuanqing; Zhu, Xiaoshu; Bensussan, Alan; Li, Pingping; Moylan, Eugene; Delaney, Geoff; McPherson, Luke

    2016-01-01

    Objective. This systematic review was conducted to evaluate the clinical effectiveness and safety of herbal medicine (HM) as an alternative management for hot flushes induced by endocrine therapy in breast cancer patients. Methods. Key English- and Chinese-language databases were searched from inception to July 2015. Randomized Controlled Trials (RCTs) evaluating the effects of HM on hot flushes induced by endocrine therapy in women with breast cancer were retrieved. We conducted data collection and analysis in accordance with the Cochrane Handbook for Systematic Reviews of Interventions. Statistical analysis was performed with Review Manager 5.3. Results. 19 articles were selected from the articles retrieved, and 5 articles met the inclusion criteria for analysis. Some included individual studies showed that HM can relieve hot flushes as well as other menopausal symptoms induced by endocrine therapy among women with breast cancer, and can improve quality of life. There are minor side effects related to HM, which are well tolerated. Conclusion. Given the small number of included studies and their relatively poor methodological quality, there is insufficient evidence to draw positive conclusions regarding the objective benefit of HM. Additional high-quality studies with a more rigorous methodological approach are needed to answer this question.

  5. Building an Ontology-driven Database for Clinical Immune Research

    PubMed Central

    Ma, Jingming

    2006-01-01

    Clinical research on the immune response usually generates a huge amount of biomedical testing data over a certain period of time. User-friendly data management systems based on a relational database help immunologists/clinicians to fully manage these data. On the other hand, the same biological assays, such as ELISPOT and flow cytometric assays, are involved in immunological experiments regardless of the study purpose. The reuse of biological knowledge is one of the driving forces behind ontology-driven data management. Therefore, an ontology-driven database will help to handle different clinical immune research studies and help immunologists/clinicians easily understand each other's immunological data. We discuss some outlines for building an ontology-driven data management system for clinical immune research (ODMim). PMID:17238637

  6. An integrated photogrammetric and spatial database management system for producing fully structured data using aerial and remote sensing images.

    PubMed

    Ahmadi, Farshid Farnood; Ebadi, Hamid

    2009-01-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economic data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric and spatial database management systems can save the time and cost of producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach; this management approach is one of the main problems in GISs for using map products of photogrammetric workstations. Also, by means of these integrated systems, providing structured spatial data based on OGC (Open GIS Consortium) standards and topological relations between different feature classes is possible during the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) are presented.
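
    To illustrate the idea of validating geometry at digitizing time, before it enters the spatial database, here is a minimal sketch using the third-party shapely library for OGC-style geometry checks. This is not the IPOSS code; the function name is hypothetical.

```python
from shapely.geometry import Polygon
from shapely.validation import explain_validity

def digitized_feature_to_wkt(ring):
    """Reject topologically invalid rings before they reach the database."""
    poly = Polygon(ring)
    if not poly.is_valid:                  # e.g. a self-intersecting "bow-tie"
        raise ValueError(explain_validity(poly))
    return poly.wkt                        # OGC Well-Known Text for storage

print(digitized_feature_to_wkt([(0, 0), (10, 0), (10, 10), (0, 10)]))
# digitized_feature_to_wkt([(0, 0), (10, 10), (10, 0), (0, 10)])  -> ValueError
```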

  7. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory

    PubMed Central

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L.; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M.; Wilter da Silva, Alan; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S.; Stuart, David I.; Henrick, Kim; Esnouf, Robert M.

    2011-01-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service. PMID:21460443

  8. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory.

    PubMed

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M; da Silva, Alan Wilter; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S; Stuart, David I; Henrick, Kim; Esnouf, Robert M

    2011-04-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service.

  9. Application of 3D Spatio-Temporal Data Modeling, Management, and Analysis in DB4GEO

    NASA Astrophysics Data System (ADS)

    Kuper, P. V.; Breunig, M.; Al-Doori, M.; Thomsen, A.

    2016-10-01

    Many of today's worldwide challenges such as climate change, water supply and transport systems in cities, or movements of crowds need spatio-temporal data to be examined in detail. Thus the number of examinations in 3D space dealing with geospatial objects moving in space and time, or even changing their shapes over time, will rapidly increase in the future. Prominent spatio-temporal applications are subsurface reservoir modeling, water supply after seawater desalination, and the development of transport systems in mega cities. All of these applications generate large spatio-temporal data sets. However, the modeling, management and analysis of 3D geo-objects with changing shape and attributes in time is still a challenge for geospatial database architectures. In this article we describe the application of concepts for the modeling, management and analysis of 2.5D and 3D spatial plus 1D temporal objects implemented in DB4GeO, our service-oriented geospatial database architecture. An example application with spatio-temporal data of a landfill near the city of Osnabrück, Germany, demonstrates the usage of the concepts. Finally, an outlook on our future research, focusing on new applications with big data analysis in three spatial plus one temporal dimension in the United Arab Emirates, especially the Dubai area, is given.
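
    One common way to picture a 3D geo-object whose shape changes over time is a sequence of time-stamped vertex snapshots with interpolation between them. A minimal sketch of that idea (illustrative only, not DB4GeO's API; the snapshot values are invented):

```python
from dataclasses import dataclass, field
from bisect import bisect_right

Vertex = tuple[float, float, float]   # (x, y, z)

@dataclass
class SpatioTemporalObject:
    name: str
    snapshots: dict[float, list[Vertex]] = field(default_factory=dict)

    def add_snapshot(self, t: float, vertices: list[Vertex]) -> None:
        self.snapshots[t] = vertices

    def shape_at(self, t: float) -> list[Vertex]:
        """Vertices at time t, linearly interpolated between snapshots."""
        times = sorted(self.snapshots)
        if t <= times[0]:
            return self.snapshots[times[0]]
        if t >= times[-1]:
            return self.snapshots[times[-1]]
        i = bisect_right(times, t)
        t0, t1 = times[i - 1], times[i]
        w = (t - t0) / (t1 - t0)
        return [tuple(a + w * (b - a) for a, b in zip(v0, v1))
                for v0, v1 in zip(self.snapshots[t0], self.snapshots[t1])]

landfill = SpatioTemporalObject("landfill")
landfill.add_snapshot(2000.0, [(0, 0, 100), (50, 0, 100), (25, 40, 110)])
landfill.add_snapshot(2010.0, [(0, 0, 102), (52, 0, 103), (25, 42, 118)])
print(landfill.shape_at(2005.0))   # midpoint geometry between the snapshots
```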

  10. Computerized training management system

    DOEpatents

    Rice, H.B.; McNair, R.C.; White, K.; Maugeri, T.

    1998-08-04

    A Computerized Training Management System (CTMS) is disclosed for providing a procedurally defined process that is employed to develop accreditable performance-based training programs for job classifications that are sensitive to documented regulations and technical information. CTMS is a database that links information needed to maintain a five-phase approach to training: analysis, design, development, implementation, and evaluation, independent of training program design. CTMS is designed using R-Base™, an SQL-compliant software platform. Information is logically entered and linked in CTMS. Each task is linked directly to a performance objective, which, in turn, is linked directly to a learning objective; then, each enabling objective is linked to its respective test items. In addition, tasks, performance objectives, enabling objectives, and test items are linked to their associated reference documents. CTMS keeps all information up to date since it automatically sorts, files and links all data; CTMS includes key word and reference document searches. 18 figs.
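
    The task-to-performance-objective-to-enabling-objective-to-test-item linkage described in the patent is a chain of foreign keys. A minimal sketch via Python's sqlite3 (the patent's system runs on R-Base, not SQLite, and the table names and sample rows here are invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE ref_doc   (doc_id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE task      (task_id INTEGER PRIMARY KEY, text TEXT,
                        doc_id INTEGER REFERENCES ref_doc(doc_id));
CREATE TABLE perf_obj  (po_id INTEGER PRIMARY KEY, text TEXT,
                        task_id INTEGER REFERENCES task(task_id));
CREATE TABLE enab_obj  (eo_id INTEGER PRIMARY KEY, text TEXT,
                        po_id INTEGER REFERENCES perf_obj(po_id));
CREATE TABLE test_item (ti_id INTEGER PRIMARY KEY, text TEXT,
                        eo_id INTEGER REFERENCES enab_obj(eo_id));
INSERT INTO ref_doc   VALUES (1, 'Procedure XYZ-101');
INSERT INTO task      VALUES (1, 'Isolate pump',           1);
INSERT INTO perf_obj  VALUES (1, 'Perform pump isolation', 1);
INSERT INTO enab_obj  VALUES (1, 'State isolation steps',  1);
INSERT INTO test_item VALUES (1, 'List the isolation steps in order', 1);
""")
# Walking the chain from a task down to its test items is a three-way join:
rows = db.execute("""
SELECT task.text, test_item.text
FROM task
JOIN perf_obj  USING (task_id)
JOIN enab_obj  USING (po_id)
JOIN test_item USING (eo_id)
""").fetchall()
print(rows)   # [('Isolate pump', 'List the isolation steps in order')]
```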

  11. Computerized training management system

    DOEpatents

    Rice, Harold B.; McNair, Robert C.; White, Kenneth; Maugeri, Terry

    1998-08-04

    A Computerized Training Management System (CTMS) for providing a procedurally defined process that is employed to develop accreditable performance-based training programs for job classifications that are sensitive to documented regulations and technical information. CTMS is a database that links information needed to maintain a five-phase approach to training: analysis, design, development, implementation, and evaluation, independent of training program design. CTMS is designed using R-Base®, an SQL-compliant software platform. Information is logically entered and linked in CTMS. Each task is linked directly to a performance objective, which, in turn, is linked directly to a learning objective; then, each enabling objective is linked to its respective test items. In addition, tasks, performance objectives, enabling objectives, and test items are linked to their associated reference documents. CTMS keeps all information up to date since it automatically sorts, files and links all data; CTMS includes key word and reference document searches.

  12. A Unified Satellite-Observation Polar Stratospheric Cloud (PSC) Database for Long-Term Climate-Change Studies

    NASA Technical Reports Server (NTRS)

    Fromm, Michael; Pitts, Michael; Alfred, Jerome

    2000-01-01

    This report summarizes the project team's activity and accomplishments during the period 12 February 1999 - 12 February 2000. The primary objective of this project was to create and test a generic algorithm for detecting polar stratospheric clouds (PSC), an algorithm that would permit creation of a unified, long-term PSC database from a variety of solar occultation instruments that measure aerosol extinction near 1000 nm. The second objective was to make a database of PSC observations and certain relevant related datasets. In this report we describe the algorithm, the data we are making available, and user access options. The remainder of this document provides the details of the algorithm and the database offering.

  13. Guide on Data Models in the Selection and Use of Database Management Systems. Final Report.

    ERIC Educational Resources Information Center

    Gallagher, Leonard J.; Draper, Jesse M.

    A tutorial introduction to data models in general is provided, with particular emphasis on the relational and network models defined by the two proposed ANSI (American National Standards Institute) database language standards. Examples based on the network and relational models include specific syntax and semantics, while examples from the other…

  14. Teradata University Network: A No Cost Web-Portal for Teaching Database, Data Warehousing, and Data-Related Subjects

    ERIC Educational Resources Information Center

    Jukic, Nenad; Gray, Paul

    2008-01-01

    This paper describes the value that information systems faculty and students in classes dealing with database management, data warehousing, decision support systems, and related topics, could derive from the use of the Teradata University Network (TUN), a free comprehensive web-portal. A detailed overview of TUN functionalities and content is…

  15. A DBMS architecture for global change research

    NASA Astrophysics Data System (ADS)

    Hachem, Nabil I.; Gennert, Michael A.; Ward, Matthew O.

    1993-08-01

    The goal of this research is the design and development of an integrated system for the management of very large scientific databases, cartographic/geographic information processing, and exploratory scientific data analysis for global change research. The system will represent both spatial and temporal knowledge about natural and man-made entities on the earth's surface, following an object-oriented paradigm. A user will be able to derive, modify, and apply procedures to perform operations on the data, including comparison, derivation, prediction, validation, and visualization. This work represents an effort to extend database technology with an intrinsic class of operators, which is extensible and responds to the growing needs of scientific research. Of significance is the integration of many diverse forms of data into the database, including cartography, geography, hydrography, hypsography, images, and urban planning data. Equally important is the maintenance of metadata, that is, data about the data, such as coordinate transformation parameters, map scales, and audit trails of previous processing operations. This project will impact the fields of geographical information systems and global change research as well as the database community. It will provide an integrated database management testbed for scientific research, and a testbed for the development of analysis tools to understand and predict global change.

  16. Ten Years Experience In Geo-Databases For Linear Facilities Risk Assessment (Lfra)

    NASA Astrophysics Data System (ADS)

    Oboni, F.

    2003-04-01

    Keywords: geo-environmental, database, ISO 14000, management, decision-making, risk, pipelines, roads, railroads, loss control, SAR, hazard identification.

    Abstract: During the past decades, characterized by the development of the Risk Management (RM) culture, a variety of different RM models have been proposed by governmental agencies in various parts of the world. The most structured models appear to have originated in the field of environmental RM. These models are briefly reviewed in the first section of the paper, focusing on the difference between Hazard Management and Risk Management and the need to use databases in order to allow retrieval of specific information and effective updating. The core of the paper reviews a number of different RM approaches, based on extensions of geo-databases, specifically developed for linear facilities (LF) in transportation corridors since the early 90s in Switzerland, Italy, Canada, the US and South America. The applications are compared in terms of methodology, capabilities and resources necessary for their implementation. The paper then focuses on the level of detail that applications and related data have to attain. Common pitfalls related to decision making based on hazards rather than on risks are discussed. The last sections describe the next generation of linear facility RA applications, including examples of results and a discussion of future methodological research. It is shown that geo-databases should be linked to loss control and accident reports in order to maximize their benefits. The links between RA and ISO 14000 (environmental management code) are explicitly considered.

  17. Microcomputers: Means to Achievement. Using Information More Efficiently for Special Needs Learners.

    ERIC Educational Resources Information Center

    Brown, James M.

    1984-01-01

    Discusses database management systems for special needs student information management. Considers data access policies, implementation, software selection criteria, and special needs related uses. (SK)

  18. Components of spatial information management in wildlife ecology: Software for statistical and modeling analysis [Chapter 14

    Treesearch

    Hawthorne L. Beyer; Jeff Jenness; Samuel A. Cushman

    2010-01-01

    Spatial information systems (SIS) is a term that describes a wide diversity of concepts, techniques, and technologies related to the capture, management, display and analysis of spatial information. It encompasses technologies such as geographic information systems (GIS), global positioning systems (GPS), remote sensing, and relational database management systems (...

  19. Collaborative Resource Allocation

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Wax, Allan; Lam, Raymond; Baldwin, John; Borden, Chester

    2007-01-01

    Collaborative Resource Allocation Networking Environment (CRANE) Version 0.5 is a prototype created to prove the newest concept of using a distributed environment to schedule Deep Space Network (DSN) antenna times in a collaborative fashion. This program is for all space-flight and terrestrial science project users and DSN schedulers to perform scheduling activities and conflict resolution, both synchronously and asynchronously. Project schedulers can, for the first time, participate directly in scheduling their tracking times into the official DSN schedule, and negotiate directly with other projects in an integrated scheduling system. A master schedule covers long-range, mid-range, near-real-time, and real-time scheduling time frames all in one, rather than the current method of separate functions that are supported by different processes and tools. CRANE also provides private workspaces (both dynamic and static), data sharing, scenario management, user control, rapid messaging (based on Java Message Service), data/time synchronization, workflow management, notification (including emails), conflict checking, and a linkage to a schedule generation engine. The data structure with corresponding database design combines object trees with multiple associated mortal instances and relational database to provide unprecedented traceability and simplify the existing DSN XML schedule representation. These technologies are used to provide traceability, schedule negotiation, conflict resolution, and load forecasting from real-time operations to long-range loading analysis up to 20 years in the future. CRANE includes a database, a stored procedure layer, an agent-based middle tier, a Web service wrapper, a Windows Integrated Analysis Environment (IAE), a Java application, and a Web page interface.

  20. Using Distinct Sectors in Media Sampling and Full Media Analysis to Detect Presence of Documents from a Corpus

    DTIC Science & Technology

    2012-09-01

    ...relative performance of several conventional SQL and NoSQL databases with a set of one billion file block hashes. Keywords: Digital Forensics, Sector Hashing, Full Media Analysis, NoSQL ("Not only SQL" model for non-relational database management), NSRL (National Software Reference Library).

  1. Generic Entity Resolution in Relational Databases

    NASA Astrophysics Data System (ADS)

    Sidló, Csaba István

    Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually resides in relational databases and can grow to huge volumes. Yet, typical solutions described in the literature employ standalone memory resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external memory processing. We outline a real-world scenario and demonstrate the advantage of algorithms by performing experiments on insurance customer data.
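
    A minimal sketch of the core idea of pushing entity-resolution work into the RDBMS: a blocking self-join computes candidate duplicate pairs inside the database rather than in application memory. The data and blocking key are invented, and the paper's actual algorithms are considerably more involved.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, zip TEXT);
INSERT INTO customer VALUES
  (1, 'john smith', '10001'),
  (2, 'jon smith',  '10001'),
  (3, 'mary jones', '94110');
-- Blocking key: records can only match if they share zip and first letter.
CREATE INDEX blk ON customer (zip, substr(name, 1, 1));
""")
pairs = db.execute("""
SELECT a.id, b.id
FROM customer a JOIN customer b
  ON a.zip = b.zip
 AND substr(a.name, 1, 1) = substr(b.name, 1, 1)
 AND a.id < b.id
""").fetchall()
print(pairs)   # [(1, 2)] -- candidates passed on to a finer similarity test
```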

  2. Study on parallel and distributed management of RS data based on spatial data base

    NASA Astrophysics Data System (ADS)

    Chen, Yingbiao; Qian, Qinglan; Liu, Shijin

    2006-12-01

    With the rapid development of current earth-observing technology, RS image data storage, management and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, the background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment; a tough burden is put on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for its storage and management, and much information is lost or not included at storage. Facing the above two problems, this paper puts forward a framework for parallel and distributed management and storage of RS image data. This system aims at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization for different resolutions, different areas, different bands and different periods of multi-sensor RS image data is completed. In data storage, RS data is not divided into binary large objects to be stored in a current relational database system; instead, it is reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, this paper sets up a framework based on a parallel server of several common computers. Under this framework, the background process is divided into two parts: the common WEB process and the parallel process.
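
    The "Pyramid, Block, Layer, Epoch" solid index lends itself to a composite tile key: every image tile is addressed by its resolution level (pyramid), spatial block, spectral band (layer), and acquisition period (epoch). A minimal sketch with an invented key format (not the authors' implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TileKey:
    pyramid: int              # resolution level, 0 = full resolution
    block: tuple[int, int]    # (row, col) of the spatial block
    layer: int                # spectral band
    epoch: str                # acquisition period, e.g. '2006-07'

    def as_string(self) -> str:
        r, c = self.block
        return f"P{self.pyramid}/B{r}_{c}/L{self.layer}/E{self.epoch}"

tiles: dict[str, bytes] = {}   # stand-in for the logical image database
key = TileKey(pyramid=2, block=(14, 9), layer=3, epoch="2006-07")
tiles[key.as_string()] = b"...tile bytes..."
print(key.as_string())         # P2/B14_9/L3/E2006-07
```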

  3. The Chandra Source Catalog: Storage and Interfaces

    NASA Astrophysics Data System (ADS)

    van Stone, David; Harbo, Peter N.; Tibbetts, Michael S.; Zografou, Panagoula; Evans, Ian N.; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Winkelman, Sherry L.

    2009-09-01

    The Chandra Source Catalog (CSC) is part of the Chandra Data Archive (CDA) at the Chandra X-ray Center. The catalog contains source properties and associated data objects such as images, spectra, and lightcurves. The source properties are stored in relational databases and the data objects are stored in files with their metadata stored in databases. The CDA supports different versions of the catalog: multiple fixed release versions and a live database version. There are several interfaces to the catalog: CSCview, a graphical interface for building and submitting queries and for retrieving data objects; a command-line interface for property and source searches using ADQL; and VO-compliant services discoverable though the VO registry. This poster describes the structure of the catalog and provides an overview of the interfaces.
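
    For flavor, a property search in ADQL (the SQL-like Astronomical Data Query Language the command-line interface accepts) might look like the following sketch. The table and column names here are illustrative assumptions, not the catalog's documented schema; consult the CSC documentation for the real ones.

```python
# Illustrative ADQL only -- table/column names are hypothetical.
adql = """
SELECT m.name, m.ra, m.dec, m.flux_aper_b
FROM   master_source m
WHERE  m.ra  BETWEEN 10.0 AND 10.2
  AND  m.dec BETWEEN -5.1 AND -4.9
ORDER BY m.flux_aper_b DESC
"""
print(adql)
```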

  4. Systematic Review of the Use of Phytochemicals for Management of Pain in Cancer Therapy.

    PubMed

    Harrison, Andrew M; Heritier, Fabrice; Childs, Bennett G; Bostwick, J Michael; Dziadzko, Mikhail A

    2015-01-01

    Pain in cancer therapy is a common condition and there is a need for new options in therapeutic management. While phytochemicals have been proposed as one pain management solution, knowledge of their utility is limited. The objective of this study was to perform a systematic review of the biomedical literature for the use of phytochemicals for management of cancer therapy pain in human subjects. Of an initial database search of 1,603 abstracts, 32 full-text articles were eligible for further assessment. Only 7 of these articles met all inclusion criteria for this systematic review. The average relative risk of phytochemical versus control was 1.03 [95% CI 0.59 to 2.06]. In other words (although not statistically significant), patients treated with phytochemicals were slightly more likely than patients treated with control to obtain successful management of pain in cancer therapy. We identified a lack of quality research literature on this subject and thus were unable to demonstrate a clear therapeutic benefit for either general or specific use of phytochemicals in the management of cancer pain. This lack of data is especially apparent for psychotropic phytochemicals, such as the Cannabis plant (marijuana). Additional implications of our findings are also explored.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buche, D. L.

    This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects. This report is the second in a series of reports detailing this effort.

  6. HYDROGEOLOGIC FOUNDATION IN SUPPORT OF ECOSYSTEM RESTORATION: BASE-FLOW LOADINGS OF NITRATE IN MID-ATLANTIC AGRICULTURAL WATERSHEDS

    EPA Science Inventory

    The study is a consortium between the U.S. Environmental Protection Agency (National Risk Management Research Laboratory) and the U.S. Geological Survey (Baltimore and Dover). The objectives of this study are: (1) to develop a geohydrological database for paired agricultural wate...

  7. Investigating why and for whom management ethnic representativeness influences interpersonal mistreatment in the workplace.

    PubMed

    Lindsey, Alex P; Avery, Derek R; Dawson, Jeremy F; King, Eden B

    2017-11-01

    Preliminary research suggests that employees use the demographic makeup of their organization to make sense of diversity-related incidents at work. The authors build on this work by examining the impact of management ethnic representativeness-the degree to which the ethnic composition of managers in an organization mirrors or is misaligned with the ethnic composition of employees in that organization. To do so, they integrate signaling theory and a sense-making perspective into a relational demography framework to investigate why and for whom management ethnic representativeness may have an impact on interpersonal mistreatment at work. Specifically, in three complementary studies, the authors examine the relationship between management ethnic representativeness and interpersonal mistreatment. First, they analyze the relationship between management ethnic representativeness and perceptions of harassment, bullying, and abuse the next year, as moderated by individuals' ethnic similarity to others in their organizations in a sample of 60,602 employees of Britain's National Health Service. Second, a constructive replication investigates perceived behavioral integrity as an explanatory mechanism that can account for the effects of representativeness using data from a nationally representative survey of working adults in the United States. Third and finally, online survey data collected at two time points replicated these patterns and further integrated the effects of representativeness and dissimilarity when they are measured using both objective and subjective strategies. Results support the authors' proposed moderated mediation model in which management ethnic representation is negatively related to interpersonal mistreatment through the mediator of perceived behavioral integrity, with effects being stronger for ethnically dissimilar employees. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. A Note on Interfacing Object Warehouses and Mass Storage Systems for Data Mining Applications

    NASA Technical Reports Server (NTRS)

    Grossman, Robert L.; Northcutt, Dave

    1996-01-01

    Data mining is the automatic discovery of patterns, associations, and anomalies in data sets. Data mining requires numerically and statistically intensive queries. Our assumption is that data mining requires a specialized data management infrastructure to support these intensive queries, but, because of the sizes of data involved, this infrastructure is layered over a hierarchical storage system. In this paper, we discuss the architecture of a system which is layered for modularity, but exploits specialized lightweight services to maintain efficiency. Rather than use a full-functioned database, for example, we use lightweight object services specialized for data mining. We propose using information repositories between layers so that components on either side of the layer can access information in the repositories to assist in making decisions about data layout, the caching and migration of data, the scheduling of queries, and related matters.

  9. Let your fingers do the walking: The project's most invaluable tool

    NASA Technical Reports Server (NTRS)

    Zirk, Deborah A.

    1993-01-01

    The barrage of information pertaining to the software being developed for a project can be overwhelming. Current status information, as well as the statistics and history of software releases, should be 'at the fingertips' of project management and key technical personnel. This paper discusses the development, configuration, capabilities, and operation of a relational database, the System Engineering Database (SEDB), which was designed to assist management in monitoring the tasks performed by the Network Control Center (NCC) Project. This database has proven to be an invaluable project tool and is utilized daily to support all project personnel.

  10. Techniques for assessing relative values for multiple objective management on private forests

    Treesearch

    Donald F. Dennis; Thomas H. Stevens; David B. Kittredge; Mark G. Rickenbach

    2003-01-01

    Decision models for assessing multiple objective management of private lands will require estimates of the relative values of various nonmarket outputs or objectives that have become increasingly important. In this study, conjoint techniques are used to assess the relative values and acceptable trade-offs (marginal rates of substitution) among various objectives...

  11. Data management for the internet of things: design primitives and solution.

    PubMed

    Abu-Elkheir, Mervat; Hayajneh, Mohammad; Ali, Najah Abu

    2013-11-14

    The Internet of Things (IoT) is a networking paradigm where interconnected, smart objects continuously generate data and transmit it over the Internet. Many of the IoT initiatives are geared towards manufacturing low-cost and energy-efficient hardware for these objects, as well as the communication technologies that provide object interconnectivity. However, the solutions to manage and utilize the massive volume of data produced by these objects are yet to mature. Traditional database management solutions fall short in satisfying the sophisticated application needs of an IoT network that has a truly global scale. Current solutions for IoT data management address partial aspects of the IoT environment, with special focus on sensor networks. In this paper, we survey the data management solutions that are proposed for IoT or subsystems of the IoT. We highlight the distinctive design primitives that we believe should be addressed in an IoT data management solution, and discuss how they are approached by the proposed solutions. We finally propose a data management framework for IoT that takes into consideration the discussed design elements and acts as a seed to a comprehensive IoT data management solution. The framework we propose adopts a federated, data- and sources-centric approach to link the diverse Things with their abundance of data to the potential applications and services that are envisioned for IoT.

  12. Data Management for the Internet of Things: Design Primitives and Solution

    PubMed Central

    Abu-Elkheir, Mervat; Hayajneh, Mohammad; Ali, Najah Abu

    2013-01-01

    The Internet of Things (IoT) is a networking paradigm where interconnected, smart objects continuously generate data and transmit it over the Internet. Many of the IoT initiatives are geared towards manufacturing low-cost and energy-efficient hardware for these objects, as well as the communication technologies that provide object interconnectivity. However, the solutions to manage and utilize the massive volume of data produced by these objects are yet to mature. Traditional database management solutions fall short in satisfying the sophisticated application needs of an IoT network that has a truly global scale. Current solutions for IoT data management address partial aspects of the IoT environment, with special focus on sensor networks. In this paper, we survey the data management solutions that are proposed for IoT or subsystems of the IoT. We highlight the distinctive design primitives that we believe should be addressed in an IoT data management solution, and discuss how they are approached by the proposed solutions. We finally propose a data management framework for IoT that takes into consideration the discussed design elements and acts as a seed to a comprehensive IoT data management solution. The framework we propose adopts a federated, data- and sources-centric approach to link the diverse Things with their abundance of data to the potential applications and services that are envisioned for IoT. PMID:24240599

  13. Precision measurements from very-large scale aerial digital imagery.

    PubMed

    Booth, D Terrance; Cox, Samuel E; Berryman, Robert D

    2006-01-01

    Resource managers need measurements, including the length and width of a variety of objects: animals, logs, streams, plant canopies, man-made objects, riparian habitat, vegetation patches and other things important in resource monitoring and land inspection. These types of measurements can now be easily and accurately obtained from very-large-scale aerial (VLSA) imagery having spatial resolutions as fine as 1 millimeter per pixel, by using the three new software programs described here. VLSA images have small fields of view and are used for intermittent sampling across extensive landscapes. Pixel coverage among images is influenced by small changes in airplane altitude above ground level (AGL) and orientation relative to the ground, as well as by changes in topography. These factors affect the object-to-camera distance used for image-resolution calculations. 'ImageMeasurement' offers a user-friendly interface that accounts for pixel-coverage variation among images by utilizing a database. 'LaserLOG' records and displays airplane altitude AGL measured from a high-frequency laser rangefinder, and displays the vertical velocity. 'Merge' sorts through the large amounts of data generated by LaserLOG and matches precise airplane altitudes with camera trigger times for input to the ImageMeasurement database. We discuss application of these tools, including error estimates. We found that measurements from aerial images (collection resolution: 5-26 mm/pixel as projected on the ground) using ImageMeasurement, LaserLOG, and Merge were accurate to centimeters, with an error of less than 10%. We recommend these software packages as a means of expanding the utility of aerial image data.
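
    The "object-to-camera distance used for image-resolution calculations" is the standard ground-sample-distance relation: ground pixel size = sensor pixel pitch * (altitude / focal length). A minimal sketch of that standard photogrammetric arithmetic (not the authors' code; the camera numbers are invented for illustration):

```python
def ground_resolution_mm_per_px(altitude_agl_m: float,
                                focal_length_mm: float,
                                pixel_pitch_um: float) -> float:
    """Millimetres of ground covered by one pixel at nadir."""
    scale = (altitude_agl_m * 1000.0) / focal_length_mm   # image scale factor
    return (pixel_pitch_um / 1000.0) * scale              # sensor px -> ground mm

def object_length_m(pixels: int, mm_per_px: float) -> float:
    """Length of an object spanning `pixels` pixels in the image."""
    return pixels * mm_per_px / 1000.0

mm_px = ground_resolution_mm_per_px(altitude_agl_m=100.0,
                                    focal_length_mm=105.0,
                                    pixel_pitch_um=7.2)
print(f"{mm_px:.1f} mm/pixel")                  # ~6.9 mm/pixel at 100 m AGL
print(f"{object_length_m(350, mm_px):.2f} m")   # a log spanning 350 px ~ 2.40 m
```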

  14. Radio Frequency Identification (RFID) Based Employee Attendance Management System

    NASA Astrophysics Data System (ADS)

    Maramis, G. D. P.; Rompas, P. T. D.

    2018-02-01

    Manually recording the attendance of all employees has produced problems with data accuracy and staff performance efficiency. The objective of this research is to design and develop software for an RFID attendance system integrated with a database system. This RFID attendance system was developed using several main components, such as tags that will be used as a replacement for ID cards, and a reader device that will read the information related to employee attendance. The result of this project is RFID attendance system software that is integrated with the database and stores the data or information of every single employee. The system has a maximum reading range of 2 cm with a success probability of 1, and requires a minimum interval between readings of 2 seconds in order to achieve optimal functionality. By using the system, the discipline of the employees and the performance of the staff will be improved.
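
    A minimal sketch of the reading logic the abstract implies: accept a tag reading only if the same tag was not read within the previous 2 seconds, then record it. The reader hardware is mocked out, and the tag ID and table layout are invented.

```python
import sqlite3
import time

MIN_INTERVAL_S = 2.0          # minimum interval between readings (per abstract)
last_seen = {}                # tag_id -> timestamp of last accepted reading

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE attendance (tag_id TEXT, ts REAL)")

def record_reading(tag_id, now=None):
    """Store a reading unless the same tag was read less than 2 s ago."""
    now = time.time() if now is None else now
    if tag_id in last_seen and now - last_seen[tag_id] < MIN_INTERVAL_S:
        return False          # debounced: too soon after the previous read
    last_seen[tag_id] = now
    db.execute("INSERT INTO attendance VALUES (?, ?)", (tag_id, now))
    db.commit()
    return True

print(record_reading("EMP-0042", now=0.0))   # True  (stored)
print(record_reading("EMP-0042", now=1.0))   # False (inside the 2 s window)
print(record_reading("EMP-0042", now=2.5))   # True  (stored)
```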

  15. The Marshall Islands Data Management Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoker, A.C.; Conrado, C.L.

    1995-09-01

    This report is a resource document of the methods and procedures used currently in the Data Management Program of the Marshall Islands Dose Assessment and Radioecology Project. Since 1973, over 60,000 environmental samples have been collected. Our program includes relational database design, programming and maintenance; sample and information management; sample tracking; quality control; and data entry, evaluation and reduction. The usefulness of scientific databases involves careful planning in order to fulfill the requirements of any large research program. Compilation of scientific results requires consolidation of information from several databases, and incorporation of new information as it is generated. The success in combining and organizing all radionuclide analysis, sample information and statistical results into a readily accessible form is critical to our project.

  16. Canadian Consensus Conference on Osteoporosis, 2006 Update

    PubMed Central

    Brown, Jacques P.; Fortier, Michel

    2016-01-01

    Objective To provide guidelines for the health care provider on the diagnosis and clinical management of postmenopausal osteoporosis. Outcomes Strategies for identifying and evaluating high-risk individuals, the use of bone mineral density (BMD) and bone turnover markers in assessing diagnosis and response to management, and recommendations regarding nutrition, physical activity, and the selection of pharmacologic therapy to prevent and manage osteoporosis. Evidence MEDLINE and the Cochrane database were searched for articles in English on subjects related to osteoporosis diagnosis, prevention, and management from March 2001 to April 2005. The authors critically reviewed the evidence and developed the recommendations according to the Journal of Obstetrics and Gynaecology Canada’s methodology and consensus development process. Values The quality of evidence is rated using the criteria described in the report of the Canadian Task Force on the Periodic Health Examination. Recommendations for practice are ranked according to the method described in this report. Sponsors The development of this consensus guideline was supported by unrestricted educational grants from Berlex Canada Inc., Lilly Canada, Merck Frosst, Novartis, Novogen, Novo Nordisk, Proctor and Gamble, Schering Canada, and Wyeth Canada. PMID:16626523

  17. Evaluating non-relational storage technology for HEP metadata and meta-data catalog

    NASA Astrophysics Data System (ADS)

    Grigorieva, M. A.; Golosova, M. V.; Gubin, M. Y.; Klimentov, A. A.; Osipova, V. V.; Ryabinkin, E. A.

    2016-10-01

    Large-scale scientific experiments produce vast volumes of data. These data are stored, processed and analyzed in a distributed computing environment. The life cycle of an experiment is managed by specialized software like Distributed Data Management and Workload Management Systems. In order to be interpreted and mined, experimental data must be accompanied by auxiliary metadata, which are recorded at each data processing step. Metadata describe scientific data and represent scientific objects or results of scientific experiments, allowing them to be shared by various applications, recorded in databases or published via the Web. Processing and analysis of the constantly growing volume of auxiliary metadata is a challenging task, no simpler than the management and processing of the experimental data itself. Furthermore, metadata sources are often loosely coupled and potentially may lead to end-user inconsistency in combined information queries. To aggregate and synthesize a range of primary metadata sources, and enhance them with flexible schema-less addition of aggregated data, we are developing the Data Knowledge Base architecture serving as the intelligence behind GUIs and APIs.

  18. Design and deployment of a large brain-image database for clinical and nonclinical research

    NASA Astrophysics Data System (ADS)

    Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.

    2004-04-01

    An efficient database is an essential component of organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical research projects. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading and sharing, as well as database querying and management, with security and data anonymization concerns well taken care of. The structure of the database is a multi-tier client-server architecture comprising a Relational Database Management System, a Security Layer, an Application Layer and a User Interface. An image source adapter has been developed to handle most of the popular image formats. The database has a user interface based on web browsers and is easy to handle. We have used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information, which can be used effectively in research from the clinicians' point of view. The database is suitable for validation of algorithms on large populations of cases. Medical images for processing can be identified and organized based on information in image metadata. Clinical research in various pathologies can thus be performed with greater efficiency, and large image repositories can be managed more effectively. The prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.

  19. The NASA ADS Abstract Service and the Distributed Astronomy Digital Library [and] Project Soup: Comparing Evaluations of Digital Collection Efforts [and] Cross-Organizational Access Management: A Digital Library Authentication and Authorization Architecture [and] BibRelEx: Exploring Bibliographic Databases by Visualization of Annotated Content-based Relations [and] Semantics-Sensitive Retrieval for Digital Picture Libraries [and] Encoded Archival Description: An Introduction and Overview.

    ERIC Educational Resources Information Center

    Kurtz, Michael J.; Eichorn, Guenther; Accomazzi, Alberto; Grant, Carolyn S.; Demleitner, Markus; Murray, Stephen S.; Jones, Michael L. W.; Gay, Geri K.; Rieger, Robert H.; Millman, David; Bruggemann-Klein, Anne; Klein, Rolf; Landgraf, Britta; Wang, James Ze; Li, Jia; Chan, Desmond; Wiederhold, Gio; Pitti, Daniel V.

    1999-01-01

    Includes six articles that discuss a digital library for astronomy; comparing evaluations of digital collection efforts; cross-organizational access management of Web-based resources; searching scientific bibliographic databases based on content-based relations between documents; semantics-sensitive retrieval for digital picture libraries; and…

  20. WittyFit-Live Your Work Differently: Study Protocol for a Workplace-Delivered Health Promotion.

    PubMed

    Dutheil, Frédéric; Duclos, Martine; Naughton, Geraldine; Dewavrin, Samuel; Cornet, Thomas; Huguet, Pascal; Chatard, Jean-Claude; Pereira, Bruno

    2017-04-13

    Morbidity before retirement has a huge cost, burdening both public health and workplace finances. Multiple factors increase morbidity such as stress at work, sedentary behavior or low physical activity, and poor nutrition practices. Nowadays, the digital world offers infinite opportunities to interact with workers. The WittyFit software was designed to understand holistic issues of workers by promoting individualized behavior changes at the workplace. The shorter term feasibility objective is to demonstrate that effective use of WittyFit will increase well-being and improve health-related behaviors. The mid-term objective is to demonstrate that WittyFit improves economic data of the companies such as productivity and benefits. The ultimate objective is to increase life expectancy of workers. This is an exploratory interventional cohort study in an ecological situation. Three groups of participants will be purposefully sampled: employees, middle managers, and executive managers. Four levels of engagement are planned for employees: commencing with baseline health profiling from validated questionnaires; individualized feedback based on evidence-based medicine; support for behavioral change; and formal evaluation of changes in knowledge, practices, and health outcomes over time. Middle managers will also receive anonymous feedback on problems encountered by employees, and executive top managers will have indicators by division, location, department, age, seniority, gender and occupational position. Managers will be able to introduce specific initiatives in the workplace. WittyFit is based on two databases: behavioral data (WittyFit) and medical data (WittyFit Research). Statistical analyses will incorporate morbidity and well-being data. When a worker leaves a workplace, the company documents one of three major explanations: retirement, relocation to another company, or premature death. Therefore, WittyFit will have the ability to include mortality as an outcome. Ethical approval was obtained from the ethics committee of the University Hospital of Clermont-Ferrand, France. WittyFit recruitment and enrollment started in January 2016. First publications are expected to be available at the beginning of 2017. The name WittyFit came from Witty and Fitness. The concept of WittyFit reflects the concept of health from the World Health Organization: being spiritually and physically healthy. WittyFit is a health-monitoring, health-promoting tool that may improve the health of workers and health of companies. WittyFit will evolve with the waves of connected objects further increasing its data accuracy with objective measures. WittyFit may constitute a powerful epidemiological database. Finally, the WittyFit concept may extend healthy living into the general population. Clinicaltrials.gov: NCT02596737; https://www.clinicaltrials.gov/ct2/show/NCT02596737 (Archived by WebCite at http://www.webcitation.org/6pM5toQ7Y).

  1. Classifying the health of Connecticut streams using benthic macroinvertebrates with implications for water management.

    PubMed

    Bellucci, Christopher J; Becker, Mary E; Beauchene, Mike; Dunbar, Lee

    2013-06-01

    Bioassessments have formed the foundation of many water quality monitoring programs throughout the United States. Like many state water quality programs, Connecticut has developed a relational database containing information about species richness, species composition, relative abundance, and feeding relationships among macroinvertebrates present in stream and river systems. Geographic Information Systems can provide estimates of landscape condition and watershed characteristics and, when combined with measurements of stream biology, provide a useful visual display of information in a management context. The objective of our study was to estimate stream health for all wadeable stream kilometers in Connecticut using a combination of macroinvertebrate metrics and landscape variables. We developed and evaluated models using an information-theoretic approach to predict stream health as measured by a macroinvertebrate multimetric index (MMI), and identified the best-fitting model as a three-variable model, comprising percent impervious land cover, a wetlands metric, and catchment slope, that best fit the MMI scores (adj-R² = 0.56, SE = 11.73). We then provide examples of how modeling can augment existing programs to support water management policies under the Federal Clean Water Act, such as stream assessments and anti-degradation.
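
    The reported model is an ordinary least-squares fit of MMI scores on three landscape predictors. A minimal sketch of that kind of fit (the predictor names come from the abstract, but the data, coefficients, and resulting R² below are invented for illustration, not the study's values):

```python
import numpy as np

# columns: percent impervious cover, wetlands metric, catchment slope
X = np.array([[ 2.0, 0.10, 8.0],
              [15.0, 0.02, 3.0],
              [ 7.0, 0.05, 5.0],
              [25.0, 0.01, 2.0],
              [10.0, 0.04, 4.0],
              [ 4.0, 0.08, 7.0]])
y = np.array([78.0, 35.0, 60.0, 22.0, 48.0, 70.0])   # MMI scores (invented)

A = np.column_stack([np.ones(len(X)), X])            # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)         # ordinary least squares
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(coef, round(r2, 2))
```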

  2. Review of systematic reviews on acute procedural pain in children in the hospital setting

    PubMed Central

    Stinson, Jennifer; Yamada, Janet; Dickson, Alison; Lamba, Jasmine; Stevens, Bonnie

    2008-01-01

    BACKGROUND: Acute pain is a common experience for hospitalized children. Despite mounting research on treatments for acute procedure-related pain, it remains inadequately treated. OBJECTIVE: To critically appraise all systematic reviews on the effectiveness of acute procedure-related pain management in hospitalized children. METHODS: Published systematic reviews and meta-analyses on pharmacological and nonpharmacological management of acute procedure-related pain in hospitalized children aged one to 18 years were evaluated. Electronic searches were conducted in the Cochrane Database of Systematic Reviews, Medline, EMBASE, the Cumulative Index to Nursing and Allied Health Literature and PsycINFO. Two reviewers independently selected articles for review and assessed their quality using a validated seven-point quality assessment measure. Any disagreements were resolved by a third reviewer. RESULTS: Of 1469 published articles on interventions for acute pain in hospitalized children, eight systematic reviews met the inclusion criteria and were included in the analysis. However, only five of these reviews were of high quality. Critical appraisal of pharmacological pain interventions indicated that amethocaine was superior to EMLA (AstraZeneca Canada Inc) for reducing needle pain. Distraction and hypnosis were nonpharmacological interventions effective for management of acute procedure-related pain in hospitalized children. CONCLUSIONS: There is growing evidence of rigorous evaluations of both pharmacological and nonpharmacological strategies for acute procedure-related pain in children; however, the evidence underlying some commonly used strategies is limited. The present review will enable the creation of a future research plan to facilitate clinical decision making and to develop clinical policy for managing acute procedure-related pain in children. PMID:18301816

  3. Development of Web-Based Menu Planning Support System and its Solution Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Kashima, Tomoko; Matsumoto, Shimpei; Ishii, Hiroaki

    2009-10-01

    Recently, lifestyle-related diseases have become an object of public concern, while at the same time people are becoming more health conscious. We assume that insufficient circulation of knowledge on dietary habits is an essential factor in causing lifestyle-related diseases. This paper focuses on everyday meals close to our life and proposes a well-balanced menu planning system as a preventive measure against lifestyle-related diseases. The system is developed using a Web-based frontend and provides multi-user services and menu-information sharing capabilities like social networking services (SNS). The system is implemented on a Web server running Apache (HTTP server software), MySQL (database management system), and PHP (scripting language for dynamic Web pages). For the menu planning, a genetic algorithm is applied by formulating the problem as multidimensional 0-1 integer programming.
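
    In the 0-1 formulation, each dish is a binary decision variable and the genetic algorithm searches over bit-strings. A minimal sketch in that spirit (the dishes, calorie values, and single calorie-target objective are invented; the paper's system balances nutrition across more dimensions):

```python
import random

random.seed(1)
dishes = [("rice", 250), ("miso soup", 60), ("grilled fish", 180),
          ("salad", 90), ("tofu", 110), ("tempura", 320), ("fruit", 80)]
TARGET_KCAL = 600

def fitness(bits):   # lower is better: distance from the calorie target
    kcal = sum(c for (_, c), b in zip(dishes, bits) if b)
    return abs(kcal - TARGET_KCAL)

def crossover(a, b):                       # one-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.1):                # flip each bit with given probability
    return [b ^ (random.random() < rate) for b in bits]

pop = [[random.randint(0, 1) for _ in dishes] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:10]                     # elitist selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
best = min(pop, key=fitness)
print([name for (name, _), b in zip(dishes, best) if b], fitness(best))
```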

  4. Data Model for Multi Hazard Risk Assessment Spatial Support Decision System

    NASA Astrophysics Data System (ADS)

    Andrejchenko, Vera; Bakker, Wim; van Westen, Cees

    2014-05-01

    The goal of the CHANGES Spatial Decision Support System is to support end-users in making decisions related to risk-reduction measures for areas at risk from multiple hydro-meteorological hazards. The crucial parts in the design of the system are the user requirements, the data model, the data storage and management, and the relationships between the objects in the system. The data model is implemented entirely with an open source database management system with a spatial extension. The web application is implemented using open source geospatial technologies, with PostGIS as the database, Python for scripting, and GeoServer and JavaScript libraries for visualization and the client-side user interface. The model can handle information from different study areas (currently, study areas from France, Romania, Italy, and Poland are considered). Furthermore, the data model handles information about administrative units; projects accessible by different types of users; user-defined hazard types (floods, snow avalanches, debris flows, etc.); hazard intensity maps of different return periods; spatial probability maps; elements-at-risk maps (buildings, land parcels, linear features, etc.); and economic and population vulnerability information dependent on the hazard type and the type of element at risk, in the form of vulnerability curves. The system has an inbuilt database of vulnerability curves, but users can also add their own. The model also manages combinations of different scenarios (e.g., related to climate change, land use change, or population change) and alternatives (possible risk-reduction measures), as well as data structures for saving the calculated economic or population loss or exposure per element at risk, aggregating the loss and exposure using the administrative unit maps, and, finally, producing the risk maps. The risk data can be used for cost-benefit analysis (CBA) and spatial multi-criteria evaluation (SMCE), and the data model includes data structures for both. The model is at the stage where risk and cost-benefit calculations can be stored; the remaining part is currently under development. Multi-criteria information, user management, and their relation to the rest of the model are our next steps. A carefully designed data model plays a crucial role in the development of the whole system: it enables rapid development, keeps the data consistent, and ultimately supports the end-user in making good decisions on risk-reduction measures related to multiple natural hazards. This work is part of the EU FP7 Marie Curie ITN "CHANGES" project (www.changes-itn.edu).
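
    A toy relational sketch of the entities described above (study areas, user-defined hazard types, return-period hazard maps, elements at risk, vulnerability curves) follows. The table and column names are my own guesses at a plausible layout, and the in-memory SQLite stand-in omits the PostGIS geometry columns the real system relies on.

        import sqlite3

        ddl = """
        CREATE TABLE study_area  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE hazard_type (id INTEGER PRIMARY KEY, name TEXT);     -- e.g. flood, debris flow
        CREATE TABLE hazard_map  (id INTEGER PRIMARY KEY,
                                  study_area_id INTEGER REFERENCES study_area(id),
                                  hazard_type_id INTEGER REFERENCES hazard_type(id),
                                  return_period_years INTEGER);          -- raster reference omitted
        CREATE TABLE element_at_risk (id INTEGER PRIMARY KEY,
                                      study_area_id INTEGER REFERENCES study_area(id),
                                      kind TEXT);                        -- building, land parcel, ...
        CREATE TABLE vulnerability_curve (id INTEGER PRIMARY KEY,
                                          hazard_type_id INTEGER REFERENCES hazard_type(id),
                                          element_kind TEXT,
                                          curve_json TEXT);              -- intensity -> damage pairs
        """
        con = sqlite3.connect(":memory:")
        con.executescript(ddl)
        con.execute("INSERT INTO study_area (name) VALUES ('Example study area')")
        print(con.execute("SELECT name FROM study_area").fetchall())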

  5. Learning Asset Technology Integration Support Tool Design Document

    DTIC Science & Technology

    2010-05-11

    language known as Hypertext Preprocessor (PHP) and by MySQL – a relational database management system that can also be used for content management. It... Requirements: The LATIST tool will be implemented utilizing a WordPress platform with MySQL as the database. Also the LATIST system must effectively work... MySQL. When designing the LATIST system there are several considerations which must be accounted for in the working prototype. These include: • DAU

  6. [Research and development of medical case database: a novel medical case information system integrating with biospecimen management].

    PubMed

    Pan, Shiyang; Mu, Yuan; Wang, Hong; Wang, Tong; Huang, Peijun; Ma, Jianfeng; Jiang, Li; Zhang, Jie; Gu, Bing; Yi, Lujiang

    2010-04-01

    To meet the need to manage medical case information and biospecimens simultaneously, we developed a novel medical case information system integrated with biospecimen management. The database, established with MS SQL Server 2000, covered basic information, clinical diagnosis, imaging diagnosis, pathological diagnosis, and clinical treatment of patients; physicochemical properties, inventory management, and laboratory analysis of biospecimens; and user logs and data maintenance. The client application, developed with Visual C++ 6.0 and based on the Client/Server model, was used to implement medical case and biospecimen management. The system can perform input, browsing, query, and summary of cases and related biospecimen information, and can automatically synthesize case records from the database. It supports management not only of long-term follow-up of individuals, but also of grouped cases organized according to research aims. The system can improve the efficiency and quality of clinical research in which biospecimens are used in coordination with case data. It realizes synthesized and dynamic management of medical cases and biospecimens, and may be considered a new management platform.

  7. Cricket: A Mapped, Persistent Object Store

    NASA Technical Reports Server (NTRS)

    Shekita, Eugene; Zwilling, Michael

    1996-01-01

    This paper describes Cricket, a new database storage system that is intended to be used as a platform for design environments and persistent programming languages. Cricket uses the memory management primitives of the Mach operating system to provide the abstraction of a shared, transactional single-level store that can be directly accessed by user applications. In this paper, we present the design and motivation for Cricket. We also present some initial performance results which show that, for its intended applications, Cricket can provide better performance than a general-purpose database storage system.
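
    Cricket's single-level store maps database pages directly into the application's address space, so object reads and writes become plain memory operations. A bare sketch of that flavor using Python's mmap follows; it shows only the mapping itself, with none of Cricket's sharing, transactions, or recovery, and the file name and record layout are invented.

        import mmap, os, struct

        PATH = "store.dat"
        # Create a one-page backing file, then map it into the process address space.
        with open(PATH, "wb") as f:
            f.write(b"\0" * 4096)

        with open(PATH, "r+b") as f:
            mem = mmap.mmap(f.fileno(), 4096)
            struct.pack_into("ii", mem, 0, 42, 7)    # write two ints "in place", like object fields
            mem.flush()                              # push the dirty page to stable storage
            mem.close()

        with open(PATH, "r+b") as f:
            mem = mmap.mmap(f.fileno(), 4096)
            print(struct.unpack_from("ii", mem, 0))  # (42, 7): data survived, no explicit read call
            mem.close()
        os.remove(PATH)                              # clean up the demo file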

  8. Advanced telemedicine development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.W.; George, J.E.; Gavrilov, E.M.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of this project was to develop a Java-based, electronic, medical-record system that can handle multimedia data, work over a wide-area network based on open standards, and utilize an existing database back end. The physician is meant to be entirely unaware that a database sits behind the scenes, seeing only that he/she can access and manage the relevant information needed to treat the patient.

  9. Initial experiences with building a health care infrastructure based on Java and object-oriented database technology.

    PubMed

    Dionisio, J D; Sinha, U; Dai, B; Johnson, D B; Taira, R K

    1999-01-01

    A multi-tiered telemedicine system based on Java and object-oriented database technology has yielded a number of practical insights and experiences on their effectiveness and suitability as implementation bases for a health care infrastructure. The advantages and drawbacks to their use, as seen within the context of the telemedicine system's development, are discussed. Overall, these technologies deliver on their early promise, with a few remaining issues that are due primarily to their relative newness.

  10. Standardization of XML Database Exchanges and the James Webb Space Telescope Experience

    NASA Technical Reports Server (NTRS)

    Gal-Edd, Jonathan; Detter, Ryan; Jones, Ron; Fatig, Curtis C.

    2007-01-01

    Personnel from the National Aeronautics and Space Administration (NASA) James Webb Space Telescope (JWST) Project have been working with various standards communities, such as the Object Management Group (OMG) and the Consultative Committee for Space Data Systems (CCSDS), to assist in the definition of a common Extensible Markup Language (XML) database exchange format. The CCSDS and OMG standards are intended for the exchange of core command and telemetry information, not for all database information needed to run a NASA space mission. The mission-specific database, containing all the information needed for a space mission, is translated from/to the standard using a translator. The standard is meant to provide a system that encompasses 90% of the information needed for command and telemetry processing. This paper discusses standardization of the XML database exchange format, tools used, and the JWST experience, as well as future work with XML standards groups, both commercial and government.
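
    The translator pattern described here converts between a mission-specific database record and a standard XML form. The sketch below shows such a round trip with Python's xml.etree.ElementTree on an invented parameter element; it does not reproduce the actual CCSDS/OMG schema.

        import xml.etree.ElementTree as ET

        # Hypothetical command/telemetry fragment -- NOT the real CCSDS/OMG exchange
        # schema, just an illustration of translating a database record to XML and back.
        record = {"mnemonic": "TEMP01", "type": "telemetry", "units": "degC"}

        root = ET.Element("parameter", attrib={"type": record["type"]})
        ET.SubElement(root, "mnemonic").text = record["mnemonic"]
        ET.SubElement(root, "units").text = record["units"]
        xml_text = ET.tostring(root, encoding="unicode")
        print(xml_text)

        parsed = ET.fromstring(xml_text)           # the reverse direction: XML -> mission DB
        print({"mnemonic": parsed.findtext("mnemonic"),
               "type": parsed.get("type"),
               "units": parsed.findtext("units")})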

  11. Metabolonote: A Wiki-Based Database for Managing Hierarchical Metadata of Metabolome Analyses

    PubMed Central

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics – technology for comprehensive detection of small molecules in an organism – lags behind the other “omics” in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of published data, with the result that submitters (the researchers who generate the data) are insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called “Togo Metabolome Data” (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata, such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data, and permission to attach related information to the metadata, provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers’ understanding and use of data but also submitters’ motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata, together with feedback from readers, also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata records for analyzed data obtained from 35 biological species are currently published. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/. PMID:25905099

  12. Metabolonote: a wiki-based database for managing hierarchical metadata of metabolome analyses.

    PubMed

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics - technology for comprehensive detection of small molecules in an organism - lags behind the other "omics" in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of published data, with the result that submitters (the researchers who generate the data) are insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called "Togo Metabolome Data" (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata, such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data, and permission to attach related information to the metadata, provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers' understanding and use of data but also submitters' motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata, together with feedback from readers, also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata records for analyzed data obtained from 35 biological species are currently published. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/.
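
    To illustrate the tree-structured metadata with per-level IDs, here is a toy record in that spirit; the ID syntax and field names are invented for illustration and do not follow the actual TogoMD specification.

        import json

        # Illustrative only: tree-structured metadata keyed by per-level IDs, in the
        # spirit of the study-purpose / sample / method / data hierarchy described above.
        metadata = {
            "study":  {"id": "TGE0001",          "purpose": "leaf metabolite survey"},
            "sample": {"id": "TGE0001.S1",       "species": "Arabidopsis thaliana"},
            "method": {"id": "TGE0001.S1.M1",    "platform": "LC-MS"},
            "data":   {"id": "TGE0001.S1.M1.D1", "file": "run01.mzML"},
        }

        def resolve(md, level_id):
            # Unique access to one level of the tree by its ID, as the abstract describes.
            return next(v for v in md.values() if v["id"] == level_id)

        print(json.dumps(resolve(metadata, "TGE0001.S1.M1"), indent=2))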

  13. United States Army Medical Materiel Development Activity: 1997 Annual Report.

    DTIC Science & Technology

    1997-01-01

    business planning and execution information management system (Project Management Division Database (PMDD) and Product Management Database System (PMDS... MANAGEMENT • Project Management Division Database (PMDD), Product Management Database System (PMDS), and Special Users Database System: The existing... System (FMS), were investigated. New Product Managers and Project Managers were added into PMDS and PMDD. A separate division, Support, was

  14. Modeling Functional Neuroanatomy for an Anatomy Information System

    PubMed Central

    Niggemann, Jörg M.; Gebert, Andreas; Schulz, Stefan

    2008-01-01

    Objective Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the “internal wiring” of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. Design The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. Measurements The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Results Internal wiring as well as functional pathways can correctly be represented and tracked. Conclusion This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems. PMID:18579841

  15. Development of a preliminary relative risk model for evaluating regional ecological conditions in the Delaware River Estuary, USA.

    PubMed

    Iannuzzi, Timothy J; Durda, Judi L; Preziosi, Damian V; Ludwig, David F; Stahl, Ralph G; DeSantis, Amanda A; Hoke, Robert A

    2010-01-01

    Effective environmental management and restoration of urbanized systems such as the Delaware River Estuary requires a holistic understanding of the relative importance of various stressor-related impacts throughout the watershed, both historical and ongoing. To that end, it is important to involve as many stakeholders as possible in the management process and to develop a system for sharing scientific data and information, as well as effective technical tools for evaluating and disseminating the data needed to make management decisions. In this study, we describe a preliminary assessment that was undertaken to evaluate the relative risks for the variety of stressors currently operating within the Delaware Estuary using a relative risk model (RRM) framework. This model was constructed using existing data and information on the ecological conditions and stressors in the main-stem Delaware River below the head of tide at Trenton, New Jersey, USA. A large database was developed with pertinent data from a variety of library, scientific, and regulatory sources. Data were compiled, reviewed, and characterized before development of the Estuary-specific RRM. Our primary goals and objectives in developing this preliminary RRM for the Estuary were to 1) determine whether the RRM framework can be adapted to a large, complex estuarine system such as the Delaware River, 2) identify the issues associated with adapting the model framework to the various management issues and regional areas/habitats of the River, and 3) help identify data needs and potential refinements that might be needed to more specifically quantify relative stressor risks in various areas and habitats of the Estuary, to better inform future management goals/actions by stakeholders. The key conclusions of our preliminary assessment are 1) a diverse suite of stressors is likely affecting the ecological conditions of the Delaware Estuary, 2) chemical (toxicants/contaminants) and physical (sedimentation, habitat loss) stressors were found to be on par in their rankings, and 3) the RRM, in its current form, made it difficult to effectively balance the inequality in the sizes of the study subareas considered in the assessment. Management objectives and related research activities should focus on collecting the necessary data and information to further refine the RRM and assess the relative impacts of these stressors at various scales in the Estuary. By having such a framework and tool available, we believe that stakeholders within the Delaware River watershed will be able to make more informed, risk-based management decisions regarding restoration options for the Estuary.

  16. BioWarehouse: a bioinformatics database warehouse toolkit

    PubMed Central

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David WJ; Tenenbaum, Jessica D; Karp, Peter D

    2006-01-01

    Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the database integration problem for bioinformatics. PMID:16556315
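
    The payoff of warehousing all component databases within one DBMS is that a single SQL statement can join across sources, as in the paper's orphan-enzyme example. Below is a toy version of that query against an in-memory SQLite stand-in; the table and column names are invented for illustration and do not reproduce the BioWarehouse schema.

        import sqlite3

        con = sqlite3.connect(":memory:")
        # Two toy "component databases" loaded into one warehouse schema.
        con.executescript("""
        CREATE TABLE enzyme  (ec TEXT PRIMARY KEY, activity TEXT);     -- e.g. from ENZYME
        CREATE TABLE protein (id TEXT PRIMARY KEY, ec TEXT, seq TEXT); -- e.g. from UniProt
        INSERT INTO enzyme  VALUES ('1.1.1.1', 'alcohol dehydrogenase'),
                                   ('9.9.9.9', 'orphan activity');
        INSERT INTO protein VALUES ('P00330', '1.1.1.1', 'MSIP...');
        """)
        # Single-DBMS, multi-source query: which enzyme activities have no sequence?
        orphans = con.execute("""
            SELECT e.ec, e.activity FROM enzyme e
            LEFT JOIN protein p ON p.ec = e.ec
            WHERE p.id IS NULL
        """).fetchall()
        print(orphans)   # the same style of query behind the paper's 36% figure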

  17. BioWarehouse: a bioinformatics database warehouse toolkit.

    PubMed

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.

  18. French database of children and adolescents with Prader-Willi syndrome

    PubMed Central

    Molinas, Catherine; Cazals, Laurent; Diene, Gwenaelle; Glattard, Melanie; Arnaud, Catherine; Tauber, Maithe

    2008-01-01

    Background Prader-Willi syndrome (PWS) is a rare multisystem genetic disease leading to severe complications mainly related to obesity. We strongly lack information on the natural history of this complex disease and on what factors are involved in its evolution and outcome. One of the objectives of the French reference centre for Prader-Willi syndrome, set up in 2004, was to establish a database in order to compile an inventory of Prader-Willi syndrome cases and initiate a national cohort study in the area covered by the centre. Description The database includes medical data of children and adolescents with Prader-Willi syndrome, details about their management, socio-demographic data on their families, psychological data, and quality of life of the parents. The tools and organisation used to ensure data collection and data quality in keeping with good clinical practice procedures are discussed, and the main characteristics of our Prader-Willi population at inclusion are presented. Conclusion This database, covering all aspects of the clinical, psychological, and social profiles of PWS, including family psychological data and quality of life, will be a powerful tool for retrospective studies of this complex and multifactorial disease and could be a basis for the design of future prospective multicentric studies. The complete database and the Stata.do files are available to any researcher wishing to use them for non-commercial purposes and can be provided upon request to the corresponding author. PMID:18831731

  19. An evidence based review of the assessment and management of penetrating neck trauma.

    PubMed

    Burgess, C A; Dale, O T; Almeyda, R; Corbridge, R J

    2012-02-01

    Although relatively uncommon, penetrating neck trauma has the potential for serious morbidity and an estimated mortality of up to 6%. The assessment and management of patients who have sustained a penetrating neck injury has historically been an issue surrounded by significant controversy. OBJECTIVES OF REVIEW: To assess recent evidence relating to the assessment and management of penetrating neck trauma, highlighting areas of controversy with an overall aim of formulating clinical guidelines according to a care pathway format. Structured, non-systematic review of recent medical literature. An electronic literature search was performed in May 2011. The Medline database was searched using the Medical Subject Headings terms 'neck injuries' and 'wounds, penetrating' in conjunction with the terms 'assessment' or 'management'. Embase was searched with the terms 'penetrating trauma' and 'neck injury', also in conjunction with the terms 'assessment' and 'management'. Results were limited to articles published in English from 1990 to the present day. Abstracts were reviewed by the first three authors to select full-text articles for further critical appraisal. The references and citation links of these articles were hand-searched to identify further articles of relevance. 147 relevant articles were identified by the electronic literature search, comprising case series, case reports and reviews. 33 were initially selected for further evaluation. Although controversy continues to surround the management of penetrating neck trauma, the role of selective non-operative management and the utility of CT angiography to investigate potential vascular injuries appears to be increasingly accepted. © 2011 Blackwell Publishing Ltd.

  20. A combined strategy involving Sanger and 454 pyrosequencing increases genomic resources to aid in the management of reproduction, disease control and genetic selection in the turbot (Scophthalmus maximus).

    PubMed

    Ribas, Laia; Pardo, Belén G; Fernández, Carlos; Alvarez-Diós, José Antonio; Gómez-Tato, Antonio; Quiroga, María Isabel; Planas, Josep V; Sitjà-Bobadilla, Ariadna; Martínez, Paulino; Piferrer, Francesc

    2013-03-15

    Genomic resources for plant and animal species that are under exploitation primarily for human consumption are increasingly important, among other things, for understanding physiological processes and for establishing adequate genetic selection programs. Currently available techniques for high-throughput sequencing have been implemented in a number of species, including fish, to obtain a proper description of the transcriptome. The objective of this study was to generate a comprehensive transcriptomic database for turbot, a highly priced farmed fish species in Europe with potential expansion to other areas of the world, for which there are unsolved production bottlenecks, in order to better understand reproductive- and immune-related functions. This information is essential for implementing marker-assisted selection programs useful to the turbot industry. Expressed sequence tags were generated by Sanger sequencing of cDNA libraries from different immune-related tissues after several parasitic challenges. The resulting database ("Turbot 2 database") was enlarged with sequences generated from a 454 sequencing run of brain-hypophysis-gonadal axis-derived RNA obtained from turbot at different development stages. The assembly of Sanger and 454 sequences generated 52,427 consensus sequences ("Turbot 3 database"), of which 23,661 were successfully annotated. A total of 1,410 sequences were confirmed to be related to reproduction, and key genes involved in sex differentiation and maturation were identified for the first time in turbot (AR, AMH, SRY-related genes, CYP19A, ZPGs, STAR, FSHR, etc.). Similarly, 2,241 sequences were related to the immune system, and several novel key immune genes were identified (BCL, TRAF, NCK, CD28 and TOLLIP, among others). The number of genes of many relevant reproduction- and immune-related pathways present in the database was 50-90% of the total gene count of each pathway. In addition, 1,237 microsatellites and 7,362 single nucleotide polymorphisms (SNPs) were also compiled. Further, 2,976 putative natural antisense transcripts (NATs), including microRNAs, were also identified. The combined sequencing strategies employed here significantly increased the turbot genomic resources available, including 34,400 novel sequences. The generated database contains a large number of genes relevant for reproduction- and immune-related studies, with excellent coverage of most genes present in many relevant physiological pathways. This database also allowed the identification of many microsatellite and SNP markers that will be very useful for population and genome screening and a valuable aid in marker-assisted selection programs.

  1. Migration of legacy mumps applications to relational database servers.

    PubMed

    O'Kane, K C

    2001-07-01

    An extended implementation of the Mumps language is described that facilitates vendor-neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs to operating-system-independent, standard C code for subsequent compilation to fully stand-alone, binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry-standard, networked, relational database management servers (RDBMS), thus freeing Mumps applications from dependence upon vendor-specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDBMS access, the programs generated by this development environment can be used with any RDBMS system that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.
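
    To picture the migration, note that a hierarchical Mumps global such as ^PATIENT(id,field)=value flattens naturally into a key/value relation. The sketch below mimics SET and $GET over a SQL table using SQLite; the names and layout are illustrative assumptions, not the compiler or schema the abstract describes.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE globals (name TEXT, k1 TEXT, k2 TEXT, value TEXT)")

        def gset(name, k1, k2, value):
            # Rough analogue of the Mumps statement  SET ^name(k1,k2)=value
            con.execute("INSERT INTO globals VALUES (?,?,?,?)", (name, k1, k2, value))

        def gget(name, k1, k2):
            # Rough analogue of  $GET(^name(k1,k2)) , returning "" when undefined
            row = con.execute("SELECT value FROM globals WHERE name=? AND k1=? AND k2=?",
                              (name, k1, k2)).fetchone()
            return row[0] if row else ""

        gset("PATIENT", "42", "NAME", "DOE,JOHN")
        print(gget("PATIENT", "42", "NAME"))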

  2. A Middle-Range Explanatory Theory of Self-Management Behavior for Collaborative Research and Practice.

    PubMed

    Blok, Amanda C

    2017-04-01

    To report an analysis of the concept of self-management behaviors. Self-management behaviors are typically associated with disease management, the term being frequently used by nurse researchers in relation to chronic illness management and by international health organizations in the development of disease management interventions. A concept analysis was conducted within the context of Orem's self-care framework. Walker and Avant's eight-step concept analysis approach guided the analysis. Academic databases were searched for relevant literature, including CINAHL, the Cochrane Databases of Systematic Reviews and Register of Controlled Trials, MEDLINE, PsycARTICLES and PsycINFO, and SocINDEX. Literature using the term "self-management behavior" and published between April 2001 and March 2015 was analyzed for attributes, antecedents, and consequences. A total of 189 journal articles were reviewed. Self-management behaviors are defined as proactive actions related to lifestyle, a problem, planning, collaborating, and mental support, as well as reactive actions related to a circumstantial change, to achieve a goal influenced by the antecedents of physical, psychological, socioeconomic, and cultural characteristics, as well as collaborative and received support. The theoretical definition and middle-range explanatory theory of self-management behaviors will guide future collaborative research and clinical practice for disease management. © 2016 Wiley Periodicals, Inc.

  3. Interconnecting heterogeneous database management systems

    NASA Technical Reports Server (NTRS)

    Gligor, V. D.; Luckenbaugh, G. L.

    1984-01-01

    It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMS). Problems regarding effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs which exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, the users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.
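
    Uniform, integrated access is easiest to picture as one query surface over several stores. The sketch below uses SQLite's ATTACH to query two separate database files through a single connection; this is a same-engine simplification for illustration, not the cross-vendor, networked interconnection the paper analyzes.

        import os, sqlite3

        # Build two separate "local" databases, then query them through one interface.
        for name, rows in [("site_a.db", [("alice",)]), ("site_b.db", [("bob",)])]:
            db = sqlite3.connect(name)
            db.execute("CREATE TABLE users (name TEXT)")
            db.executemany("INSERT INTO users VALUES (?)", rows)
            db.commit()
            db.close()

        con = sqlite3.connect("site_a.db")
        con.execute("ATTACH DATABASE 'site_b.db' AS b")   # second store, same connection
        print(con.execute(
            "SELECT name FROM users UNION SELECT name FROM b.users").fetchall())
        con.close()
        for name in ("site_a.db", "site_b.db"):
            os.remove(name)                               # clean up the demo files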

  4. Lessons Learned With a Global Graph and Ozone Widget Framework (OWF) Testbed

    DTIC Science & Technology

    2013-05-01

    of operating system and database environments. The following is one example. Requirements are: Java 1.6+ and a Relational Database Management... We originally tried to use MySQL as our database, because we were more familiar with it, but since the database dumps as well as most of the... Global Graph Rest Services: In order to set up the Global Graph Rest Services, you will need to have the following dependencies installed: Java 1.6

  5. Design and Implementation of a Metadata-rich File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
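
    In QFS's graph data model, files, attributes, and relationships are all first-class. A toy in-memory rendering of that idea follows, with a single traversal step standing in for a Quasar-style path query; the file names, attributes, and relation label are invented.

        # Files, user-defined attributes, and file relationships as first-class objects:
        # a toy in-memory graph in the spirit of QFS (names and query are illustrative).
        files = {
            "raw1.dat": {"attrs": {"instrument": "lidar"}, "links": {"derived": ["out1.png"]}},
            "out1.png": {"attrs": {"kind": "plot"},        "links": {}},
        }

        def follow(name, relation):
            # Minimal stand-in for one path step: file --relation--> files.
            return files[name]["links"].get(relation, [])

        # "Which files were derived from lidar data?"
        hits = [t for f, node in files.items()
                  if node["attrs"].get("instrument") == "lidar"
                  for t in follow(f, "derived")]
        print(hits)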

  6. Knowledge-Based Vision Techniques for the Autonomous Land Vehicle Program

    DTIC Science & Technology

    1991-10-01

    Knowledge System The CKS is an object-oriented knowledge database that was originally designed to serve as the central information manager for a... "Representation Space: An Approach to the Integration of Visual Information," Proc. of DARPA Image Understanding Workshop, Palo Alto, CA, pp. 263-272, May 1989... Strat, "Information Management in a Sensor-Based Autonomous System," Proc. DARPA Image Understanding Workshop, University of Southern CA, Vol. 1, pp

  7. Design and Development of a Clinical Risk Management Tool Using Radio Frequency Identification (RFID)

    PubMed Central

    Pourasghar, Faramarz; Tabrizi, Jafar Sadegh; Yarifard, Khadijeh

    2016-01-01

    Background: Patient safety is one of the most important elements of quality of healthcare. It means preventing any harm to patients during the medical care process. Objective: This paper introduces a cost-effective tool in which Radio Frequency Identification (RFID) technology is used to identify medical errors in hospital. Methods: The proposed clinical error management system (CEMS) consists of a reader device, a transfer/receiver device, a database, and managing software. The reader device is wireless and works using radio waves. The reader sends and receives data to/from the database via the transfer/receiver device, which is connected to the computer via a USB port. The database contains data about patients’ medication orders. Results: The CEMS has the ability to identify clinical errors before they occur and warns the caregiver with voice and visual messages to prevent the error. This device reduces errors and thus improves patient safety. Conclusion: A new tool including software and hardware was developed in this study. Application of this tool in clinical settings can help nurses prevent medical errors. It can also be a useful tool for clinical risk management. Using this device can improve patient safety to a considerable extent and thus improve the quality of healthcare. PMID:27147802
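
    The core of the described workflow is checking a scanned identifier against the medication-order database before an error can occur. The sketch below shows that check in miniature; the tag format, order table, and messages are illustrative assumptions, not the CEMS implementation.

        # Illustrative check only: compare a scanned (patient_tag, drug) pair against
        # the medication-order database before administration, as the workflow describes.
        orders = {("TAG-0042", "amoxicillin"): {"dose_mg": 500, "route": "oral"}}

        def check_administration(patient_tag, drug):
            order = orders.get((patient_tag, drug))
            if order is None:
                return "WARNING: no matching order -- possible clinical error"
            return f"OK: give {order['dose_mg']} mg {drug} ({order['route']})"

        print(check_administration("TAG-0042", "amoxicillin"))
        print(check_administration("TAG-0042", "aspirin"))   # triggers the warning path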

  8. Managing Automation: A Process, Not a Project.

    ERIC Educational Resources Information Center

    Hoffmann, Ellen

    1988-01-01

    Discussion of issues in management of library automation includes: (1) hardware, including systems growth and contracts; (2) software changes, vendor relations, local systems, and microcomputer software; (3) item and authority databases; (4) automation and library staff, organizational structure, and managing change; and (5) environmental issues,…

  9. Design and Establishment of Quality Model of Fundamental Geographic Information Database

    NASA Astrophysics Data System (ADS)

    Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.

    2018-04-01

    In order to make the quality evaluation of the Fundamental Geographic Information Database (FGIDB) more comprehensive, objective, and accurate, this paper develops a quality model of the FGIDB, formed from the standardization of database construction and quality control, the conformity of data set quality, and the functionality of the database management system, and also designs the overall principles, contents, and methods of quality evaluation for the FGIDB, providing a basis and reference for carrying out its quality control and quality evaluation. Based on this quality model framework, the paper progressively designs the quality elements, evaluation items, and properties of the Fundamental Geographic Information Database. Connected organically, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for stipulating quality requirements and evaluating the quality of the Fundamental Geographic Information Database, and is of great significance for quality assurance in the design and development stage, for formulating requirements in the testing and evaluation stage, and for constructing a standard system for the quality evaluation technology of the Fundamental Geographic Information Database.

  10. Health information and communication system for emergency management in a developing country, Iran.

    PubMed

    Seyedin, Seyed Hesam; Jamali, Hamid R

    2011-08-01

    Disasters are fortunately rare occurrences. However, accurate and timely information and communication are vital to adequately prepare individual health organizations for such events. The current article investigates the health-related communication and information systems for emergency management in Iran. A mixed qualitative and quantitative methodology was used in this study. A sample of 230 health service managers was surveyed using a questionnaire, and 65 semi-structured interviews were also conducted with public health and therapeutic affairs managers who were responsible for emergency management. A range of problems was identified, including fragmentation of information, lack of local databases, lack of a clear information strategy, and lack of a formal system for logging disaster-related information at the regional or local level. Recommendations were made for improving the national emergency management information and communication system. The findings have implications for health organizations in developing and developed countries, especially in the Middle East. Creating disaster-related information databases, creating protocols and standards, setting an information strategy, training staff, and hosting a center for the information system in the Ministry of Health to centrally manage and share the data could improve the current information system.

  11. User's guide to FBASE: Relational database software for managing R1/R4 (Northern/Intermountain Regions) fish habitat inventory data

    Treesearch

    Sherry P. Wollrab

    1999-01-01

    FBASE is a microcomputer relational database package that handles data collected using the R1/R4 Fish and Fish Habitat Standard Inventory Procedures (Overton and others 1997). FBASE contains standard data entry screens, data validations for quality control, data maintenance features, and summary report options. This program also prepares data for importation into an...

  12. Cost-Effectiveness of Social Work Services in Aging: An Updated Systematic Review

    ERIC Educational Resources Information Center

    Rizzo, Victoria M.; Rowe, Jeannine M.

    2016-01-01

    Objectives: This study examines the impact of social work interventions in aging on quality of life (QOL) and cost outcomes in four categories (health, mental health, geriatric evaluation and management, and caregiving). Methods: Systematic review methods are employed. Databases were searched for articles published in English between 2004 and 2012…

  13. Ownership Patterns on Hardwood Lands

    Treesearch

    Sam C. Doak

    1991-01-01

    The pattern of land use and ownership—including sizes of ownerships, types of landowners, and their various management objectives—affects the level of benefits available from California's hardwood resource. Geographic information system (GIS) databases of ownership and population characteristics were developed for two California counties using local assessors...

  14. Usefulness of Canadian Public Health Insurance Administrative Databases to Assess Breast and Ovarian Cancer Screening Imaging Technologies for BRCA1/2 Mutation Carriers.

    PubMed

    Larouche, Geneviève; Chiquette, Jocelyne; Plante, Marie; Pelletier, Sylvie; Simard, Jacques; Dorval, Michel

    2016-11-01

    In Canada, recommendations for clinical management of hereditary breast and ovarian cancer among individuals carrying a deleterious BRCA1 or BRCA2 mutation have been available since 2007. Eight years later, very little is known about the uptake of screening and risk-reduction measures in this population. Because Canada's public health care system falls under provincial jurisdictions, using provincial health care administrative databases appears a valuable option to assess management of BRCA1/2 mutation carriers. The objective was to explore the usefulness of public health insurance administrative databases in British Columbia, Ontario, and Quebec to assess management after BRCA1/2 genetic testing. Official public health insurance documents were considered potentially useful if they had specific procedure codes and pertained to procedures performed in the public and private health care systems. All 3 administrative databases have specific procedure codes for mammography and breast ultrasounds. Only Quebec and Ontario have a specific procedure code for breast magnetic resonance imaging. It is impossible to assess, on an individual basis, the frequency of other screening exams, with the exception of CA-125 testing in British Columbia. Screenings done in private practice are excluded from the administrative databases unless covered by special agreements for reimbursement, such as all breast imaging exams in Ontario and mammograms in British Columbia and Quebec. There are no specific procedure codes for risk-reduction surgeries for breast and ovarian cancer. Population-based assessment of breast and ovarian cancer risk management strategies other than mammographic screening, using only administrative data, is currently challenging in the 3 Canadian provinces studied. Copyright © 2016 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.

  15. SORTEZ: a relational translator for NCBI's ASN.1 database.

    PubMed

    Hart, K W; Searls, D B; Overton, G C

    1994-07-01

    The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1), an Open Systems Interconnection protocol designed for the purpose of exchanging structured data between software applications rather than as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase) where information can be accessed through the relational query language, SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts, including efficient execution of specific applications, access to multiple databases, and adaptation to database evolution, this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
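
    The essence of the transformation is flattening nested structures into parent/child relations reachable by SQL. A toy version follows, pushing a nested record into two related tables in an in-memory SQLite database; the record shape and table layout are invented and are not the SORTEZ or ASN.1 schema.

        import sqlite3

        # Illustrative flattening of a nested (ASN.1-like) record into relational rows.
        record = {"accession": "U00001", "title": "example entry",
                  "features": [{"kind": "CDS",  "start": 10, "stop": 400},
                               {"kind": "gene", "start": 5,  "stop": 450}]}

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE entry   (accession TEXT PRIMARY KEY, title TEXT);
        CREATE TABLE feature (accession TEXT REFERENCES entry(accession),
                              kind TEXT, start INTEGER, stop INTEGER);
        """)
        con.execute("INSERT INTO entry VALUES (?, ?)",
                    (record["accession"], record["title"]))
        con.executemany("INSERT INTO feature VALUES (?, ?, ?, ?)",
                        [(record["accession"], f["kind"], f["start"], f["stop"])
                         for f in record["features"]])
        # The nested structure is now reachable through ad hoc SQL -- the point of SORTEZ.
        print(con.execute("""SELECT e.accession, f.kind FROM entry e
                             JOIN feature f ON f.accession = e.accession""").fetchall())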

  16. Difficulties and challenges associated with literature searches in operating room management, complete with recommendations.

    PubMed

    Wachtel, Ruth E; Dexter, Franklin

    2013-12-01

    The purpose of this article is to teach operating room managers, financial analysts, and those with a limited knowledge of search engines, including PubMed, how to locate articles they need in the areas of operating room and anesthesia group management. Many physicians are unaware of current literature in their field and evidence-based practices. The most common source of information is colleagues. Many people making management decisions do not read published scientific articles. Databases such as PubMed are available to search for such articles. Other databases, such as citation indices and Google Scholar, can be used to uncover additional articles. Nevertheless, most people are reluctant to consult help resources when they do not know how to accomplish a task, and they are especially reluctant to use on-line help files. Help files and search databases are often difficult to use because they have been designed for users already familiar with the field; they have specialized vocabularies unique to the application. MeSH terms in PubMed are not useful alternatives for operating room management, an important limitation because MeSH is the default when search terms are entered in PubMed. Librarians or those trained in informatics can be valuable assets for searching unusual databases, but they must possess the domain knowledge relevant to the subject they are searching. The search methods we review are especially important when the subject area (e.g., anesthesia group management) is so specific that only 1 or 2 articles address the topic of interest. The materials are presented broadly enough that the reader can extrapolate the findings to other areas of clinical and management issues in anesthesiology.
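
    Searches of the kind the article teaches can also be scripted against NCBI's public E-utilities interface, which backs PubMed. A minimal sketch using the documented esearch endpoint follows; the query string is an example of my own, not one recommended by the article.

        import json, urllib.parse, urllib.request

        # NCBI E-utilities esearch: the programmatic counterpart of PubMed's search box.
        term = '"operating room" AND management[Title]'   # example query only
        url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
               + urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"}))
        with urllib.request.urlopen(url) as resp:
            result = json.load(resp)["esearchresult"]
        print(result["count"], result["idlist"][:5])      # hit count and first PMIDs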

  17. [Algorithms for the identification of hospital stays due to osteoporotic femoral neck fractures in European medical administrative databases using ICD-10 codes: A non-systematic review of the literature].

    PubMed

    Caillet, P; Oberlin, P; Monnet, E; Guillon-Grammatico, L; Métral, P; Belhassen, M; Denier, P; Banaei-Bouchareb, L; Viprey, M; Biau, D; Schott, A-M

    2017-10-01

    Osteoporotic hip fractures (OHF) are associated with significant morbidity and mortality. The French medico-administrative database (SNIIRAM) offers an interesting opportunity to improve the management of OHF. However, the validity of studies conducted with this database relies heavily on the quality of the algorithm used to detect OHF. The aim of the REDSIAM network is to facilitate the use of the SNIIRAM database. The main objective of this study was to present and discuss several OHF-detection algorithms that could be used with this database. A non-systematic literature search was performed. The Medline database was explored for the period January 2005-August 2016. A snowball search was then carried out from the included articles, and field experts were contacted. The extraction was conducted using the chart developed by the REDSIAM network's "Methodology" task force. The ICD-10 codes used to detect OHF are mainly S72.0, S72.1, and S72.2. The performance of these algorithms is at best partially validated. Complementary use of medical and surgical procedure codes would affect their performance. Finally, few studies described how they dealt with fractures of non-osteoporotic origin, re-hospitalization, and potential contralateral fracture cases. Authors in the literature encourage the use of ICD-10 codes S72.0 to S72.2 to develop algorithms for OHF detection. These are the codes most frequently used for OHF in France. Depending on the study objectives, other ICD-10 codes and medical and surgical procedure codes could usefully be discussed for inclusion in the algorithm. Detection and management of duplicates and non-osteoporotic fractures should be considered in the process. Finally, when a study is based on such an algorithm, all these points should be precisely described in the publication. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
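
    A detection pass of the kind discussed reduces to filtering stay records on the S72.0-S72.2 code prefixes plus exclusion rules. The skeleton below illustrates this; the record layout, age cutoff, and trauma-code exclusion are illustrative assumptions, not a validated SNIIRAM algorithm.

        # Skeleton of an OHF-detection pass over hospital-stay records using the
        # ICD-10 codes discussed above (record layout and extra filters are invented).
        OHF_CODES = ("S72.0", "S72.1", "S72.2")

        stays = [
            {"stay_id": 1, "icd10": "S72.1", "age": 83, "trauma_code": None},
            {"stay_id": 2, "icd10": "S72.4", "age": 35, "trauma_code": None},   # shaft fracture: excluded
            {"stay_id": 3, "icd10": "S72.0", "age": 45, "trauma_code": "V43"},  # traffic accident: excluded
        ]

        def is_osteoporotic_hip_fracture(stay, min_age=50):
            return (stay["icd10"].startswith(OHF_CODES)
                    and stay["age"] >= min_age          # crude proxy for osteoporotic origin
                    and stay["trauma_code"] is None)    # drop high-energy trauma cases

        print([s["stay_id"] for s in stays if is_osteoporotic_hip_fracture(s)])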

  18. G-Hash: Towards Fast Kernel-based Similarity Search in Large Graph Databases.

    PubMed

    Wang, Xiaohong; Smalter, Aaron; Huan, Jun; Lushington, Gerald H

    2009-01-01

    Structured data, including sets, sequences, trees, and graphs, pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML documents, among others. Most of the current graph indexing methods focus on subgraph query processing, i.e. determining the set of database graphs that contains the query graph, and hence do not directly support similarity search. In data mining and machine learning, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models for supervised learning, graph kernel functions (i) have high computational complexity and (ii) are non-trivial to index in a graph database. Our objective is to bridge graph kernel functions and similarity search in graph databases by proposing (i) a novel kernel-based similarity measurement and (ii) an efficient indexing structure for graph data management. Our method of similarity measurement builds upon local features extracted from each node and their neighboring nodes in graphs. A hash table is utilized to support efficient storage and fast search of the extracted local features. Using the hash table, a graph kernel function is defined to capture the intrinsic similarity of graphs and support fast similarity query processing. We have implemented our method, which we have named G-hash, and have demonstrated its utility on large chemical graph databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Most importantly, the new similarity measurement and the index structure are scalable to large databases, with smaller indexing size, faster index construction time, and faster query processing time as compared to state-of-the-art indexing methods such as C-tree, gIndex, and GraphGrep.
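
    A much-simplified rendering of the idea: summarize each node by a local feature built from its label and its neighbors' labels, store the features in a hash table, and let the kernel count matching features between graphs. The graphs and the feature definition below are toy choices, not the exact feature set used by G-hash.

        from collections import Counter

        # graph: {node: (label, [neighbor nodes])}
        def local_features(graph):
            feats = []
            for node, (label, nbrs) in graph.items():
                nbr_labels = sorted(graph[n][0] for n in nbrs)
                feats.append((label, tuple(nbr_labels)))
            return Counter(feats)                      # the "hash table" of local features

        def kernel(g1, g2):
            f1, f2 = local_features(g1), local_features(g2)
            return sum(min(f1[k], f2[k]) for k in f1)  # matched-feature count

        ethanol  = {0: ("C", [1]), 1: ("C", [0, 2]), 2: ("O", [1])}
        methanol = {0: ("C", [1]), 1: ("O", [0])}
        print(kernel(ethanol, methanol))               # similarity of two small molecules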

  19. Adapting a geographical information system-based water resource management to the needs of the Romanian water authorities.

    PubMed

    Soutter, Marc; Alexandrescu, Maria; Schenk, Colin; Drobot, Radu

    2009-08-01

    The need for global and integrated approaches to water resources management, both from the quantitative and the qualitative point of view, has long been recognized. Water quality management is a major issue for sustainable development and a mandatory task with respect to the implementation of the European Water Framework Directive as well as the Swiss legislation. However, data modelling to develop relational databases and subsequent geographic information system (GIS)-based water management instruments are a rather recent and not that widespread trend. The publication of overall guidelines for data modelling along with the EU Water Framework Directive is an important milestone in this area. Improving overall water quality requires better and more easily accessible data, but also the possibility to link data to simulation models. Models are to be used to derive indicators that will in turn support decision-making processes. For this whole chain to become effective at a river basin scale, all its components have to become part of the current daily practice of the local water administration. Any system, tool, or instrument that is not designed to meet, first of all, the fundamental needs of its primary end-users has almost no chance to be successful in the longer term. Although based on a pre-existing water resources management system developed in Switzerland, the methodological approach applied to develop a GIS-based water quality management system adapted to the Romanian context followed a set of well-defined steps: the first and very important step is the assessment of needs (on the basis of a careful analysis of the various activities and missions of the water administration and other relevant stakeholders in water management related issues). On that basis, a conceptual data model (CDM) can be developed, to be later on turned into a physical database. Finally, the specifically requested additional functionalities (i.e. functionalities not provided by classical commercial GIS software), also identified during the assessment of needs, are developed. This methodology was applied, on an experimental basin, in the Ialomita River basin. The results obtained from this action-research project consist of a set of tangible elements, among which (1) a conceptual data model adapted to the Romanian specificities regarding water resources management (needs, data availability, etc.), (2) a related spatial relational database (objects and attributes in tables, links, etc.), that can be used to store the data collected, among others, by the water administration, and later on exploited with geographical information systems, (3) a toolbar (in the ESRI environment) offering the requested data processing and visualizing functionalities. Lessons learned from this whole process can be considered as additional, although less tangible, results. The applied methodology is fairly classical and did not come up with revolutionary results. Actually, the interesting aspects of this work are, on the one hand, and obviously, the fact that it produced tools matching the needs of the local (if not national) water administration (i.e. with a good chance of being effectively used in the day-to-day practice), and, on the other hand, the adaptations and adjustments that were needed both at the staff level and in technical terms. This research showed that a GIS-based water management system needs to be backed by some basic data management tools that form the necessary support upon which a GIS can be deployed. 
The main lesson gained is that technology transfer has to pay close attention to differences in existing situations and backgrounds in general, and therefore must show much flexibility. The fact that the original objectives could be adapted to meet the real needs of the local end-users is considered a major factor in achieving a successful adaptation and development of water resources management tools. The time needed to set things up in real life was probably the most underestimated aspect of this technology transfer process. The whole body of material produced (conceptual data model, database and GIS tools) was disseminated among all river basin authorities in Romania on behalf of the national water administration (ANAR). The fact that further developments are envisaged by ANAR, for example to address water quantity issues more precisely, can be seen as an indication that this project succeeded in providing an appropriate input to improve water quality in Romania over the long term.

  20. Comparison of hospital databases on antibiotic consumption in France, for a single management tool.

    PubMed

    Henard, S; Boussat, S; Demoré, B; Clément, S; Lecompte, T; May, T; Rabaud, C

    2014-07-01

    The surveillance of antibiotic use in hospitals and of data on resistance is an essential measure for antibiotic stewardship. There are 3 national systems in France to collect data on antibiotic use: DREES, ICATB, and ATB RAISIN. We compared these databases and drafted recommendations for the creation of an optimized database of information on antibiotic use, available to all concerned personnel: healthcare authorities, healthcare facilities, and healthcare professionals. We processed and analyzed the 3 databases (2008 data), and surveyed users. The qualitative analysis demonstrated major discrepancies in terms of objectives, healthcare facilities, participation rate, units of consumption, conditions for collection, consolidation, and control of data, and delay before availability of results. The quantitative analysis revealed that the consumption data for a given healthcare facility differed from one database to another, challenging the reliability of data collection. We specified user expectations: to compare consumption and resistance data, to carry out benchmarking, to obtain data on the prescribing habits in healthcare units, or to help understand results. The study results demonstrated the need for a reliable, single, and automated tool to manage data on antibiotic consumption compared with resistance data on several levels (national, regional, healthcare facility, healthcare units), providing rapid local feedback and educational benchmarking. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  1. MouseNet database: digital management of a large-scale mutagenesis project.

    PubMed

    Pargent, W; Heffner, S; Schäble, K F; Soewarto, D; Fuchs, H; Hrabé de Angelis, M

    2000-07-01

    The Munich ENU Mouse Mutagenesis Screen is a large-scale mutant production, phenotyping, and mapping project. It encompasses two animal breeding facilities and a number of screening groups located in the general area of Munich. A central database is required to manage and process the immense amount of data generated by the mutagenesis project. This database, which we named MouseNet(c), runs on a Sybase platform and will eventually store and process all data from the entire project. In addition, the system comprises a portfolio of functions needed to support the workflow management of the core facility and the screening groups. MouseNet(c) will make all of the data available to the participating screening groups, and later to the international scientific community. MouseNet(c) will consist of three major software components: the Animal Management System (AMS), the Sample Tracking System (STS), and the Result Documentation System (RDS). MouseNet(c) provides the following major advantages: it is accessible from different client platforms via the Internet; it is a full-featured multi-user system (including access restriction and data locking mechanisms); it relies on a professional RDBMS (relational database management system) running on a UNIX server platform; and it supplies workflow functions and a variety of plausibility checks.

  2. Service Management Database for DSN Equipment

    NASA Technical Reports Server (NTRS)

    Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Wolgast, Paul; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed

    2009-01-01

    This data- and event-driven persistent storage system leverages the use of commercial software provided by Oracle for portability, ease of maintenance, scalability, and ease of integration with embedded, client-server, and multi-tiered applications. In this role, the Service Management Database (SMDB) is a key component of the overall end-to-end process involved in the scheduling, preparation, and configuration of the Deep Space Network (DSN) equipment needed to perform the various telecommunication services the DSN provides to its customers worldwide. SMDB makes efficient use of triggers, stored procedures, queuing functions, e-mail capabilities, data management, and Java integration features provided by the Oracle relational database management system. SMDB uses a third normal form schema design that allows for simple data maintenance procedures and thin layers of integration with client applications. The software provides an integrated event logging system with ability to publish events to a JMS messaging system for synchronous and asynchronous delivery to subscribed applications. It provides a structured classification of events and application-level messages stored in database tables that are accessible by monitoring applications for real-time monitoring or for troubleshooting and analysis over historical archives.
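
    As a rough illustration of the third-normal-form, dictionary-driven event logging described above, the sketch below uses SQLite in place of Oracle. All table and column names are hypothetical, since the abstract does not publish the actual SMDB schema.

        import sqlite3

        # A minimal sketch of the 3NF event-logging idea: classifications
        # live in one dictionary table; the log references them by key.
        # Names are invented; SMDB's real Oracle schema is not given here.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE event_class (
                class_id   INTEGER PRIMARY KEY,
                class_name TEXT NOT NULL UNIQUE
            );
            CREATE TABLE event_log (
                event_id   INTEGER PRIMARY KEY,
                class_id   INTEGER NOT NULL REFERENCES event_class(class_id),
                source_app TEXT NOT NULL,
                message    TEXT NOT NULL,
                logged_at  TEXT DEFAULT CURRENT_TIMESTAMP
            );
        """)
        conn.execute("INSERT INTO event_class (class_name) VALUES ('WARNING')")
        conn.execute(
            "INSERT INTO event_log (class_id, source_app, message) VALUES (?, ?, ?)",
            (1, "scheduler", "antenna allocation changed"),
        )
        # A monitoring application reads the joined, classified history.
        for row in conn.execute("""
                SELECT e.logged_at, c.class_name, e.source_app, e.message
                FROM event_log e JOIN event_class c USING (class_id)"""):
            print(row)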

  3. Modeling functional neuroanatomy for an anatomy information system.

    PubMed

    Niggemann, Jörg M; Gebert, Andreas; Schulz, Stefan

    2008-01-01

    Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the "internal wiring" of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Internal wiring as well as functional pathways can correctly be represented and tracked. This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kogalovskii, M.R.

    This paper presents a review of problems related to statistical database systems, which are widespread in various fields of activity. Statistical databases (SDBs) are databases used for statistical analysis. Topics under consideration are: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored data compression techniques, and statistical data representation means. Also examined is whether present database management systems (DBMSs) satisfy SDB requirements. Some current research directions in SDB systems are considered.
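
    One classic compromise-protection measure from this literature is query-set-size control, which refuses aggregate answers computed over too few records. The minimal sketch below illustrates the idea; the threshold and data are invented and are not taken from the reviewed paper.

        # Query-set-size control: refuse aggregates over small query sets,
        # since they would effectively expose individual records.
        MIN_QUERY_SET = 3  # illustrative threshold

        salaries = [
            {"dept": "A", "salary": 52000},
            {"dept": "A", "salary": 61000},
            {"dept": "A", "salary": 58000},
            {"dept": "B", "salary": 90000},
        ]

        def mean_salary(dept):
            """Return the mean salary for a department, or None if the
            query set is too small to answer safely."""
            qset = [r["salary"] for r in salaries if r["dept"] == dept]
            if len(qset) < MIN_QUERY_SET:
                return None  # compromise protection: answer refused
            return sum(qset) / len(qset)

        print(mean_salary("A"))  # 57000.0 -> safe, 3 records in query set
        print(mean_salary("B"))  # None    -> would reveal one salary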

  5. Trends in the Evolution of the Public Web, 1998-2002; The Fedora Project: An Open-source Digital Object Repository Management System; State of the Dublin Core Metadata Initiative, April 2003; Preservation Metadata; How Many People Search the ERIC Database Each Day?

    ERIC Educational Resources Information Center

    O'Neill, Edward T.; Lavoie, Brian F.; Bennett, Rick; Staples, Thornton; Wayland, Ross; Payette, Sandra; Dekkers, Makx; Weibel, Stuart; Searle, Sam; Thompson, Dave; Rudner, Lawrence M.

    2003-01-01

    Includes five articles that examine key trends in the development of the public Web: size and growth, internationalization, and metadata usage; Flexible Extensible Digital Object and Repository Architecture (Fedora) for use in digital libraries; developments in the Dublin Core Metadata Initiative (DCMI); the National Library of New Zealand Te Puna…

  6. Database computing in HEP

    NASA Technical Reports Server (NTRS)

    Day, C. T.; Loken, S.; Macfarlane, J. F.; May, E.; Lifka, D.; Lusk, E.; Price, L. E.; Baden, A.; Grossman, R.; Qin, X.

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  7. Information Security Considerations for Applications Using Apache Accumulo

    DTIC Science & Technology

    2014-09-01

    Distributed File System INSCOM United States Army Intelligence and Security Command JPA Java Persistence API JSON JavaScript Object Notation MAC Mandatory... MySQL [13]. BigTable can process 20 petabytes per day [14]. High degree of scalability on commodity hardware. NoSQL databases do not rely on highly...manipulation in relational databases. NoSQL databases each have a unique programming interface that uses a lower level procedural language (e.g., Java

  8. Feasibility planning study for a behavior database. Volume III Appendix B, Compendium of survey questions on drinking and driving and occupant restraints

    DOT National Transportation Integrated Search

    1987-04-01

    The general objective of the project was to determine the feasibility of and the general requirements for a centralized database on driver behavior and attitudes related to drunk driving and occupant restraints. Volume III is a compendium of question...

  9. Feasibility planning study for a behavior database. Volume 2 Appendix A, Summary information on the drinking and driving and occupant restraint surveys

    DOT National Transportation Integrated Search

    1987-04-24

    The general objective of the project was to determine the feasibility of and the general requirements for a centralized database on driver behavior and attitudes related to drunk driving and occupant restraints. Volume I assesses the extent of pertin...

  10. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.

    PubMed

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them is the management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the frequently considered relational database model thus becomes a compelling task. Other data models may be more effective when dealing with very large amounts of nonconventional data, especially for write and retrieval operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.
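
    A rough sketch of the kind of write/read workload such a comparison involves, using the DataStax Python driver for Cassandra, is shown below. The keyspace, table layout, and sample values are hypothetical (the paper's actual schema is not reproduced in this abstract), and the code assumes a Cassandra node listening on localhost.

        # Hypothetical genomic-read store in Cassandra; partitioning by
        # sample keeps one sample's reads on one partition for fast reads.
        from cassandra.cluster import Cluster

        cluster = Cluster(["127.0.0.1"])      # assumes a local Cassandra node
        session = cluster.connect()
        session.execute("""
            CREATE KEYSPACE IF NOT EXISTS genomics
            WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
        """)
        session.set_keyspace("genomics")
        session.execute("""
            CREATE TABLE IF NOT EXISTS reads (
                sample_id text,
                read_id   text,
                sequence  text,
                PRIMARY KEY (sample_id, read_id)
            )
        """)

        # Writes: prepared statements amortize parsing over many inserts.
        insert = session.prepare(
            "INSERT INTO reads (sample_id, read_id, sequence) VALUES (?, ?, ?)")
        session.execute(insert, ("S1", "r0001", "ACGTACGT"))

        # Reads: one sample's reads come from a single partition.
        for row in session.execute(
                "SELECT read_id, sequence FROM reads WHERE sample_id = %s", ("S1",)):
            print(row.read_id, row.sequence)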

  11. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.
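
    The heart of the formulation is the biobjective evaluation of a candidate plan and the Pareto-dominance test NSGA-II relies on. The sketch below illustrates both with an invented cost model; the paper's actual LPC and CC models are more detailed.

        # Each plan assigns one processing site per relation accessed by
        # the query; it is scored on (total LPC, total CC), both minimized.
        # Cost tables below are illustrative only.
        lpc_at_site = {1: 9.0, 2: 2.0, 3: 2.0}      # per-relation LPC by site

        def comm_cost(a, b):
            return 0.0 if a == b else 2.5           # flat inter-site CC

        def evaluate(plan):
            """plan: tuple of site ids, one per relation."""
            lpc = sum(lpc_at_site[s] for s in plan)
            cc = sum(comm_cost(plan[i], plan[i + 1]) for i in range(len(plan) - 1))
            return lpc, cc

        def dominates(p, q):
            """Pareto dominance used by NSGA-II: p is no worse in both
            objectives and strictly better in at least one."""
            return all(x <= y for x, y in zip(p, q)) and any(x < y for x, y in zip(p, q))

        plans = [(1, 1, 1), (2, 3, 2), (1, 2, 3)]
        scores = {plan: evaluate(plan) for plan in plans}
        for plan, score in scores.items():
            dominated = any(dominates(scores[q], score) for q in plans if q != plan)
            print(plan, score, "dominated" if dominated else "non-dominated")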

  12. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513

  13. HBVPathDB: a database of HBV infection-related molecular interaction network.

    PubMed

    Zhang, Yi; Bo, Xiao-Chen; Yang, Jing; Wang, Sheng-Qi

    2005-03-21

    The goal is to describe molecular and gene interactions between hepatitis B virus (HBV) and its host, in order to understand how viral and host genes and molecules are networked to form a biological system and to elucidate the mechanism of HBV infection. The knowledge of HBV infection-related reactions was organized into various kinds of pathways with carefully drawn graphs in HBVPathDB. Pathway information is stored in a relational database management system (DBMS), currently the most efficient way to manage large amounts of data, and queried with the powerful Structured Query Language (SQL). The search engine is written in PHP with embedded SQL, and a web retrieval interface for searching is developed in Hypertext Markup Language (HTML). We present the first version of HBVPathDB, an HBV infection-related molecular interaction network database composed of 306 pathways with 1 050 molecules involved. With carefully drawn graphs, pathway information stored in HBVPathDB can be browsed in an intuitive way. We have developed an easy-to-use interface for flexible access to the details of the database. Convenient software is implemented to query and browse the pathway information of HBVPathDB. Four search options (category search, gene search, description search, and unitized search) are supported by the search engine of the database. The database is freely available at http://www.bio-inf.net/HBVPathDB/HBV/. HBVPathDB already contains a considerable amount of HBV infection-related pathway information, which is suitable for in-depth analysis of the molecular interaction network of virus and host. HBVPathDB integrates pathway datasets with convenient software for querying, browsing, and visualization, giving users more opportunity to identify regulatory key molecules as potential drug targets and to explore possible mechanisms of HBV infection based on gene expression datasets.
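
    A minimal sketch of this kind of SQL-backed pathway search (the real engine is PHP with embedded SQL) might look as follows; the schema and rows are hypothetical stand-ins for HBVPathDB's actual tables.

        import sqlite3

        # Hypothetical pathway table plus a whitelisted, parameterized
        # search, mirroring the category/gene/description search options.
        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE pathway (
            pathway_id  INTEGER PRIMARY KEY,
            category    TEXT,
            gene        TEXT,
            description TEXT)""")
        conn.executemany(
            "INSERT INTO pathway (category, gene, description) VALUES (?, ?, ?)",
            [("immune response", "TNF", "TNF signalling during HBV infection"),
             ("apoptosis", "TP53", "p53-mediated apoptosis in infected hepatocytes")])

        def search(field, term):
            """LIKE search over one field; the field name is checked
            against a whitelist so the SQL stays injection-safe."""
            assert field in {"category", "gene", "description"}
            sql = f"SELECT pathway_id, gene, description FROM pathway WHERE {field} LIKE ?"
            return conn.execute(sql, (f"%{term}%",)).fetchall()

        print(search("gene", "TNF"))
        print(search("description", "apoptosis"))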

  14. A Database System for Course Administration.

    ERIC Educational Resources Information Center

    Benbasat, Izak; And Others

    1982-01-01

    Describes a computer-assisted testing system which produces multiple-choice examinations for a college course in business administration. The system uses SPIRES (Stanford Public Information REtrieval System) to manage a database of questions and related data, mark-sense cards for machine grading tests, and ACL (6) (Audit Command Language) to…

  15. Solutions for medical databases optimal exploitation.

    PubMed

    Branescu, I; Purcarea, V L; Dobrescu, R

    2014-03-15

    The paper discusses the methods to apply OLAP techniques for multidimensional databases that leverage the existing, performance-enhancing technique, known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as a logistic support to the data warehousing techniques. The transformations have practically low computational complexity and they may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies in current OLAP systems, transparently to the user and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases.

  16. Enabling Civilian Low-Altitude Airspace and Unmanned Aerial System (UAS) Operations

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal

    2014-01-01

    UAS operations will be safer if a UTM system is available to support the functions associated with airspace management and geo-fencing (reducing the risk of accidents, the impact on other operations, and community concerns); weather and severe wind integration (avoiding severe weather areas based on prediction); congestion prediction and management (mission safety); terrain and man-made object database and avoidance; maintenance of safe separation (mission safety and assurance of other assets); and allowing only authenticated operations (avoiding unauthorized airspace use).

  17. Evaluating the operations capability of Freedom's Data Management System

    NASA Technical Reports Server (NTRS)

    Sowizral, Henry A.

    1990-01-01

    Three areas of Data Management System (DMS) performance are examined: raw processor speed, the subjective speed of the Lynx OS X-Window system, and the operational capacity of the Runtime Object Database (RODB). It is concluded that the proposed processor will operate at its specified rate of speed and that the X-Window system operates within users' subjective needs. It is also concluded that the RODB cannot provide the required level of service, even with a two-orders-of-magnitude (100-fold) improvement in speed.

  18. Data management for community research projects: A JGOFS case study

    NASA Technical Reports Server (NTRS)

    Lowry, Roy K.

    1992-01-01

    Since the mid 1980s, much of the marine science research effort in the United Kingdom has been focused into large scale collaborative projects involving public sector laboratories and university departments, termed Community Research Projects. Two of these, the Biogeochemical Ocean Flux Study (BOFS) and the North Sea Project incorporated large scale data collection to underpin multidisciplinary modeling efforts. The challenge of providing project data sets to support the science was met by a small team within the British Oceanographic Data Centre (BODC) operating as a topical data center. The role of the data center was to both work up the data from the ship's sensors and to combine these data with sample measurements into online databases. The working up of the data was achieved by a unique symbiosis between data center staff and project scientists. The project management, programming and data processing skills of the data center were combined with the oceanographic experience of the project communities to develop a system which has produced quality controlled, calibrated data sets from 49 research cruises in 3.5 years of operation. The data center resources required to achieve this were modest and far outweighed by the time liberated in the scientific community by the removal of the data processing burden. Two online project databases have been assembled containing a very high proportion of the data collected. As these are under the control of BODC their long term availability as part of the UK national data archive is assured. The success of the topical data center model for UK Community Research Project data management has been founded upon the strong working relationships forged between the data center and project scientists. These can only be established by frequent personal contact and hence the relatively small size of the UK has been a critical factor. However, projects covering a larger, even international scale could be successfully supported by a network of topical data centers managing online databases which are interconnected by object oriented distributed data management systems over wide area networks.

  19. Security Controls in the Stockpoint Logistics Integrated Communications Environment (SPLICE).

    DTIC Science & Technology

    1985-03-01

    call programs as authorized after checks by the Terminal Management Subsystem on SAS databases . SAS overlays the TANDEM GUARDIAN operating system to...Security Access Profile database (SAP) and a query capability generating various security reports. SAS operates with the System Monitor (SMON) subsystem...system to DDN and other components. The first SAS component to be reviewed is the SAP database . SAP is organized into two types of files. Relational

  20. Scrapping Patched Computer Systems: Integrated Data Processing for Information Management.

    ERIC Educational Resources Information Center

    Martinson, Linda

    1991-01-01

    Colleges and universities must find a way to streamline and integrate information management processes across the organization. The Georgia Institute of Technology responded to an acute problem of dissimilar operating systems with a campus-wide integrated administrative system using a machine independent relational database management system. (MSE)

  1. Survey of standards applicable to a database management system

    NASA Technical Reports Server (NTRS)

    Urena, J. L.

    1981-01-01

    Industry, government, and NASA standards, and the status of standardization activities of standards setting organizations applicable to the design, implementation and operation of a data base management system for space related applications are identified. The applicability of the standards to a general purpose, multimission data base management system is addressed.

  2. Intelligent community management system based on the devicenet fieldbus

    NASA Astrophysics Data System (ADS)

    Wang, Yulan; Wang, Jianxiong; Liu, Jiwen

    2013-03-01

    With the rapid development of the national economy and the improvement of people's living standards, people are placing higher demands on their living environment, and higher requirements are being placed on estate management content, management efficiency, and service quality. This paper presents an in-depth analysis of the structure and composition of an intelligent community. According to users' requirements and related specifications, it implements a district management system that includes basic information management (housing-level management, household information management, administrator-level management, password management, etc.); service management (standard property costs, property charge collection, history of arrears and other property expenses); security management (household gas, water, and electricity safety, and security of the district and other public places); and systems management (database backup, database restore, log management). The article also presents an analysis of the intelligent community system and proposes an architecture based on B/S (browser/server) technology, achieving global network device management with a friendly, easy-to-use, unified human-machine interface.

  3. Cyclone: java-based querying and computing with Pathway/Genome databases.

    PubMed

    Le Fèvre, François; Smidtas, Serge; Schächter, Vincent

    2007-05-15

    Cyclone aims at facilitating the use of BioCyc, a collection of Pathway/Genome Databases (PGDBs). Cyclone provides a fully extensible Java Object API to analyze and visualize these data. Cyclone can read and write PGDBs, and can write its own data in the CycloneML format. This format is automatically generated from the BioCyc ontology by Cyclone itself, ensuring continued compatibility. Cyclone objects can also be stored in a relational database CycloneDB. Queries can be written in SQL, and in an intuitive and concise object-oriented query language, Hibernate Query Language (HQL). In addition, Cyclone interfaces easily with Java software including the Eclipse IDE for HQL edition, the Jung API for graph algorithms or Cytoscape for graph visualization. Cyclone is freely available under an open source license at: http://sourceforge.net/projects/nemo-cyclone. For download and installation instructions, tutorials, use cases and examples, see http://nemo-cyclone.sourceforge.net.

  4. Version 1.00 programmer's tools used in constructing the INEL RML/analytical radiochemistry sample tracking database and its user interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Femec, D.A.

    This report describes two code-generating tools used to speed design and implementation of relational databases and user interfaces: CREATE-SCHEMA and BUILD-SCREEN. CREATE-SCHEMA produces the SQL commands that actually create and define the database. BUILD-SCREEN takes templates for data entry screens and generates the screen management system routine calls to display the desired screen. Both tools also generate the related FORTRAN declaration statements and precompiled SQL calls. Included with this report is the source code for a number of FORTRAN routines and functions used by the user interface. This code is broadly applicable to a number of different databases.
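
    The CREATE-SCHEMA idea, generating the SQL commands that define a database from a declarative description, can be sketched as below. The spec format and table are hypothetical, and the real tool's companion output (FORTRAN declarations and precompiled SQL calls) is omitted.

        # Minimal code generator: emit one CREATE TABLE statement per
        # table in a declarative schema spec. Spec contents are invented.
        SCHEMA = {
            "sample": [
                ("sample_id", "INTEGER", "PRIMARY KEY"),
                ("lab_code",  "TEXT",    "NOT NULL"),
                ("received",  "TEXT",    ""),
            ],
        }

        def create_schema_sql(schema):
            """Render the DDL that actually creates and defines the database."""
            statements = []
            for table, columns in schema.items():
                cols = ",\n    ".join(
                    f"{name} {ctype} {constraint}".rstrip()
                    for name, ctype, constraint in columns)
                statements.append(f"CREATE TABLE {table} (\n    {cols}\n);")
            return "\n".join(statements)

        print(create_schema_sql(SCHEMA))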

  5. Toward public volume database management: a case study of NOVA, the National Online Volumetric Archive

    NASA Astrophysics Data System (ADS)

    Fletcher, Alex; Yoo, Terry S.

    2004-04-01

    Public databases today can be constructed with a wide variety of authoring and management structures. The widespread appeal of Internet search engines suggests that public information be made open and available to common search strategies, making accessible information that would otherwise be hidden by the infrastructure and software interfaces of a traditional database management system. We present the construction and organizational details for managing NOVA, the National Online Volumetric Archive. As an archival effort of the Visible Human Project for supporting medical visualization research, archiving 3D multimodal radiological teaching files, and enhancing medical education with volumetric data, our overall database structure is simplified; archives grow by accruing information, but seldom have to modify, delete, or overwrite stored records. NOVA is being constructed and populated so that it is transparent to the Internet; that is, much of its internal structure is mirrored in HTML allowing internet search engines to investigate, catalog, and link directly to the deep relational structure of the collection index. The key organizational concept for NOVA is the Image Content Group (ICG), an indexing strategy for cataloging incoming data as a set structure rather than by keyword management. These groups are managed through a series of XML files and authoring scripts. We cover the motivation for Image Content Groups, their overall construction, authorship, and management in XML, and the pilot results for creating public data repositories using this strategy.
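
    The sketch below illustrates how an Image Content Group might be represented and read as XML; the element and attribute names are invented, as NOVA's actual XML vocabulary is not given here.

        import xml.etree.ElementTree as ET

        # Hypothetical ICG document: the group catalogs incoming volumes
        # as a set, rather than by keyword management.
        ICG_XML = """
        <icg name="thoracic-ct" curator="archive-staff">
          <member volume="vol0001" modality="CT"/>
          <member volume="vol0042" modality="CT"/>
        </icg>
        """

        root = ET.fromstring(ICG_XML)
        print("group:", root.get("name"))
        for member in root.findall("member"):
            # Each member record links the group to one archived volume.
            print("  contains", member.get("volume"), "(", member.get("modality"), ")")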

  6. Current Standardization and Cooperative Efforts Related to Industrial Information Infrastructures.

    DTIC Science & Technology

    1993-05-01

    Data Management Systems: Components used to store, manage, and retrieve data. Data management includes knowledge bases, database management...Application Development Tools and Methods X/Open and POSIX APIs Integrated Design Support System (IDS) Knowledge -Based Systems (KBS) Application...IDEFlx) Yourdon Jackson System Design (JSD) Knowledge -Based Systems (KBSs) Structured Systems Development (SSD) Semantic Unification Meta-Model

  7. The ERESE Project: Interfacing with the ERDA Digital Archive and ERR Reference Database in EarthRef.org

    NASA Astrophysics Data System (ADS)

    Koppers, A. A.; Staudigel, H.; Mills, H.; Keller, M.; Wallace, A.; Bachman, N.; Helly, J.; Helly, M.; Miller, S. P.; Massell Symons, C.

    2004-12-01

    To bridge the gap between Earth science teachers, librarians, scientists and data archive managers, we have started the ERESE project that will create, archive and make available "Enduring Resources in Earth Science Education" through information technology (IT) portals. In the first phase of this National Science Digital Library (NSDL) project, we are focusing on the development of these ERESE resources for middle and high school teachers to be used in lesson plans with "plate tectonics" and "magnetics" as their main theme. In this presentation, we will show how these new ERESE resources are being generated, how they can be uploaded via online web wizards, how they are archived, how we make them available via the EarthRef.org Digital Archive (ERDA) and Reference Database (ERR), and how they relate to the SIOExplorer database containing data objects for all seagoing cruises carried out by the Scripps Institution of Oceanography. The EarthRef.org web resource uses the vision of a "general description" of the Earth as a geological system to provide an IT infrastructure for the Earth sciences. This emphasizes the marriage of the "scientific process" (and its results) with an educational cyber-infrastructure for teaching Earth sciences at any level, from middle school to college and graduate levels. Eight different databases reside under EarthRef.org, of which ERDA holds any digital object that has been uploaded for free by scientists, teachers and students, while the ERR holds more than 80,000 publications. For more than 1,500 of these publications, the latter database makes available for downloading JPG/PDF images of the abstracts, data tables, methods and appendices, together with their digitized contents in Microsoft Word and Excel format. Both holdings are being used to store the ERESE objects that are being generated by a group of undergraduate students majoring in the Environmental Systems (ESYS) program at UCSD with an emphasis on the Earth Sciences. These students perform library and internet research in order to design and generate these "Enduring Resources in Earth Science Education", which they test by closely interacting with the research faculty at the Scripps Institution of Oceanography. Typical ERESE resources can be diagrams, model cartoons, maps, data sets for analyses, and glossary items and essays explaining certain Earth Science concepts, ready to be used in the classroom.

  8. UniGene Tabulator: a full parser for the UniGene format.

    PubMed

    Lenzi, Luca; Frabetti, Flavia; Facchin, Federica; Casadei, Raffaella; Vitale, Lorenza; Canaider, Silvia; Carinci, Paolo; Zannotti, Maria; Strippoli, Pierluigi

    2006-10-15

    UniGene Tabulator 1.0 provides a solution for full parsing of the UniGene flat file format; it implements a structured graphical representation of each data field present in UniGene following import into a common database management system usable on a personal computer. This database includes related tables for sequence, protein similarity, sequence-tagged site (STS) and transcript map interval (TXMAP) data, plus a summary table where each record represents a UniGene cluster. UniGene Tabulator enables full local management of UniGene data, allowing parsing, querying, indexing, retrieving, exporting and analysis of UniGene data in relational database form, usable on computers running Macintosh (OS X 10.3.9 or later) and Windows (2000 with service pack 4, or XP with service pack 2 or later) operating systems. The current release, including both FileMaker runtime applications, is freely available at http://apollo11.isto.unibo.it/software/
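
    The underlying parsing pattern, key-prefixed lines grouped into records delimited by "//", with repeatable keys collected into related sub-table rows, can be sketched as follows; the two-record sample below is simplified, not a verbatim UniGene file.

        # Minimal flat-file parser for a simplified UniGene-like layout.
        SAMPLE = """\
        ID        Hs.1
        TITLE     alpha-1-B glycoprotein
        SEQUENCE  ACC=BC035719
        SEQUENCE  ACC=AK055885
        //
        ID        Hs.2
        TITLE     example cluster
        //
        """

        def parse(text):
            records, current = [], {}
            for line in text.splitlines():
                line = line.strip()
                if not line:
                    continue
                if line.startswith("//"):          # record delimiter
                    records.append(current)
                    current = {}
                    continue
                key, _, value = line.partition(" ")
                # Repeatable keys accumulate into lists (the sub-table rows).
                current.setdefault(key, []).append(value.strip())
            return records

        for rec in parse(SAMPLE):
            print(rec["ID"][0], "->", len(rec.get("SEQUENCE", [])), "sequences")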

  9. The MAO NASU Plate Archive Database. Current Status and Perspectives

    NASA Astrophysics Data System (ADS)

    Pakuliak, L. K.; Sergeeva, T. P.

    2006-04-01

    The preliminary online version of the database of the MAO NASU plate archive is constructed on the basis of the relational database management system MySQL. It permits easy supplementing of the database with new collections of astronegatives, and provides high flexibility in constructing SQL queries for data search optimization, PHP Basic Authorization-protected access to the administrative interface, and a wide range of search parameters. The current status of the database will be reported, and a brief description of the search engine and of the means of supporting database integrity will be given. Methods and means of data verification and tasks for further development will be discussed.

  10. Work Coordination Engine

    NASA Technical Reports Server (NTRS)

    Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed

    2009-01-01

    The Work Coordination Engine (WCE) is a Java application integrated into the Service Management Database (SMDB), which coordinates the dispatching and monitoring of a work order system. WCE de-queues work orders from SMDB and orchestrates the dispatching of work to a registered set of software worker applications distributed over a set of local, or remote, heterogeneous computing systems. WCE monitors the execution of work orders once dispatched, and accepts the results of the work order by storing to the SMDB persistent store. The software leverages the use of a relational database, Java Messaging System (JMS), and Web Services using Simple Object Access Protocol (SOAP) technologies to implement an efficient work-order dispatching mechanism capable of coordinating the work of multiple computer servers on various platforms working concurrently on different, or similar, types of data or algorithmic processing. Existing (legacy) applications can be wrapped with a proxy object so that no changes to the application are needed to make them available for integration into the work order system as "workers." WCE automatically reschedules work orders that fail to be executed by one server to a different server if available. From initiation to completion, the system manages the execution state of work orders and workers via a well-defined set of events, states, and actions. It allows for configurable work-order execution timeouts by work-order type. This innovation eliminates a current processing bottleneck by providing a highly scalable, distributed work-order system used to quickly generate products needed by the Deep Space Network (DSN) to support space flight operations. WCE is driven by asynchronous messages delivered via JMS indicating the availability of new work or workers. It runs completely unattended in support of the lights-out operations concept in the DSN.
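
    The dispatch-and-reschedule behaviour can be sketched as below: work orders are de-queued and handed to registered workers, and an order that fails on one worker is retried on another. Worker names and the failure rule are invented for illustration.

        import queue

        # Hypothetical work-order dispatch loop with automatic retry on a
        # different worker; a real system would cap retries and track the
        # work order's execution state throughout.
        work_orders = queue.Queue()
        for order_id in ("wo-101", "wo-102"):
            work_orders.put(order_id)

        workers = ["server-a", "server-b"]

        def execute(worker, order_id):
            # Stand-in for remote execution: pretend server-a is down.
            return worker != "server-a"

        while not work_orders.empty():
            order_id = work_orders.get()
            for worker in workers:                 # try each registered worker
                if execute(worker, order_id):
                    print(order_id, "completed on", worker)
                    break
                print(order_id, "failed on", worker, "- trying next worker")
            else:
                work_orders.put(order_id)          # no worker available: re-queue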

  11. Self-management priority setting and decision-making in adults with multimorbidity: A narrative review of literature

    PubMed Central

    Bratzke, Lisa C.; Muehrer, Rebecca J.; Kehl, Karen A.; Lee, Kyoung Suk; Ward, Earlise C.; Kwekkeboom, Kristine L.

    2014-01-01

    Objectives The purpose of this narrative review was to synthesize current research findings related to self-management, in order to better understand the processes of priority setting and decision-making among adults with multimorbidity. Design A narrative literature review was undertaken, synthesizing findings from published, peer-reviewed empirical studies that addressed priority setting and/or decision-making in self-management of multimorbidity. Data sources A search of PubMed, PsychINFO, CINAHL and SocIndex databases was conducted from database inception through December 2013. Reference lists from selected empirical studies and systematic reviews were evaluated to identify any additional relevant articles. Review methods Full texts of potentially eligible articles were reviewed and selected for inclusion if they described empirical studies that addressed priority setting or decision-making in self-management of multimorbidity among adults. Two independent reviewers read each selected article and extracted relevant data to an evidence table. Processes and factors of multimorbidity self-management were identified and sorted into categories of priority setting, decision-making, and facilitators/barriers. Results Thirteen articles were selected for inclusion; most were qualitative studies describing processes, facilitators, and barriers of multimorbidity self-management. The findings revealed that patients prioritize a dominant chronic illness and re-prioritize over time as conditions and treatments change; that multiple facilitators (e.g. support programs) and barriers (e.g. lack of financial resources) impact individuals' self-management priority setting and decision-making ability, as do individual beliefs, preferences, and attitudes (e.g., perceived personal control, preferences regarding treatment). Conclusions Health care providers need to be cognizant that individuals with multimorbidity engage in day-to-day priority setting and decision-making among their multiple chronic illnesses and respective treatments. Researchers need to develop and test interventions that support day-to-day priority setting and decision-making and improve health outcomes for individuals with multimorbidity. PMID:25468131

  12. A virtual university Web system for a medical school.

    PubMed

    Séka, L P; Duvauferrier, R; Fresnel, A; Le Beux, P

    1998-01-01

    This paper describes a Virtual Medical University Web server. The project started in 1994 with the development of the French Radiology Server. The main objective of our Virtual Medical University is to offer not only initial training (for students) but also continuing professional education (for practitioners). Our system is based on electronic textbooks, clinical cases (around 4000) and a medical knowledge base called A.D.M. ("Aide au Diagnostic Medical"). We have indexed all electronic textbooks and clinical cases according to the ADM base in order to facilitate navigation in the system. This knowledge base is supported by a relational database management system. The Virtual Medical University, available on the Web, is presently undergoing external evaluation.

  13. Proceedings of the Seventh International Symposium on Methodologies for Intelligent Systems (Poster Session)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harber, K.S.

    1993-05-01

    This report contains the following papers: Implications in vivid logic; a self-learning Bayesian expert system; a natural language generation system for a heterogeneous distributed database system; "competence-switching" managed by intelligent systems; strategy acquisition by an artificial neural network: experiments in learning to play a stochastic game; viewpoints and selective inheritance in object-oriented modeling; multivariate discretization of continuous attributes for machine learning; utilization of the case-based reasoning method to resolve dynamic problems; formalization of an ontology of ceramic science in CLASSIC; linguistic tools for intelligent systems; an application of rough sets in knowledge synthesis; and a relational model for imprecise queries. These papers have been indexed separately.

  15. Improvement of DHRA-DMDC Physical Access Software DBIDS Using Cloud Computing Technology: A Case Study

    DTIC Science & Technology

    2012-06-01

    technology originally developed on the Java platform. The Hibernate framework supports rapid development of a data access layer without requiring a...31 viii 2. Hibernate ................................................................................ 31 3. Database Design...protect from security threats; o Easy aggregate management operations via file tags; 2. Hibernate We recommend using Hibernate technology for object

  16. Information Technology and the Evolution of the Library

    DTIC Science & Technology

    2009-03-01

    Resource Commons/ Repository/ Federated Search ILS (GLADIS/Pathfinder - Millenium)/ Catalog/ Circulation/ Acquisitions/ Digital Object Content...content management services to help centralize and distribute digi- tal content from across the institution, software to allow for seamless federated ... search - ing across multiple databases, and imaging software to allow for daily reimaging of ter- minals to reduce security concerns that otherwise

  17. Urban forest restoration cost modeling: a Seattle natural areas case study

    Treesearch

    Jean M. Daniels; Weston Brinkley; Michael D. Paruszkiewicz

    2016-01-01

    Cities have become more committed to ecological restoration and management activities in urban natural areas. Data about costs are needed for better planning and reporting. The objective of this study is to estimate the costs for restoration activities in urban parks and green space in Seattle, Washington. Stewardship activity data were generated from a new database...

  18. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    PubMed

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.

  19. OWLing Clinical Data Repositories With the Ontology Web Language.

    PubMed

    Lozano-Rubí, Raimundo; Pastor, Xavier; Lozano, Esther

    2014-08-01

    The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and rearrangements in the relationships among concepts. This scenario makes it especially difficult to offer good tools that answer the professional needs of researchers, and constitutes a barrier requiring innovation to discover useful solutions. The objective was to design and implement a framework for the development of clinical data repositories capable of coping with continuous change in the biomedical domain and minimizing the technical knowledge required of end users. We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient in dealing with complexity and change, integrated with solid relational storage and a Web graphical user interface. Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming: all the information required to define a new project is stated explicitly in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database, as well as instant online availability of any modification. OntoCRF is a complete framework for building data repositories with solid relational storage. Driven by ontologies, OntoCRF is more flexible and efficient in dealing with complexity and change than traditional systems, and does not require highly skilled technical staff, which facilitates the engineering of clinical software systems.
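
    The model-driven storage idea can be sketched as follows: the data model lives in explicit definitions (a plain dict below stands in for the OWL ontology), while values land in one generic repository table, so changing the model requires no new database design. Entity and attribute names are invented; OntoCRF's real machinery is far richer.

        import sqlite3

        # Generic (attribute/value) repository driven by an explicit model.
        MODEL = {"Patient": ["name", "diagnosis"]}   # editable without DDL changes

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE repository (
            entity TEXT, instance_id INTEGER, attribute TEXT, value TEXT)""")

        def store(entity, instance_id, values):
            """Validate against the model, then write attribute/value rows."""
            for attribute, value in values.items():
                assert attribute in MODEL[entity], f"unknown attribute {attribute}"
                conn.execute("INSERT INTO repository VALUES (?, ?, ?, ?)",
                             (entity, instance_id, attribute, value))

        store("Patient", 1, {"name": "case-001", "diagnosis": "glioma"})
        for row in conn.execute("SELECT * FROM repository"):
            print(row)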

  20. Mapping the literature of case management nursing

    PubMed Central

    White, Pamela; Hall, Marilyn E.

    2006-01-01

    Objectives: Nursing case management provides a continuum of health care services for defined groups of patients. Its literature is multidisciplinary, emphasizing clinical specialties, case management methodology, and the health care system. This study is part of a project to map the literature of nursing, sponsored by the Nursing and Allied Health Resources Section of the Medical Library Association. The study identifies core journals cited in case management literature and indexing services that access those journals. Methods: Three source journals were identified based on established criteria, and cited references from each article published from 1997 to 1999 were analyzed. Results: Nearly two-thirds of the cited references were from journals; others were from books, monographs, reports, government documents, and the Internet. Cited journal references were ranked in descending order, and Bradford's Law of Scattering was applied. The many journals constituting the top two zones reflect the diversity of this field. Zone 1 included journals from nursing administration, case management, general medicine, medical specialties, and social work. Two databases, PubMed/MEDLINE and OCLC ArticleFirst, provided the best indexing coverage. Conclusion: Collections that support case management require a relatively small group of core journals. Students and health care professionals will need to search across disciplines to identify appropriate literature. PMID:16710470

  1. Systematic Review of the Use of Phytochemicals for Management of Pain in Cancer Therapy

    PubMed Central

    Harrison, Andrew M.; Heritier, Fabrice; Childs, Bennett G.; Bostwick, J. Michael; Dziadzko, Mikhail A.

    2015-01-01

    Pain in cancer therapy is a common condition and there is a need for new options in therapeutic management. While phytochemicals have been proposed as one pain management solution, knowledge of their utility is limited. The objective of this study was to perform a systematic review of the biomedical literature for the use of phytochemicals for management of cancer therapy pain in human subjects. Of an initial database search of 1,603 abstracts, 32 full-text articles were eligible for further assessment. Only 7 of these articles met all inclusion criteria for this systematic review. The average relative risk of phytochemical versus control was 1.03 [95% CI 0.59 to 2.06]. In other words (although not statistically significant), patients treated with phytochemicals were slightly more likely than patients treated with control to obtain successful management of pain in cancer therapy. We identified a lack of quality research literature on this subject and thus were unable to demonstrate a clear therapeutic benefit for either general or specific use of phytochemicals in the management of cancer pain. This lack of data is especially apparent for psychotropic phytochemicals, such as the Cannabis plant (marijuana). Additional implications of our findings are also explored. PMID:26576425

  2. LAND-deFeND - An innovative database structure for landslides and floods and their consequences.

    PubMed

    Napolitano, Elisabetta; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Bianchi, Cinzia; Guzzetti, Fausto

    2018-02-01

    Information on historical landslides and floods - collectively called "geo-hydrological hazards" - is key to understanding the complex dynamics of the events, to estimating the temporal and spatial frequency of damaging events, and to quantifying their impact. A number of databases on geo-hydrological hazards and their consequences have been developed worldwide at different geographical and temporal scales. Of the few available database structures that can handle information on both landslides and floods, some are outdated and others were not designed to store, organize, and manage information on single phenomena or on the type and monetary value of the damages and the remediation actions. Here, we present the LANDslides and Floods National Database (LAND-deFeND), a new database structure able to store, organize, and manage in a single digital structure spatial information collected from various sources with different accuracy. In designing LAND-deFeND, we defined four groups of entities, namely nature-related, human-related, geospatial-related, and information-source-related entities, that collectively can fully describe the geo-hydrological hazards and their consequences. In LAND-deFeND, the main entities are the nature-related entities, encompassing: (i) the "phenomenon", a single landslide or local inundation, (ii) the "event", which represents the ensemble of the inundations and/or landslides that occurred in a conventional geographical area in a limited period, and (iii) the "trigger", which is the meteo-climatic or seismic cause of the geo-hydrological hazards. LAND-deFeND maintains the relations between the nature-related entities and the human-related entities even where the information is partially missing. The physical model of LAND-deFeND contains 32 tables, including nine input tables, 21 dictionary tables, and two association tables, and ten views, including specific views that make the database structure compliant with the EC INSPIRE and Floods Directives. The LAND-deFeND database structure is open, and freely available from http://geomorphology.irpi.cnr.it/tools. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
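
    The three nature-related entities and their relations can be sketched as a relational schema; in the SQLite sketch below the column names are invented, and the real physical model is, as noted, far larger.

        import sqlite3

        # Hypothetical miniature of the trigger -> event -> phenomenon chain.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE trigger_cause (      -- meteo-climatic or seismic cause
                trigger_id INTEGER PRIMARY KEY,
                kind TEXT CHECK (kind IN ('rainfall', 'snowmelt', 'earthquake'))
            );
            CREATE TABLE event (              -- ensemble of phenomena in an area/period
                event_id INTEGER PRIMARY KEY,
                area TEXT, started TEXT, ended TEXT,
                trigger_id INTEGER REFERENCES trigger_cause(trigger_id)
            );
            CREATE TABLE phenomenon (         -- a single landslide or local inundation
                phenomenon_id INTEGER PRIMARY KEY,
                kind TEXT CHECK (kind IN ('landslide', 'inundation')),
                event_id INTEGER REFERENCES event(event_id)  -- NULLable: partial info
            );
        """)
        conn.execute("INSERT INTO trigger_cause VALUES (1, 'rainfall')")
        conn.execute("INSERT INTO event VALUES (1, 'Umbria', '2012-11-11', '2012-11-14', 1)")
        conn.execute("INSERT INTO phenomenon VALUES (1, 'landslide', 1)")
        print(conn.execute("""SELECT p.kind, e.area, t.kind
                              FROM phenomenon p
                              JOIN event e USING (event_id)
                              JOIN trigger_cause t USING (trigger_id)""").fetchone())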

  3. A blue carbon soil database: Tidal wetland stocks for the US National Greenhouse Gas Inventory

    NASA Astrophysics Data System (ADS)

    Feagin, R. A.; Eriksson, M.; Hinson, A.; Najjar, R. G.; Kroeger, K. D.; Herrmann, M.; Holmquist, J. R.; Windham-Myers, L.; MacDonald, G. M.; Brown, L. N.; Bianchi, T. S.

    2015-12-01

    Coastal wetlands contain large reservoirs of carbon, and in 2015 the US National Greenhouse Gas Inventory began the work of placing blue carbon within the national regulatory context. The potential value of a wetland carbon stock, in relation to its location, could soon be influential in determining governmental policy and management activities, or in stimulating market-based CO2 sequestration projects. To meet the national need for high-resolution maps, a blue carbon stock database was developed linking National Wetlands Inventory datasets with the USDA Soil Survey Geographic Database. Users of the database can identify the economic potential for carbon conservation or restoration projects within specific estuarine basins, states, wetland types, physical parameters, and land management activities. The database is geared towards both national-level assessments and local-level inquiries. Spatial analysis of the stocks shows high variance within individual estuarine basins, largely dependent on geomorphic position on the landscape, though there are continental-scale trends in the carbon distribution as well. Future plans include linking this database with a sedimentary accretion database to predict carbon flux in US tidal wetlands.

  4. An Investigation of the Fine Spatial Structure of Meteor Streams Using the Relational Database ``Meteor''

    NASA Astrophysics Data System (ADS)

    Karpov, A. V.; Yumagulov, E. Z.

    2003-05-01

    We have restored and ordered the archive of meteor observations carried out with a meteor radar complex ``KGU-M5'' since 1986. A relational database has been formed under the control of the Database Management System (DBMS) Oracle 8. We also improved and tested a statistical method for studying the fine spatial structure of meteor streams with allowance for the specific features of application of the DBMS. Statistical analysis of the results of observations made it possible to obtain information about the substance distribution in the Quadrantid, Geminid, and Perseid meteor streams.

  5. Design of Integrated Database on Mobile Information System: A Study of Yogyakarta Smart City App

    NASA Astrophysics Data System (ADS)

    Nurnawati, E. K.; Ermawati, E.

    2018-02-01

    An integration database is a database that acts as the data store for multiple applications and thus integrates data across these applications (in contrast to an application database). An integration database needs a schema that takes all its client applications into account. The benefit of such a schema is that sharing data among applications does not require an extra layer of integration services on the applications. Any changes to data made in a single application are made available to all applications at the time of database commit, thus keeping the applications' data use better synchronized. This study aims to design and build an integrated database that can be used by various applications on a mobile-device-based platform built around a smart city system. The resulting database can be used by various applications, whether together or separately. The design and development of the database emphasize flexibility, security, and completeness of attributes that can be used together by the various applications to be built. The method used in this study is to choose an appropriate logical database structure (patterns of data) and to build the relational database model (database design), then test the resulting design with prototype apps and analyze system performance with test data. The integrated database can be utilized by both admins and users in an integral and comprehensive platform. This system can help admins, managers, and operators manage the application easily and efficiently. The Android-based app is built on a dynamic client-server model where data are extracted from an external MySQL database, so if data change in the database, the data in the Android application also change. The app assists users in searching for information related to Yogyakarta (as a smart city), especially on culture, government, hotels, and transportation.

  6. Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.

    1992-01-01

    Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the locations of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM), developed to provide taskspace database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser-radar-based range imaging. Through the fusion of taskspace database information and image sensor data, a verifiable taskspace model is generated, providing location and orientation data for objects in a task space. This paper also describes applications of the ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.

  7. MetPetDB: A database for metamorphic geochemistry

    NASA Astrophysics Data System (ADS)

    Spear, Frank S.; Hallett, Benjamin; Pyle, Joseph M.; Adalı, Sibel; Szymanski, Boleslaw K.; Waters, Anthony; Linder, Zak; Pearce, Shawn O.; Fyffe, Matthew; Goldfarb, Dennis; Glickenhouse, Nickolas; Buletti, Heather

    2009-12-01

    We present a data model for the initial implementation of MetPetDB, a geochemical database specific to metamorphic rock samples. The database is designed around the concept of preservation of spatial relationships, at all scales, of chemical analyses and their textural setting. Objects in the database (samples) represent physical rock samples; each sample may contain one or more subsamples with associated geochemical and image data. Samples, subsamples, geochemical data, and images are described with attributes (some required, some optional); these attributes also serve as search delimiters. All data in the database are classified as published (i.e., archived or published data), public or private. Public and published data may be freely searched and downloaded. All private data is owned; permission to view, edit, download and otherwise manipulate private data may be granted only by the data owner; all such editing operations are recorded by the database to create a data version log. The sharing of data permissions among a group of collaborators researching a common sample is done by the sample owner through the project manager. User interaction with MetPetDB is hosted by a web-based platform based upon the Java servlet application programming interface, with the PostgreSQL relational database. The database web portal includes modules that allow the user to interact with the database: registered users may save and download public and published data, upload private data, create projects, and assign permission levels to project collaborators. An Image Viewer module provides for spatial integration of image and geochemical data. A toolkit consisting of plotting and geochemical calculation software for data analysis and a mobile application for viewing the public and published data is being developed. Future issues to address include population of the database, integration with other geochemical databases, development of the analysis toolkit, creation of data models for derivative data, and building a community-wide user base. It is believed that this and other geochemical databases will enable more productive collaborations, generate more efficient research efforts, and foster new developments in basic research in the field of solid earth geochemistry.
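
    A minimal sketch of the sample/subsample hierarchy and the published/public/private visibility classes described above, using SQLite purely for illustration; the real MetPetDB runs on PostgreSQL and its schema is far richer. All column names here are assumptions.

        # Toy rendering of the MetPetDB concepts above: samples own subsamples,
        # analyses keep their spot coordinates (preserving textural context),
        # and visibility gates searching. Columns are illustrative assumptions.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE sample (
            id INTEGER PRIMARY KEY,
            owner TEXT NOT NULL,
            visibility TEXT NOT NULL
                CHECK (visibility IN ('published','public','private')),
            rock_type TEXT, latitude REAL, longitude REAL
        );
        CREATE TABLE subsample (
            id INTEGER PRIMARY KEY,
            sample_id INTEGER NOT NULL REFERENCES sample(id),
            kind TEXT            -- e.g. thin section, mineral separate
        );
        CREATE TABLE chemical_analysis (
            id INTEGER PRIMARY KEY,
            subsample_id INTEGER NOT NULL REFERENCES subsample(id),
            mineral TEXT, oxide TEXT, value REAL,
            x REAL, y REAL       -- spot location within the subsample image
        );
        """)

        # Published/public data are freely searchable; private data need ownership.
        rows = conn.execute(
            "SELECT s.id, s.rock_type FROM sample s "
            "WHERE s.visibility IN ('published','public') OR s.owner = ?", ("alice",))
        print(rows.fetchall())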

  8. Cross-Matching Source Observations from the Palomar Transient Factory (PTF)

    NASA Astrophysics Data System (ADS)

    Laher, Russ; Grillmair, C.; Surace, J.; Monkewitz, S.; Jackson, E.

    2009-01-01

    Over the four-year lifetime of the PTF project, approximately 40 billion instances of astronomical-source observations will be extracted from the image data. The instances will correspond to the same astronomical objects being observed at roughly 25-50 different times, and so a very large catalog containing important object-variability information will be the chief PTF product. Organizing astronomical-source catalogs is conventionally done by dividing the catalog into declination zones and sorting by right ascension within each zone (e.g., the USNO-A star catalog), in order to facilitate catalog searches. This method was reincarnated as the "zones" algorithm in a SQL-Server database implementation (Szalay et al., MSR-TR-2004-32), with corrections given by Gray et al. (MSR-TR-2006-52). The primary advantage of this implementation is that all of the work is done entirely on the database server and client/server communication is eliminated. We implemented the methods outlined in Gray et al. for a PostgreSQL database, programming them as database functions in the PL/pgSQL procedural language. The cross-matching is currently based on source positions, but we intend to extend it to use both positions and positional uncertainties to form a chi-square statistic for optimal thresholding. The database design includes three main tables, plus a handful of internal tables: the Sources table stores the SExtractor source extractions taken at various times; the MergedSources table stores statistics about the astronomical objects, which are the result of cross-matching records in the Sources table; and the Merges table associates cross-matched primary keys in the Sources table with primary keys in the MergedSources table. Besides judicious database indexing, we have also internally partitioned the Sources table by declination zone, in order to speed up the population of Sources records and make the database more manageable. The catalog will be accessible to the public after the proprietary period through IRSA (irsa.ipac.caltech.edu).
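
    The zones idea referenced above (Szalay et al.; Gray et al.) can be summarized in a few lines: bucket sources into fixed-height declination zones so that each source is compared only against sources in its own and neighbouring zones, then confirm candidates with an angular-distance test. The sketch below is a pure-Python stand-in for the PL/pgSQL functions, with an assumed field layout; it ignores RA wrap-around at 0h/24h for brevity.

        # Sketch of a zones-based positional cross-match (not the PTF code).
        import math
        from collections import defaultdict

        ZONE_HEIGHT_DEG = 0.01          # zone height comparable to the match radius

        def zone_of(dec_deg):
            return int(math.floor(dec_deg / ZONE_HEIGHT_DEG))

        def angular_sep_deg(ra1, dec1, ra2, dec2):
            # Haversine separation on the sphere, in degrees.
            r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
            a = (math.sin((d2 - d1) / 2) ** 2
                 + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
            return math.degrees(2 * math.asin(math.sqrt(a)))

        def cross_match(sources, radius_deg=0.0005):
            """sources: list of (id, ra_deg, dec_deg); yields matched id pairs."""
            zones = defaultdict(list)
            for src in sources:
                zones[zone_of(src[2])].append(src)
            for sid, ra, dec in sources:
                z = zone_of(dec)
                for dz in (-1, 0, 1):   # this zone and its neighbours suffice
                    for oid, ora, odec in zones[z + dz]:
                        # oid > sid emits each pair once (ids assumed orderable)
                        if oid > sid and angular_sep_deg(ra, dec, ora, odec) <= radius_deg:
                            yield (sid, oid)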

  9. [Gender-based differences in the management of low back pain].

    PubMed

    Sagy, Iftach; Friger, Michael; Sagy, Tal Peleg; Rudich, Zvia

    2014-07-01

    Low back pain (LBP) is a well-known reason people worldwide seek medical help and a leading cause of chronic pain and disability among people of working age. Recent research reveals that the female gender is not only a risk factor for developing LBP but may also influence the management of this common condition. Our objective was to evaluate gender-related differences in the management of LBP patients in a specialized hospital-based chronic pain unit. A cross-sectional survey was carried out through telephone interviews and the hospital computerized database (N = 129). Socio-demographic, lifestyle, occupational and medical variables were collected, and their association with the frequency of use of five different diagnostic and/or therapeutic modalities was examined using gender stratification. After adjustment for age, religion, socioeconomic data and the number of co-morbid conditions, women were more prone to poly-pharmacy of analgesic medications prescribed in the previous year compared to men (p = 0.024) and exhibited an increased rate of treatment cessations due to adverse effects (p < 0.001). Interestingly, while women tended to utilize more healthcare services besides the pain clinic (p = 0.097), men tended on average to have more visits than women to the pain clinic for their complaints (p = 0.019). Among those who applied for insurance compensation for LBP-related disability, women exhibited increased use of imaging procedures compared to men (p = 0.038). This cross-sectional study reveals gender-related differences in management and health services utilization for treatment of LBP in the chronic pain clinic. If confirmed in other centers, these findings should inspire gender-sensitive resource management of the treatment of chronic pain patients. Moreover, the findings suggest that increased awareness of gender bias when seeking insurance compensation for LBP-related disability is warranted.

  10. Representing sentence information

    NASA Astrophysics Data System (ADS)

    Perkins, Walton A., III

    1991-03-01

    This paper describes a computer-oriented representation for sentence information. Whereas many Artificial Intelligence (AI) natural language systems start with a syntactic parse of a sentence into the linguist's components: noun, verb, adjective, preposition, etc., we argue that it is better to parse the input sentence into 'meaning' components: attribute, attribute value, object class, object instance, and relation. AI systems need a representation that will allow rapid storage and retrieval of information and convenient reasoning with that information. The attribute-of-object representation has proven useful for handling information in relational databases (which are well known for their efficiency in storage and retrieval) and for reasoning in knowledge-based systems. On the other hand, the linguist's syntactic representation of the words in sentences has not been shown to be useful for information handling and reasoning; we think it is an unnecessary and misleading intermediate form. Our sentence representation is semantically based in terms of attribute, attribute value, object class, object instance, and relation. Every sentence is segmented into one or more components with the form: 'attribute' of 'object' 'relation' 'attribute value'. Using only one format for all information gives the system simplicity and good performance, as a RISC architecture does for hardware. The attribute-of-object representation is not new; it is used extensively in relational databases and knowledge-based systems. However, we will show that it can be used as a meaning representation for natural language sentences with minor extensions. In this paper we describe how a computer system can parse English sentences into this representation and generate English sentences from it. Much of this has been tested with a computer implementation.
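
    A minimal sketch of the quoted format, 'attribute' of 'object' 'relation' 'attribute value', as a uniform record type; the example sentence and its decomposition are our own, not taken from the paper.

        # One record type for every fact, mirroring the single-format claim above.
        from typing import NamedTuple

        class Assertion(NamedTuple):
            attribute: str
            obj: str          # object class or object instance
            relation: str
            value: str

        # "The color of John's car is red." ->
        fact = Assertion(attribute="color", obj="car#john", relation="equals", value="red")

        # Uniform facts index cleanly, much like rows of a relational table.
        index = {(fact.attribute, fact.obj): fact.value}
        print(index[("color", "car#john")])   # -> red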

  11. Privacy protection and public goods: building a genetic database for health research in Newfoundland and Labrador

    PubMed Central

    Pullman, Daryl; Perrot-Daley, Astrid; Hodgkinson, Kathy; Street, Catherine; Rahman, Proton

    2013-01-01

    Objective To provide a legal and ethical analysis of some of the implementation challenges faced by the Population Therapeutics Research Group (PTRG) at Memorial University (Canada), in using genealogical information offered by individuals for its genetics research database. Materials and methods This paper describes the unique historical and genetic characteristics of the Newfoundland and Labrador founder population, which gave rise to the opportunity for PTRG to build the Newfoundland Genealogy Database containing digitized records of all pre-confederation (1949) census records of the Newfoundland founder population. In addition to building the database, PTRG has developed the Heritability Analytics Infrastructure, a data management structure that stores genotype, phenotype, and pedigree information in a single database, and custom linkage software (KINNECT) to perform pedigree linkages on the genealogy database. Discussion A newly adopted legal regime in Newfoundland and Labrador is discussed. It incorporates health privacy legislation with a unique research ethics statute governing the composition and activities of research ethics boards and, for the first time in Canada, elevating the status of national research ethics guidelines into law. The discussion looks at this integration of legal and ethical principles which provides a flexible and seamless framework for balancing the privacy rights and welfare interests of individuals, families, and larger societies in the creation and use of research data infrastructures as public goods. Conclusion The complementary legal and ethical frameworks that now coexist in Newfoundland and Labrador provide the legislative authority, ethical legitimacy, and practical flexibility needed to find a workable balance between privacy interests and public goods. Such an approach may also be instructive for other jurisdictions as they seek to construct and use biobanks and related research platforms for genetic research. PMID:22859644

  12. ClassLess: A Comprehensive Database of Young Stellar Objects

    NASA Astrophysics Data System (ADS)

    Hillenbrand, Lynne; Baliber, Nairn

    2015-01-01

    We have designed and constructed a database housing published measurements of Young Stellar Objects (YSOs) within ~1 kpc of the Sun. ClassLess, so called because it includes YSOs in all stages of evolution, is a relational database in which user interaction is conducted via HTML web browsers, queries are performed in scientific language, and all data are linked to the sources of publication. Each star is associated with a cluster (or clusters), and both spatially resolved and unresolved measurements are stored, allowing proper use of data from multiple star systems. With this fully searchable tool, myriad ground- and space-based instruments and surveys across wavelength regimes can be exploited. In addition to primary measurements, the database self-consistently calculates and serves higher-level data products such as extinction, luminosity, and mass. As a result, searches for young stars with specific physical characteristics can be completed with just a few mouse clicks.

  13. Breach Risk Magnitude: A Quantitative Measure of Database Security.

    PubMed

    Yasnoff, William A

    2016-01-01

    A quantitative methodology is described that provides objective evaluation of the potential for health record system breaches. It assumes that breach risk increases with the number of potential records that could be exposed, while it decreases when more authentication steps are required for access. The breach risk magnitude (BRM) is the maximum value for any system user of the common logarithm of the number of accessible database records divided by the number of authentication steps needed to achieve such access. For a one million record relational database, the BRM varies from 5.52 to 6 depending on authentication protocols. For an alternative data architecture designed specifically to increase security by separately storing and encrypting each patient record, the BRM ranges from 1.3 to 2.6. While the BRM only provides a limited quantitative assessment of breach risk, it may be useful to objectively evaluate the security implications of alternative database organization approaches.
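
    The abstract's wording leaves the formula slightly ambiguous; the reading sketched below, BRM = log10(accessible records / authentication steps) maximized over users, reproduces the quoted 5.52-6 range for a one-million-record database with one to three authentication steps. Treat it as an interpretation, not the paper's definitive formula.

        # Hedged reading of the BRM definition above: for each user, take the
        # common logarithm of (accessible records / authentication steps) and
        # report the maximum over all system users.
        import math

        def breach_risk_magnitude(users):
            """users: iterable of (accessible_records, auth_steps) per user."""
            return max(math.log10(records / steps) for records, steps in users)

        print(breach_risk_magnitude([(1_000_000, 1)]))  # 6.0
        print(breach_risk_magnitude([(1_000_000, 3)]))  # ~5.52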

  14. Building a genome database using an object-oriented approach.

    PubMed

    Barbasiewicz, Anna; Liu, Lin; Lang, B Franz; Burger, Gertraud

    2002-01-01

    GOBASE is a relational database that integrates data associated with mitochondria and chloroplasts. The most important data in GOBASE, i.e., molecular sequences and taxonomic information, are obtained from the public sequence data repository at the National Center for Biotechnology Information (NCBI), and are validated by our experts. Maintaining a curated genomic database comes with a towering labor cost, due to the sheer volume of available genomic sequences and the plethora of annotation errors and omissions in records retrieved from public repositories. Here we describe our approach to increasing automation of the database population process, thereby reducing manual intervention. As a first step, we used the Unified Modeling Language (UML) to construct a list of potential errors. Each case was evaluated independently, and an expert solution was devised and represented as a diagram. Subsequently, the UML diagrams were used as templates for writing object-oriented automation programs in the Java programming language.

  15. Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience

    PubMed Central

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2011-01-01

    Summary Objective Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases of the SenseLab suite of neuroscience databases. Methods Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, the Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. Conclusion We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/ PMID:20006477

  16. Toxoplasmosis in Iran: A guide for general physicians working in the Iranian health network setting: A systematic review.

    PubMed

    Alavi, Seyed Mohammad; Alavi, Leila

    2016-01-01

    Human toxoplasmosis is an important zoonotic infection worldwide which is caused by the intracellular parasite Toxoplasma gondii (T. gondii). The aim of this study was to briefly review the general aspects of toxoplasma infection in the Iranian health system network. We searched toxoplasmosis-related articles published in English and Persian databases, including Science Direct, PubMed, Scopus, Google Scholar, Magiran, IranMedex, IranDoc and the Scientific Information Database (SID). Out of 1267 articles from the English and Persian database searches, 40 articles fit our research objectives and were selected for the study. It is estimated that at least a third of the world's human population is infected with T. gondii, making it one of the most common parasitic infections throughout the world. Maternal infection during pregnancy may cause dangerous outcomes for the fetus, or even intrauterine death. Reactivation of a previous infection in immunocompromised patients, such as those with drug-induced immunosuppression, AIDS or organ transplantation, can cause life-threatening central nervous system infection. Ocular toxoplasmosis is one of the most important causes of blindness, especially in individuals with a deficient immune system. Given the increasing burden of toxoplasmosis on human health, the findings of this study highlight appropriate preventive measures, diagnosis, and management of this disease.

  17. The European Southern Observatory-MIDAS table file system

    NASA Technical Reports Server (NTRS)

    Peron, M.; Grosbol, P.

    1992-01-01

    The new and substantially upgraded version of the Table File System (TFS) in MIDAS is presented as a scientific database system. MIDAS applications for performing database operations on tables are discussed, for instance, the exchange of data to and from the TFS, the selection of objects, uncertainty joins across tables, and the graphical representation of data. This upgraded version of the TFS is a full implementation of the binary table extension of the FITS format; in addition, it also supports arrays of strings. Different storage strategies for optimal access to very large data sets are implemented and are addressed in detail. As a simple relational database, the TFS may be used for the management of personal data files. This opens the way to intelligent pipeline processing of large amounts of data. One of the key features of the Table File System is to also provide an extensive set of tools for the analysis of the final results of a reduction process. Column operations using standard and special mathematical functions as well as statistical distributions can be carried out; commands for linear regression and model fitting using nonlinear least-squares methods and user-defined functions are available. Finally, statistical hypothesis tests and multivariate methods can also operate on tables.

  18. Investigating multi-objective fluence and beam orientation IMRT optimization

    NASA Astrophysics Data System (ADS)

    Potrebko, Peter S.; Fiege, Jason; Biagioli, Matthew; Poleszczuk, Jan

    2017-07-01

    Radiation Oncology treatment planning requires compromises to be made between clinical objectives that are invariably in conflict. It would be beneficial to have a ‘bird’s-eye-view’ perspective of the full spectrum of treatment plans that represent the possible trade-offs between delivering the intended dose to the planning target volume (PTV) while optimally sparing the organs-at-risk (OARs). In this work, the authors demonstrate Pareto-aware radiotherapy evolutionary treatment optimization (PARETO), a multi-objective tool featuring such bird’s-eye-view functionality, which optimizes fluence patterns and beam angles for intensity-modulated radiation therapy (IMRT) treatment planning. The problem of IMRT treatment plan optimization is managed as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. To achieve this, PARETO is built around a powerful multi-objective evolutionary algorithm, called Ferret, which simultaneously optimizes multiple fitness functions that encode the attributes of the desired dose distribution for the PTV and OARs. The graphical interfaces within PARETO provide useful information such as: the convergence behavior during optimization, trade-off plots between the competing objectives, and a graphical representation of the optimal solution database allowing for the rapid exploration of treatment plan quality through the evaluation of dose-volume histograms and isodose distributions. PARETO was evaluated for two relatively complex clinical cases, a paranasal sinus and a pancreas case. The end result of each PARETO run was a database of optimal (non-dominated) treatment plans that demonstrated trade-offs between the OAR and PTV fitness functions, which were all equally good in the Pareto-optimal sense (where no one objective can be improved without worsening at least one other). Ferret was able to produce high quality solutions even though a large number of parameters, such as beam fluence and beam angles, were included in the optimization.
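
    The Pareto-optimality criterion quoted above can be made concrete with a dominance test; the sketch below is a generic illustration, not PARETO's Ferret algorithm, and it assumes lower objective values are better.

        # Pareto dominance and non-dominated filtering over candidate plans.
        def dominates(a, b):
            """Plan a dominates plan b if a is no worse in every objective
            and strictly better in at least one."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def non_dominated(plans):
            """Filter a database of candidate plans down to the Pareto front."""
            return [p for p in plans
                    if not any(dominates(q, p) for q in plans if q is not p)]

        # Toy objective vectors: (PTV dose deviation, OAR dose)
        plans = [(0.10, 12.0), (0.08, 15.0), (0.12, 11.0), (0.11, 13.0)]
        print(non_dominated(plans))   # (0.11, 13.0) is dominated by (0.10, 12.0)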

  19. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency

    PubMed Central

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to the management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for write and retrieval operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB. PMID:26558254
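
    As a flavor of what storing genomic data in Cassandra looks like in practice, the sketch below writes processed reads with the DataStax Python driver. The keyspace, table, and columns are hypothetical and are not taken from the paper.

        # Sketch of bulk-writing sequencer output to Cassandra; schema invented.
        from cassandra.cluster import Cluster

        cluster = Cluster(["127.0.0.1"])
        session = cluster.connect()
        session.execute("""
            CREATE KEYSPACE IF NOT EXISTS genomics
            WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
        """)
        session.execute("""
            CREATE TABLE IF NOT EXISTS genomics.reads (
                run_id text, read_id text, sequence text, quality text,
                PRIMARY KEY (run_id, read_id))
        """)

        # Prepared statements keep the write path cheap for bulk inserts.
        insert = session.prepare(
            "INSERT INTO genomics.reads (run_id, read_id, sequence, quality) "
            "VALUES (?, ?, ?, ?)")
        session.execute(insert, ("run42", "read-0001", "ACGTACGT", "IIIIHHHH"))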

  20. Search extension transforms Wiki into a relational system: a case for flavonoid metabolite database.

    PubMed

    Arita, Masanori; Suwa, Kazuhiro

    2008-09-17

    In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. On the other hand, in the area of biology the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. a protein pair do and do not interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoids with 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. The structured part is subject to text-string searches to realize relational operations, as in the toy example below. The system was written in the PHP language as an extension of MediaWiki. All modifications are open-source and publicly available. This scheme benefits from both the free-formatted Wiki style and the concise and structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is alleviated.
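
    A toy rendition of the half-formatted idea: pages remain free text, but structured fragments can be string-searched to emulate a relational selection. The key = value markup and the data below are invented for illustration; the actual extension is PHP code embedded in MediaWiki.

        # Regex-based "SELECT" over semi-structured wiki-style pages.
        import re

        pages = {
            "Quercetin": "A flavonol found in many plants.\n"
                         "formula = C15H10O7\nspecies = Allium cepa",
            "Naringenin": "A flavanone.\n"
                          "formula = C15H12O5\nspecies = Citrus paradisi",
        }

        def select(pages, key, pattern):
            """Yield (page, value) pairs whose structured field matches a regex."""
            field = re.compile(rf"^{re.escape(key)}\s*=\s*(.+)$", re.MULTILINE)
            for title, text in pages.items():
                m = field.search(text)
                if m and re.search(pattern, m.group(1)):
                    yield title, m.group(1)

        print(list(select(pages, "formula", r"O7$")))  # [('Quercetin', 'C15H10O7')]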

  2. A phenome database (NEAUHLFPD) designed and constructed for broiler lines divergently selected for abdominal fat content.

    PubMed

    Li, Min; Dong, Xiang-yu; Liang, Hao; Leng, Li; Zhang, Hui; Wang, Shou-zhi; Li, Hui; Du, Zhi-Qiang

    2017-05-20

    Effective management and analysis of precisely recorded phenotypic traits are important components of the selection and breeding of superior livestock. For over two decades, we have divergently selected chicken lines for abdominal fat content at Northeast Agricultural University (Northeast Agricultural University High and Low Fat, NEAUHLF), and collected a large volume of phenotypic data related to the investigation of the molecular genetic basis of adipose tissue deposition in broilers. To effectively and systematically store, manage and analyze these phenotypic data, we built the NEAUHLF Phenome Database (NEAUHLFPD). NEAUHLFPD includes the following phenotypic records: pedigree (generations 1-19) and 29 phenotypes, such as body sizes and weights, carcass traits and their corresponding rates. The design and construction strategy of NEAUHLFPD was executed as follows: (1) Framework design. We used Apache as our web server, MySQL and Navicat as database management tools, and PHP as the HTML-embedded language to create a dynamic interactive website. (2) Structural components. The main interface provides a detailed introduction to the composition, function, and index buttons of the basic structure of the database. The functional modules of NEAUHLFPD have two main components: the first module is the physical storage space for phenotypic data, in which functional manipulation of the data can be realized, such as data indexing, filtering, range-setting, searching, etc.; the second module performs the calculation of basic descriptive statistics, where data filtered from the database can be used for the computation of basic statistical parameters and simultaneous conditional sorting, as sketched below. NEAUHLFPD can be used to effectively store and manage not only phenotypic but also genotypic and genomic data, which can facilitate further investigation of the molecular genetic basis of chicken adipose tissue growth and development, and expedite the selection and breeding of broilers with low fat content.
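
    A sketch of the second functional module described above: filter phenotype records on conditions, then compute basic descriptive statistics on the result. The record fields are hypothetical stand-ins (the production system is MySQL/PHP behind Apache, not Python).

        # Filter-then-describe, the pattern of the database's statistics module.
        from statistics import mean, stdev

        records = [
            {"line": "high-fat", "generation": 19, "abdominal_fat_pct": 4.1},
            {"line": "high-fat", "generation": 19, "abdominal_fat_pct": 3.8},
            {"line": "low-fat",  "generation": 19, "abdominal_fat_pct": 1.2},
            {"line": "low-fat",  "generation": 19, "abdominal_fat_pct": 1.5},
        ]

        def describe(records, trait, **conditions):
            vals = [r[trait] for r in records
                    if all(r[k] == v for k, v in conditions.items())]
            return {"n": len(vals), "mean": mean(vals), "sd": stdev(vals)}

        print(describe(records, "abdominal_fat_pct", line="high-fat"))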

  3. [Review of digital ground object spectral library].

    PubMed

    Zhou, Xiao-Hu; Zhou, Ding-Wu

    2009-06-01

    A higher spectral resolution is the main direction of development in remote sensing technology, and it is quite important to set up digital ground object reflectance spectral libraries, one of the fundamental research fields in remote sensing applications. Remote sensing applications have been relying increasingly on ground object spectral characteristics, and quantitative analysis has developed to a new stage. The present article summarizes and systematically introduces the research status quo and development trend of digital ground object reflectance spectral libraries in China and abroad in recent years. The spectral libraries that have been established are introduced, including libraries for desertification, plants, geology, soils, minerals, clouds, snow, the atmosphere, rocks, water, meteorites, moon rocks, man-made materials, mixtures, volatile compounds, and liquids. In the process of establishing spectral libraries, there have been some problems, such as the lack of a uniform national spectral database standard and of uniform standards for ground object features, as well as limited comparability between different databases. In addition, data sharing mechanisms have not yet been implemented, etc. This article also puts forward some suggestions on those problems.

  4. CALINVASIVES: a revolutionary tool to monitor invasive threats

    Treesearch

    M. Garbelotto; S. Drill; C. Powell; J. Malpas

    2017-01-01

    CALinvasives is a web-based relational database and content management system (CMS) cataloging the statewide distribution of invasive pathogens and pests and the plant hosts they impact. The database has been developed as a collaboration between the Forest Pathology and Mycology Laboratory at UC Berkeley and Calflora. CALinvasives will combine information on the...

  5. A Codasyl-Type Schema for Natural Language Medical Records

    PubMed Central

    Sager, N.; Tick, L.; Story, G.; Hirschman, L.

    1980-01-01

    This paper describes a CODASYL (network) database schema for information derived from narrative clinical reports. The goal of this work is to create an automated process that accepts natural language documents as input and maps this information into a database of a type managed by existing database management systems. The schema described here represents the medical events and facts identified through the natural language processing. This processing decomposes each narrative into a set of elementary assertions, represented as MEDFACT records in the database. Each assertion in turn consists of a subject and a predicate classed according to a limited number of medical event types, e.g., signs/symptoms, laboratory tests, etc. The subject and predicate are represented by EVENT records which are owned by the MEDFACT record associated with the assertion. The CODASYL-type network structure was found to be suitable for expressing most of the relations needed to represent the natural language information. However, special mechanisms were developed for storing the time relations between EVENT records and for recording connections (such as causality) between certain MEDFACT records. This schema has been implemented using the UNIVAC DMS-1100 DBMS.
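
    The owner/member structure described above can be paraphrased in modern terms roughly as follows: each MEDFACT assertion owns its subject and predicate EVENT records, with explicit slots for the time relations and causal connections the schema had to handle specially. Field names beyond those in the abstract are assumptions.

        # Python paraphrase of the MEDFACT/EVENT record types (illustrative only).
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Event:
            text: str
            event_type: str      # e.g. "sign/symptom", "laboratory test"

        @dataclass
        class MedFact:
            subject: Event       # owned EVENT records, as in the CODASYL sets
            predicate: Event
            caused_by: Optional["MedFact"] = None           # causal connection
            follows: List["MedFact"] = field(default_factory=list)  # time relation

        fever = MedFact(Event("patient", "person"),
                        Event("fever 39C", "sign/symptom"))
        culture = MedFact(Event("blood culture", "laboratory test"),
                          Event("positive", "result"))
        culture.follows.append(fever)   # the test followed the noted fever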

  6. Applications of GIS and database technologies to manage a Karst Feature Database

    USGS Publications Warehouse

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to develop GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.

  7. Information Resources Management. Nordic Conference on Information and Documentation (6th, Helsinki, Finland, August 19-22, 1985).

    ERIC Educational Resources Information Center

    Samfundet for Informationstjanst i Finland, Helsinki.

    The 54 conference papers compiled in this proceedings include plenary addresses; reviews of Nordic databases; and discussions of documents, systems, services, and products as they relate to information resources management (IRM). Almost half of the presentations are in English: (1) "What Is Information Resources Management?" (Forest…

  8. Marine Planning and Service Platform: specific ontology based semantic search engine serving data management and sustainable development

    NASA Astrophysics Data System (ADS)

    Manzella, Giuseppe M. R.; Bartolini, Andrea; Bustaffa, Franco; D'Angelo, Paolo; De Mattei, Maurizio; Frontini, Francesca; Maltese, Maurizio; Medone, Daniele; Monachini, Monica; Novellino, Antonio; Spada, Andrea

    2016-04-01

    The MAPS (Marine Planning and Service Platform) project aims at building a computer platform supporting a Marine Information and Knowledge System. One of the main objectives of the project is to develop a repository that gathers, classifies and structures marine scientific literature and data, thus guaranteeing their accessibility to researchers and institutions by means of standard protocols. In oceanography the cost of data collection is very high, and the new paradigm is based on the concept of collecting once and re-using many times (for re-analysis, marine environment assessment, studies on trends, etc.). This concept requires access to quality-controlled data and to information that is provided in reports (grey literature) and/or in the relevant scientific literature. Hence, new technology needs to be created by integrating several disciplines, such as data management, information systems and knowledge management. In one of the most important EC projects on data management, namely SeaDataNet (www.seadatanet.org), an initial example of knowledge management is provided through the Common Data Index, which provides links to data and (eventually) to papers. There are efforts to develop search engines to find authors' contributions to scientific literature or publications. This implies the use of persistent identifiers (such as DOIs), as is done in ORCID. However, very few efforts are dedicated to linking publications to the data cited or used, or that can be of importance for the published studies. This is the objective of MAPS. Full-text technologies are often unsuccessful since they assume the presence of specific keywords in the text; to fix this problem, the MAPS project uses different semantic technologies for retrieving text and data, thus producing far more relevant results. The main parts of our design of the search engine are: • Syntactic parser - This module is responsible for the extraction of "rich words" from the text: the whole document is parsed to extract the words that are most meaningful for the main argument of the document, and the extraction takes the form of N-grams (mono-grams, bi-grams, tri-grams). • MAPS database - This module is a simple database which contains all the N-grams used by MAPS (physical parameters from SeaDataNet vocabularies) to define our marine "ontology". • Relation identifier - This module performs the most important task of identifying relationships between the N-grams extracted from the text by the parser and the provided oceanographic terminology. It checks N-grams supplied by the Syntactic parser and matches them against the terms stored in the MAPS database. Found matches are returned to the parser with the inflected form appearing in the source text. • A "relaxed" extractor - This option can be activated when the search engine is launched. It was introduced to give the user a chance to create new N-grams by combining existing mono-grams and bi-grams in the database with rich words found within the source text. The innovation of a semantic engine lies in the fact that the process is not just about the retrieval of already known documents by means of a simple term query, but rather the retrieval of a population of documents whose existence was unknown.
The system answers by showing a list of results ordered according to the following criteria: • Relevance - of the document with respect to the concept that is searched • Date - of publication of the paper • Source - data provider as defined in the SeaDataNet Common Data Index • Matrix - environmental matrices as defined in the oceanographic field • Geographic area - area specified in the text • Clustering - the process of organizing objects into groups whose members are similar. The clustering returns the related documents as output. For each document the MAPS visualization provides: • Title, author, source/provider of data, web address • Tagging of key terms or concepts • Summary of the document • Visualization of the whole document The possibility of inserting the number of citations for each document among the criteria of the advanced search is currently under development; in this case the engine should be able to connect to any of the existing bibliographic citation systems (such as Google Scholar, Scopus, etc.).
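
    A compact sketch of the Syntactic parser / Relation identifier interplay described above: extract mono- to tri-grams from a document and intersect them with a controlled vocabulary standing in for the MAPS database of SeaDataNet parameter terms. The tokenization and term list are ours, not the project's.

        # N-gram extraction and vocabulary matching (illustrative stand-in).
        import re

        VOCABULARY = {"sea surface temperature", "salinity", "dissolved oxygen"}

        def ngrams(text, n_max=3):
            tokens = re.findall(r"[a-z]+", text.lower())
            for n in range(1, n_max + 1):
                for i in range(len(tokens) - n + 1):
                    yield " ".join(tokens[i:i + n])

        def identify_terms(text):
            """Return vocabulary terms ('rich words') found in the document."""
            return sorted(set(ngrams(text)) & VOCABULARY)

        doc = "Profiles of Sea Surface Temperature and salinity collected in 1998."
        print(identify_terms(doc))   # ['salinity', 'sea surface temperature']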

  9. SAGEMAP: A web-based spatial dataset for sage grouse and sagebrush steppe management in the Intermountain West

    USGS Publications Warehouse

    Knick, Steven T.; Schueck, Linda

    2002-01-01

    The Snake River Field Station of the Forest and Rangeland Ecosystem Science Center has developed and now maintains a database of the spatial information needed to address management of sage grouse and sagebrush steppe habitats in the western United States. The SAGEMAP project identifies and collects information for the region encompassing the historical extent of sage grouse distribution. State and federal agencies, the primary entities responsible for managing sage grouse and their habitats, need the information to develop an objective assessment of the current status of sage grouse populations and their habitats, or to provide responses and recommendations for recovery if sage grouse are listed as a Threatened or Endangered Species. The spatial data on the SAGEMAP website (http://SAGEMAP.wr.usgs.gov) are an important component in documenting current habitat and other environmental conditions. In addition, the data can be used to identify areas that have undergone significant changes in land cover and to determine underlying causes. As such, the database permits an analysis for large-scale and range-wide factors that may be causing declines of sage grouse populations. The spatial data contained on this site also will be a critical component guiding the decision processes for restoration of habitats in the Great Basin. Therefore, development of this database and the capability to disseminate the information carries multiple benefits for land and wildlife management.

  10. Rdesign: A data dictionary with relational database design capabilities in Ada

    NASA Technical Reports Server (NTRS)

    Lekkos, Anthony A.; Kwok, Teresa Ting-Yin

    1986-01-01

    Data Dictionary is defined to be the set of all data attributes, which describe data objects in terms of their intrinsic attributes, such as name, type, size, format and definition. It is recognized as the database for Information Resource Management, facilitating understanding and communication about the relationship between systems applications and systems data usage, and helping to achieve data independence by permitting systems applications to access data without knowledge of the location or storage characteristics of the data in the system. A research and development effort using Ada has produced a data dictionary with database design capabilities. This project supports data specification and analysis and offers a choice of the relational, network, and hierarchical models for logical database design. It provides a highly integrated set of analysis and design transformation tools which range from templates for data element definition and spreadsheets for defining functional dependencies and normalization, to a logical design generator.
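
    One concrete piece of the design support described above is checking functional dependencies during normalization; the classic attribute-closure computation below is a plausible core of such a tool (shown in Python for illustration; the original was implemented in Ada).

        # Attribute closure under a set of functional dependencies: the basic
        # step behind key discovery and normal-form checks.
        def closure(attrs, fds):
            """attrs: set of attributes; fds: list of (lhs_set, rhs_set)."""
            result = set(attrs)
            changed = True
            while changed:
                changed = False
                for lhs, rhs in fds:
                    if lhs <= result and not rhs <= result:
                        result |= rhs
                        changed = True
            return result

        fds = [({"part_id"}, {"name", "supplier"}), ({"supplier"}, {"city"})]
        print(closure({"part_id"}, fds))  # part_id determines name, supplier, city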

  11. Effectiveness of Nursing Management Information Systems: A Systematic Review

    PubMed Central

    Choi, Mona; Yang, You Lee

    2014-01-01

    Objectives The purpose of this study was to review evaluation studies of nursing management information systems (NMISs) and their outcome measures to examine system effectiveness. Methods For the systematic review, a literature search of the PubMed, CINAHL, Embase, and Cochrane Library databases was conducted to retrieve original articles published between 1970 and 2014. Medical Subject Headings (MeSH) terms included informatics, medical informatics, nursing informatics, medical informatics application, and management information systems for information systems; and evaluation studies and nursing evaluation research for evaluation research. Additionally, manag*, admin*, and nurs* were combined. Title, abstract, and full-text reviews were completed by two reviewers. Then year, author, type of management system, study purpose, study design, data source, system users, study subjects, and outcomes were extracted from the selected articles. The quality and risk of bias of the finally selected studies were assessed with the Risk of Bias Assessment Tool for Non-randomized Studies (RoBANS) criteria. Results Out of the 2,257 retrieved articles, a total of six articles were selected. These included two scheduling programs, two nursing cost-related programs, and two patient care management programs. For the outcome measurements, usefulness, time saving, satisfaction, cost, attitude, usability, data quality/completeness/accuracy, and personnel work patterns were included. User satisfaction, time saving, and usefulness mostly showed positive findings. Conclusions The study results suggest that NMISs were effective in saving time and useful in nursing care. Because the reviewed studies lacked quality, well-designed research, such as randomized controlled trials, should be conducted to more objectively evaluate the effectiveness of NMISs. PMID:25405060

  12. Producing approximate answers to database queries

    NASA Technical Reports Server (NTRS)

    Vrbsky, Susan V.; Liu, Jane W. S.

    1993-01-01

    We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.
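
    A toy version of a monotone, set-valued approximate answer in the spirit of APPROXIMATE: tuples known to satisfy the query accumulate as partitions are read, while unread tuples bound what might still join the answer. The two-set representation is our simplification of the paper's approximation semantics.

        # Approximate selection: the "certain" set grows monotonically with the
        # data read; the "unread" set bounds the remaining uncertainty.
        def approximate_select(partitions, predicate, can_read):
            certain, unread = [], []
            for i, part in enumerate(partitions):
                if can_read(i):                  # data available, time remaining
                    certain.extend(t for t in part if predicate(t))
                else:
                    unread.extend(part)          # might still satisfy the predicate
            return certain, unread               # exact answer when unread is empty

        parts = [[("ana", 31), ("bob", 45)], [("eve", 52)]]
        answer = approximate_select(parts, lambda t: t[1] > 40, lambda i: i < 1)
        print(answer)   # ([('bob', 45)], [('eve', 52)])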

  13. Creating an Effective Network: The GRACEnet Example

    NASA Astrophysics Data System (ADS)

    Follett, R. F.; Del Grosso, S.

    2008-12-01

    Networking activities require time, work, and nurturing. The objective of this presentation is to share the experience gained from the Greenhouse gas Reduction through Agricultural Carbon Enhancement network (GRACEnet). GRACEnet, formally established in 2005 by the ARS/USDA, resulted from workshops, teleconferences, and other activities beginning in at least 2002. Critical factors for its formation were developing and formalizing a common vision, goals, and objectives, which was accomplished in a 2005 workshop. The 4-person steering committee (now 5) was charged with coordinating the part-time (0.05- to 0.5 SY/location) efforts across 30 ARS locations to develop four products: (1) a national database, (2) regional/national guidelines of management practices, (3) computer models, and (4) "state-of-knowledge" summary publications. All locations are asked to contribute to the database from their field studies. Communication with everyone and periodic meetings are extremely important. Populating the database requires a common vision of sharing, a common format, and trust. Based upon the e-mail list, GRACEnet has expanded from about 30 to nearly 70 participants. Annual reports and a new website help facilitate this activity.

  14. Research on image evidence in land supervision and GIS management

    NASA Astrophysics Data System (ADS)

    Li, Qiu; Wu, Lixin

    2006-10-01

    Land resource development and utilization brings many problems. The number, scale, and volume of illegal land use cases are increasing. Since the territory is vast and land violations are concealed, effective land supervision and management is difficult. In this paper, the concepts of evidence and preservation of evidence are described first. The concepts of image evidence (IE), natural evidence (NE), natural preservation of evidence (NPE), and general preservation of evidence (GPE) are then proposed based on the characteristics of remote sensing imagery (RSI), which is objective, truthful, of high spatial resolution, and rich in information. Using MapObjects and Visual Basic 6.0, with Access managing the connection between the spatial vector database and the attribute data table, taking RSI as the data source and background layer, and combining the powerful spatial data management and visual analysis capabilities of a geographic information system (GIS), a land supervision and GIS management system was designed and implemented based on NPE. Practical use in Beijing shows that the system runs well and has solved some problems in land supervision and management.

  15. An Interdisciplinary Conservation Module for Condition Survey on Cultural Heritages with a 3d Information System

    NASA Astrophysics Data System (ADS)

    Pedelì, C.

    2013-07-01

    In order to make the most of outsourced digital documents based on new technologies (e.g., 3D laser scanners, photogrammetry, etc.), a new approach was followed and a new ad hoc information system was implemented. The resulting product allows the final user to reuse and manage the digital documents, providing graphic tools and an integrated specific database to manage the entire documentation and conservation process, from condition assessment through the conservation/restoration work. The system is organised into two main modules: Archaeology and Conservation. This paper focuses on the features and advantages of the second. In particular, it emphasizes its logical organisation, the possibility of easy mapping using a very precise 3D metric platform, and the benefit of the integrated relational database, which allows different kinds of information to be organised, compared, kept and managed at different levels. The Conservation module can manage over time the conservation process of a site, monument, object or excavation, and conservation work in progress. An alternative approach, called OVO by the author of this paper, forces the surveyor to observe and describe the entity by decomposing it into functional components, materials and construction techniques. Integrated tools such as the "ICOMOS-ISCS Illustrated glossary … " help the user describe pathologies with a unified approach and terminology. The conservation project phase is also strongly supported, to envision future interventions and costs. A final section is devoted to recording the conservation/restoration work already done or in progress. All information areas of the Conservation module are interconnected, allowing the system a complete interchange of graphic and alphanumeric data. The Conservation module itself is connected to the archaeological one to create an interdisciplinary daily tool.

  16. Antipsychotic-induced hyperprolactinemia: synthesis of world-wide guidelines and integrated recommendations for assessment, management and future research.

    PubMed

    Grigg, Jasmin; Worsley, Roisin; Thew, Caroline; Gurvich, Caroline; Thomas, Natalie; Kulkarni, Jayashri

    2017-11-01

    Hyperprolactinemia is a highly prevalent adverse effect of many antipsychotic agents, with potentially serious health consequences. Several guidelines have been developed for the management of this condition; yet, their concordance has not been evaluated. The objectives of this paper were (1) to review current clinical guidelines; (2) to review key systematic evidence for management; and (3) based on our findings, to develop an integrated management recommendation specific to male and female patients who are otherwise clinically stabilised on antipsychotics. We performed searches of Medline and EMBASE, supplemented with guideline-specific database and general web searches, to identify clinical guidelines containing specific recommendations for antipsychotic-induced hyperprolactinemia, produced/updated 01/01/2010-15/09/2016. A separate systematic search was performed to identify emerging management approaches described in reviews and meta-analyses published ≥ 2010. There is some consensus among guidelines relating to baseline PRL screening (8/12 guidelines), screening for differential diagnosis (7/12) and discontinuing/switching PRL-raising agent (7/12). Guidelines otherwise diverge substantially regarding most aspects of screening, monitoring and management (e.g. treatment with dopamine agonists). There is an omission of clear sex-specific recommendations. Systematic literature on management approaches is promising; more research is needed. An integrated management recommendation is presented to guide sex-specific clinical response to antipsychotic-induced hyperprolactinemia. Key aspects include asymptomatic hyperprolactinemia monitoring and fertility considerations with PRL normalisation. Further empirical work is key to shaping robust guidelines for antipsychotic-induced hyperprolactinemia. The integrated management recommendation can assist clinician and patient decision-making, with the goal of balancing effective psychiatric treatment while minimising PRL-related adverse health effects in male and female patients.

  17. Negative Effects of Learning Spreadsheet Management on Learning Database Management

    ERIC Educational Resources Information Center

    Vágner, Anikó; Zsakó, László

    2015-01-01

    A lot of students learn spreadsheet management before database management. Their similarities can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…

  18. Toward unification of taxonomy databases in a distributed computer environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi

    1994-12-31

    All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid in biological research by computer. The taxonomy databases are, however, not consistently unified with a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful in comparing many research results and in investigating future research directions from existing research results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existing taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existing taxonomy databases, since classification rules for constructing the taxonomy have changed rapidly with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases in a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.

  19. Research on spatio-temporal database techniques for spatial information service

    NASA Astrophysics Data System (ADS)

    Zhao, Rong; Wang, Liang; Li, Yuxiang; Fan, Rongshuang; Liu, Ping; Li, Qingyuan

    2007-06-01

    Geographic data should be described by spatial, temporal and attribute components, but spatio-temporal queries are difficult to answer within current GIS. This paper describes research into the development and application of a spatio-temporal data management system based upon the GeoWindows GIS software platform developed by the Chinese Academy of Surveying and Mapping (CASM). Facing the current practical requirements of spatial information applications, and building on the existing GIS platform, a spatio-temporal data model that integrates vector and grid data was first established. Second, we solved the key problem of building temporal data topology and successfully developed a suite of spatio-temporal database management tools using object-oriented methods. The system provides temporal data collection, storage, management, display and query functions. Finally, as a case study, we explored the application of the spatio-temporal data management system using administrative region data for multiple historical periods of China as the basic data. With these efforts, the capacity of GIS to manage and manipulate temporal and attribute data has been enhanced, and a technical reference has been provided for the further development of temporal geographic information systems (TGIS).

  20. Solutions for medical databases optimal exploitation

    PubMed Central

    Branescu, I; Purcarea, VL; Dobrescu, R

    2014-01-01

    The paper discusses methods for applying OLAP techniques to multidimensional databases that leverage an existing performance-enhancing technique known as practical pre-aggregation, making this technique relevant to a much wider range of medical applications as logistic support for data warehousing. The transformations have low computational complexity and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems, transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases. PMID:24653769
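
    Practical pre-aggregation, in the generic sense used here, amounts to materialising aggregates at a coarser level of a dimension hierarchy so that later queries roll up from the smaller table. A minimal sketch with hypothetical admissions data (not from the paper):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE admissions (hospital TEXT, ward TEXT, day TEXT, patients INTEGER);
        INSERT INTO admissions VALUES
          ('H1', 'cardiology', '2014-01-01', 7),
          ('H1', 'cardiology', '2014-01-02', 5),
          ('H1', 'oncology',   '2014-01-01', 3);
        -- Materialise the (hospital, ward) level once ...
        CREATE TABLE admissions_by_ward AS
          SELECT hospital, ward, SUM(patients) AS patients
          FROM admissions GROUP BY hospital, ward;
        """)
        # ... so coarser OLAP queries aggregate the small table instead of
        # rescanning the fact table.
        print(conn.execute(
            "SELECT hospital, SUM(patients) FROM admissions_by_ward GROUP BY hospital"
        ).fetchall())  # [('H1', 15)]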

  1. Overview of EVE - the event visualization environment of ROOT

    NASA Astrophysics Data System (ADS)

    Tadel, Matevž

    2010-04-01

    EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management, offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event-visualization toolkit satisfying most HEP requirements: visualization of geometry, and of simulated and reconstructed data such as hits, clusters, tracks and calorimeter information. Special classes are available for visualization of raw data. The object-interaction layer allows easy selection and highlighting of objects and their derived representations (projections) across several views (3D, Rho-Z, R-Phi). Object-specific tooltips are provided in both GUI and GL views. The visual-configuration layer of EVE is built around a database of template objects that can be applied to specific instances of visualization objects to ensure consistent object presentation. The database can be retrieved from a file, edited during framework operation and stored back to file. The EVE prototype was developed within the ALICE collaboration and was included in ROOT in December 2007. Since then all EVE components have reached maturity. EVE is used as the base of the AliEve visualization framework in ALICE, of the Fireworks physics-oriented event display in CMS, and as the visualization engine of FairRoot at FAIR.

  2. Microvax-based data management and reduction system for the regional planetary image facilities

    NASA Technical Reports Server (NTRS)

    Arvidson, R.; Guinness, E.; Slavney, S.; Weiss, B.

    1987-01-01

    Presented is a progress report for the Regional Planetary Image Facilities (RPIF) prototype image data management and reduction system being jointly implemented by Washington University and the USGS, Flagstaff. The system will consist of a MicroVAX with a high capacity (approx 300 megabyte) disk drive, a compact disk player, an image display buffer, a videodisk player, USGS image processing software, and SYSTEM 1032 - a commercial relational database management package. The USGS, Flagstaff, will transfer their image processing software including radiometric and geometric calibration routines, to the MicroVAX environment. Washington University will have primary responsibility for developing the database management aspects of the system and for integrating the various aspects into a working system.

  3. The Spanish National Reference Database for Ionizing Radiations (BANDRRI)

    PubMed

    Los Arcos JM; Bailador; Gonzalez; Gonzalez; Gorostiza; Ortiz; Sanchez; Shaw; Williart

    2000-03-01

    The Spanish National Reference Database for Ionizing Radiations (BANDRRI) is being implemented by a research team in the framework of a joint project between CIEMAT (Unidad de Metrologia de Radiaciones Ionizantes and Direccion de Informatica) and the Universidad Nacional de Educacion a Distancia (UNED, Departamento de Mecanica y Departamento de Fisica de Materiales). This paper presents the main objectives of BANDRRI, its dynamic and relational database structure, its interactive Web accessibility and its main radionuclide-related contents at this moment.

  4. Literature Review and Database of Relations Between Salinity and Aquatic Biota: Applications to Bowdoin National Wildlife Refuge, Montana

    USGS Publications Warehouse

    Gleason, Robert A.; Tangen, Brian A.; Laubhan, Murray K.; Finocchiaro, Raymond G.; Stamm, John F.

    2009-01-01

    Long-term accumulation of salts in wetlands at Bowdoin National Wildlife Refuge (NWR), Mont., has raised concern among wetland managers that increasing salinity may threaten plant and invertebrate communities that provide important habitat and food resources for migratory waterfowl. Currently, the U.S. Fish and Wildlife Service (USFWS) is evaluating various water management strategies to help maintain suitable ranges of salinity to sustain plant and invertebrate resources of importance to wildlife. To support this evaluation, the USFWS requested that the U.S. Geological Survey (USGS) provide information on salinity ranges of water and soil for common plants and invertebrates on Bowdoin NWR lands. To address this need, we conducted a search of the literature on occurrences of plants and invertebrates in relation to salinity and pH of the water and soil. The compiled literature was used to (1) provide a general overview of salinity concepts, (2) document published tolerances and adaptations of biota to salinity, (3) develop databases that the USFWS can use to summarize the range of reported salinity values associated with plant and invertebrate taxa, and (4) perform database summaries that describe reported salinity ranges associated with plants and invertebrates at Bowdoin NWR. The purpose of this report is to synthesize information to facilitate a better understanding of the ecological relations between salinity and flora and fauna when developing wetland management strategies. A primary focus of this report is to provide information to help evaluate and address salinity issues at Bowdoin NWR; however, the accompanying databases, as well as concepts and information discussed, are applicable to other areas or refuges. The accompanying databases include salinity values reported for 411 plant taxa and 330 invertebrate taxa. The databases are available in Microsoft Excel version 2007 (http://pubs.usgs.gov/sir/2009/5098/downloads/databases_21april2009.xls) and contain 27 data fields that include variables such as taxonomic identification, values for salinity and pH, wetland classification, location of study, and source of data. The databases are not exhaustive of the literature and are biased toward wetland habitats located in the glaciated North-Central United States; however, the databases do encompass a diversity of biota commonly found in brackish and freshwater inland wetland habitats.
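
    The spreadsheet structure described above lends itself to simple range summaries. A sketch of the kind of per-taxon salinity summary the report describes, assuming hypothetical column names (the actual 27 field names are documented in the spreadsheet itself):

        import pandas as pd

        # The USGS workbook (URL in the record above) has one row per
        # taxon/observation; the column names here are illustrative guesses.
        db = pd.read_excel("databases_21april2009.xls", sheet_name=0)

        # Reported salinity range per taxon.
        summary = (db.groupby("taxon")["salinity"]
                     .agg(["min", "max", "count"])
                     .sort_values("count", ascending=False))
        print(summary.head())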

  5. 'The surface management system' (SuMS) database: a surface-based database to aid cortical surface reconstruction, visualization and analysis

    NASA Technical Reports Server (NTRS)

    Dickson, J.; Drury, H.; Van Essen, D. C.

    2001-01-01

    Surface reconstructions of the cerebral cortex are increasingly widely used in the analysis and visualization of cortical structure, function and connectivity. From a neuroinformatics perspective, dealing with surface-related data poses a number of challenges. These include the multiplicity of configurations in which surfaces are routinely viewed (e.g. inflated maps, spheres and flat maps), plus the diversity of experimental data that can be represented on any given surface. To address these challenges, we have developed a surface management system (SuMS) that allows automated storage and retrieval of complex surface-related datasets. SuMS provides a systematic framework for the classification, storage and retrieval of many types of surface-related data and associated volume data. Within this classification framework, it serves as a version-control system capable of handling large numbers of surface and volume datasets. With built-in database management system support, SuMS provides rapid search and retrieval capabilities across all the datasets, while also incorporating multiple security levels to regulate access. SuMS is implemented in Java and can be accessed via a Web interface (WebSuMS) or using downloaded client software. Thus, SuMS is well positioned to act as a multiplatform, multi-user 'surface request broker' for the neuroscience community.

  6. Scale out databases for CERN use cases

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Lanza Garcia, Daniel; Surdy, Kacper

    2015-12-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database.
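
    The paper benchmarks SQL-on-Hadoop engines; as a flavour of what such a test harness can look like, here is a minimal timing sketch using the impyla client for Impala (the host, port, table and query are placeholders, not values from the paper):

        import time
        from impala.dbapi import connect  # Cloudera Impala DB-API client

        # Placeholder connection details; not from the paper.
        conn = connect(host="impala-host.example.org", port=21050)
        cur = conn.cursor()

        # Time a scan-heavy aggregation of the kind used in query tests.
        t0 = time.time()
        cur.execute(
            "SELECT variable_id, COUNT(*) FROM accelerator_log GROUP BY variable_id"
        )
        rows = cur.fetchall()
        print(f"{len(rows)} groups in {time.time() - t0:.1f}s")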

  7. An introduction to the new Productivity Information Management System (PIMS)

    NASA Technical Reports Server (NTRS)

    Hull, R.

    1982-01-01

    The Productivity Information Management System (PIMS) is described. The main objective of this computerized system is to enable management scientists to interactively explore data concerning DSN operations, maintenance and repairs, and to develop and verify models for management planning. PIMS will provide a powerful set of tools for iteratively manipulating data sets in a wide variety of ways. The initial version of PIMS will be a small-scale pilot system. The following topics are discussed: (1) the motivation for developing PIMS; (2) the various data sets which will be integrated by PIMS; (3) the overall design of PIMS; and (4) how PIMS will be used. A survey of relevant databases concerning DSN operations at Goldstone is also included.

  8. Drug residues in urban water: A database for ecotoxicological risk management.

    PubMed

    Destrieux, Doriane; Laurent, François; Budzinski, Hélène; Pedelucq, Julie; Vervier, Philippe; Gerino, Magali

    2017-12-31

    Human-use drug residues (DR) are only partially eliminated by waste water treatment plants (WWTPs), so residual amounts can reach natural waters and cause environmental hazards. In order to properly manage these hazards in the aquatic environment, a database is made available that integrates the concentration ranges of DR which cause adverse effects for aquatic organisms, together with the temporal variations of the ecotoxicological risks. To implement this database for ecotoxicological risk assessment (the ERA database), the required information for each DR is the predicted no-effect concentration (PNEC) along with the predicted environmental concentration (PEC). The risk assessment is based on the ratio between the PEC and the PNEC. Adverse-effect data or PNECs were found in the publicly available literature for 45 substances. These ecotoxicity test data were extracted from 125 different sources. The ERA database contains 1157 adverse-effect data and 287 PNECs. The efficiency of the ERA database was tested with a data set coming from a simultaneous survey of WWTPs and the natural environment. In this data set, 26 DR were searched for in two WWTPs and in the river. On five sampling dates, concentrations measured in the river for 10 DR could pose environmental problems, of which 7 were measured only downstream of WWTP outlets. Integrating literature data and measurements, with unit homogenisation, in a single database facilitates the actual ecotoxicological risk assessment and will be useful for assessing risks from data arising in future field surveys. Moreover, the accumulation of a large ecotoxicity data set in a single database should not only improve knowledge of higher-risk molecules but also supply an objective tool for rapid and efficient evaluation of the risk. Copyright © 2017 Elsevier B.V. All rights reserved.
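
    The risk characterisation described here is the standard quotient approach: a substance is flagged when its environmental concentration (PEC) exceeds its no-effect concentration (PNEC). A minimal sketch with made-up numbers, not values from the ERA database:

        # Risk quotient RQ = PEC / PNEC; RQ >= 1 flags a potential risk.
        # All concentrations below are illustrative only.
        pnec_ng_per_l = {"diclofenac": 100.0, "carbamazepine": 2500.0}
        pec_ng_per_l = {"diclofenac": 180.0, "carbamazepine": 90.0}

        for drug, pec in pec_ng_per_l.items():
            rq = pec / pnec_ng_per_l[drug]
            status = "potential risk" if rq >= 1 else "no risk indicated"
            print(f"{drug}: RQ = {rq:.2f} ({status})")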

  9. epiPATH: an information system for the storage and management of molecular epidemiology data from infectious pathogens.

    PubMed

    Amadoz, Alicia; González-Candelas, Fernando

    2007-04-20

    Most research scientists working in the fields of molecular epidemiology, population genetics and evolutionary genetics are confronted with the management of large volumes of data. Moreover, the data used in studies of infectious diseases are complex and usually derive from different institutions such as hospitals or laboratories. Since no public database schema incorporating clinical and epidemiological information about patients and molecular information about pathogens is currently available, we have developed an information system, composed of a main database and a web-based interface, which integrates both types of data and satisfies requirements of good organization, simple accessibility, data security and multi-user support. From the moment a patient arrives at a hospital or health centre until the processing and analysis of molecular sequences obtained from infectious pathogens in the laboratory, a large amount of information is collected from different sources. We have divided the most relevant data into 12 conceptual modules around which we have organized the database schema. The schema is comprehensive, covering sample sources, samples, laboratory processes, molecular sequences, phylogenetic results, clinical tests and results, clinical information, treatments, pathogens, transmissions, outbreaks and bibliographic information. Communication between end users and the selected Relational Database Management System (RDBMS) is carried out by default through a command-line window or through a user-friendly, web-based interface which provides access and management tools for the data. epiPATH is an information system for managing clinical and molecular information from infectious diseases. It facilitates daily work related to infectious pathogens and the sequences obtained from them. This software is intended for local installation in order to safeguard private data, and provides advanced SQL users the flexibility to adapt it to their needs. The database schema, tool scripts and web-based interface are free software, but data stored in our database server are not publicly available. epiPATH is distributed under the terms of the GNU General Public License. More details about epiPATH can be found at http://genevo.uv.es/epipath.
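
    The 12 conceptual modules are not reproduced here, but the flavour of such a schema, linking patients, samples and sequences, can be sketched as follows (table and column names are hypothetical, not epiPATH's actual schema):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE patients (patient_id INTEGER PRIMARY KEY, hospital TEXT);
        CREATE TABLE samples (
            sample_id  INTEGER PRIMARY KEY,
            patient_id INTEGER REFERENCES patients(patient_id),
            collected  TEXT
        );
        CREATE TABLE sequences (
            seq_id    INTEGER PRIMARY KEY,
            sample_id INTEGER REFERENCES samples(sample_id),
            pathogen  TEXT,
            sequence  TEXT
        );
        """)
        # A typical integrated query: all sequences for a pathogen, joined
        # back to the clinical source -- the kind of cross-module question
        # such a system is built to answer.
        rows = conn.execute("""
            SELECT p.hospital, s.collected, q.pathogen
            FROM sequences q
            JOIN samples s ON q.sample_id = s.sample_id
            JOIN patients p ON s.patient_id = p.patient_id
            WHERE q.pathogen = 'HCV'
        """).fetchall()
        print(rows)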

  10. The USA-NPN Information Management System: A tool in support of phenological assessments

    NASA Astrophysics Data System (ADS)

    Rosemartin, A.; Vazquez, R.; Wilson, B. E.; Denny, E. G.

    2009-12-01

    The USA National Phenology Network (USA-NPN) serves science and society by promoting a broad understanding of plant and animal phenology and the relationships among phenological patterns and all aspects of environmental change. Data management and information sharing are central to the USA-NPN mission. The USA-NPN develops, implements, and maintains a comprehensive Information Management System (IMS) to serve the needs of the network, including the collection, storage and dissemination of phenology data, access to phenology-related information, tools for data interpretation, and communication among partners of the USA-NPN. The IMS includes components for data storage, such as the National Phenology Database (NPD), and several online user interfaces to accommodate data entry, data download, data visualization and catalog searches for phenology-related information. The IMS is governed by a set of standards to ensure security, privacy, data access, and data quality. The National Phenology Database is designed to efficiently accommodate large quantities of phenology data, to be flexible to the changing needs of the network, and to provide for quality control. The database stores phenology data from multiple sources (e.g., partner organizations, researchers and citizen observers), and provides for integration with legacy datasets. Several services will be created to provide access to the data, including reports, visualization interfaces, and web services. These services will provide integrated access to phenology and related information for scientists, decision-makers and general audiences. Phenological assessments at any scale will rely on secure and flexible information management systems for the organization and analysis of phenology data. The USA-NPN’s IMS can serve phenology assessments directly, through data management and indirectly as a model for large-scale integrated data management.

  11. Developing a database management system to support birth defects surveillance in Florida.

    PubMed

    Salemi, Jason L; Hauser, Kimberlea W; Tanner, Jean Paul; Sampat, Diana; Correia, Jane A; Watkins, Sharon M; Kirby, Russell S

    2010-01-01

    The value of any public health surveillance program is derived from the ways in which data are managed and used to improve the public's health. Although birth defects surveillance programs vary in their case volume, budgets, staff, and objectives, the capacity to operate efficiently and maximize resources remains critical to long-term survival. The development of a fully integrated relational database management system (DBMS) can enrich a surveillance program's data and improve efficiency. To build upon the Florida Birth Defects Registry (a statewide registry relying solely on linkage of administrative datasets and unconfirmed diagnosis codes), the Florida Department of Health provided funding to the University of South Florida to develop and pilot an enhanced surveillance system in targeted areas, with a more comprehensive approach to case identification and diagnosis confirmation. To manage operational and administrative complexities, a DBMS was developed, capable of managing transmission of project data from multiple sources, tracking abstractor time during record reviews, offering tools for defect coding and case classification, and providing reports to DBMS users. Since its inception, the DBMS has been used as part of our surveillance projects to guide the receipt of over 200 case lists and the review of 12,924 fetuses and infants (with associated maternal records) suspected of having selected birth defects in over 90 birthing and transfer facilities in Florida. The DBMS has provided both anticipated and unexpected benefits. Automation of the processes for managing incoming case lists has reduced clerical workload considerably while improving the accuracy of working lists for field abstraction. Data quality has improved through more effective use of internal edits and comparisons with values for other data elements, while abstractor efficiency in completing case abstraction has increased. We anticipate continual enhancement of the DBMS in the future. While we have focused on enhancing the capacity of our DBMS for birth defects surveillance, many of the tools and approaches we have developed translate directly to other public health and clinical registries.

  12. Image BOSS: a biomedical object storage system

    NASA Astrophysics Data System (ADS)

    Stacy, Mahlon C.; Augustine, Kurt E.; Robb, Richard A.

    1997-05-01

    Researchers using biomedical images have data management needs that are oriented perpendicular to those of clinical PACS. The Image BOSS system is designed to permit researchers to organize and select images based on research topic, image metadata, and a thumbnail of the image. Image information is captured from existing images in a Unix-based filesystem, stored in an object-oriented database, and presented to the user in a familiar laboratory-notebook metaphor. In addition, Image BOSS is designed to provide an extensible infrastructure for future content-based queries directly on the images.

  13. Implementing Relational Operations in an Object-Oriented Database

    DTIC Science & Technology

    1992-03-01

    [Abstract garbled in the source record; recoverable fragments mention computer-aided software engineering (CASE) and computer-aided design (CAD) tools, research on combining relational operations with an object-oriented database, a Prograph database engine, and the observation that most business applications store and manipulate simple textual or numeric data.]

  14. A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network

    NASA Astrophysics Data System (ADS)

    Lussana, C.; Ranci, M.; Uboldi, F.

    2012-04-01

    In the operational context of a local weather service, data accessibility and quality-related issues must be managed by taking into account a wide set of user needs. This work describes the structure of, and the design choices made for, the operational implementation of a database system storing data from highly automated observing stations, together with metadata and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit in ARPA Lombardia, at the same time an important QAS component and an intensive data user, has developed a database specifically aimed at: 1) providing quick access to data for operational activities and 2) ensuring data quality for real-time applications, by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of precipitation amount, temperature, wind, relative humidity, pressure, and global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and cross-validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and PHP) stack, an open-source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts interacting directly with the MySQL database. Users and network managers can access the database through a set of web-based PHP applications.
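
    The ADQC combines several independent tests; the simplest of these, a gross range check, can be sketched as follows (the thresholds, table and column names are illustrative assumptions, not ARPA Lombardia's):

        import sqlite3

        # Illustrative plausibility limits for hourly observations.
        RANGE_LIMITS = {"temperature": (-30.0, 45.0), "relative_humidity": (0.0, 100.0)}

        def range_check(variable: str, value: float) -> bool:
            """Return True if the value passes the gross range check."""
            lo, hi = RANGE_LIMITS[variable]
            return lo <= value <= hi

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE obs (station TEXT, variable TEXT, value REAL, flag TEXT)")
        conn.execute("INSERT INTO obs VALUES ('MI01', 'temperature', 18.2, NULL)")
        conn.execute("INSERT INTO obs VALUES ('MI02', 'temperature', 75.0, NULL)")

        # Flag each observation; in the real system further tests (step test,
        # spatial consistency via Optimal Interpolation) are combined with
        # this one in a decision-making procedure.
        rows = conn.execute("SELECT station, variable, value FROM obs").fetchall()
        for station, variable, value in rows:
            flag = "ok" if range_check(variable, value) else "suspect"
            conn.execute("UPDATE obs SET flag = ? WHERE station = ?", (flag, station))
        print(conn.execute("SELECT station, flag FROM obs").fetchall())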

  15. The role of acupuncture in managing overactive bladder; a review of the literature.

    PubMed

    Forde, James C; Jaffe, Edward; Stone, Benjamin V; Te, Alexis E; Espinosa, Geo; Chughtai, Bilal

    2016-11-01

    Overactive bladder (OAB) affects a considerable proportion of men and women in the United States and is associated with significant costs and quality of life (QoL) reduction. While medication remains a mainstay of treatment, there is increasing interest in the use of alternative medicine in the form of acupuncture. We reviewed the literature on the role of acupuncture in managing OAB. A narrative review was compiled after searching electronic databases (PubMed, MEDLINE, Scopus, and EMBASE) for clinical studies involving acupuncture in treating OAB. Databases were searched from the time of inception through September 2015 by a clinician for articles reporting the results related to the use of acupuncture in OAB. Key search terms were acupuncture, overactive bladder, bladder instability, urgency, urinary incontinence. Articles in English or translated into English were included. Initial animal studies suggest several biochemical mechanisms of action underlying the effect of acupuncture on OAB suppression. The experience in humans includes two case series and six comparative trials. All studies demonstrated subjective improvement in OAB symptoms, and some reported objective improvement in urodynamic studies. Notably, some comparative trials showed the benefit of acupuncture to be comparable with antimuscarinic treatment. Despite their limitations, existing studies serve as a promising foundation for suggesting a role for acupuncture as an alternative therapy for OAB. Further well-designed studies are required to investigate optimal technique and their outcomes.

  16. SPINS: standardized protein NMR storage. A data dictionary and object-oriented relational database for archiving protein NMR spectra.

    PubMed

    Baran, Michael C; Moseley, Hunter N B; Sahota, Gurmukh; Montelione, Gaetano T

    2002-10-01

    Modern protein NMR spectroscopy laboratories have a rapidly growing need for an easily queried local archival system of raw experimental NMR datasets. SPINS (Standardized ProteIn Nmr Storage) is an object-oriented relational database that provides facilities for high-volume NMR data archival, organization of analyses, and dissemination of results to the public domain by automatic preparation of the header files required for submission of data to the BioMagResBank (BMRB). The current version of SPINS coordinates the process from data collection to BMRB deposition of raw NMR data by standardizing and integrating the storage and retrieval of these data in a local laboratory file system. Additional facilities include a data mining query tool, graphical database administration tools, and an NMRStar v2.1.1 file generator. SPINS also includes a user-friendly internet-based graphical user interface, which is optionally integrated with Varian VNMR NMR data collection software. This paper provides an overview of the data model underlying the SPINS database system, a description of its implementation in Oracle, and an outline of future plans for the SPINS project.

  17. Buckets: Smart Objects for Digital Libraries

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.

    2001-01-01

    Current discussion of digital libraries (DLs) is often dominated by the merits of the respective storage, search and retrieval functionality of archives, repositories, search engines, search interfaces and database systems. While these technologies are necessary for information management, the information content is more important than the systems used for its storage and retrieval. Digital information should have the same long-term survivability prospects as traditional hardcopy information and should be protected to the extent possible from evolving search-engine technologies and vendor vagaries in database management systems. Information content and information retrieval systems should progress on independent paths and make limited assumptions about the status or capabilities of the other. Digital information can achieve independence from archives and DL systems through the use of buckets. Buckets are an aggregative, intelligent construct for publishing in DLs. Buckets allow the decoupling of information content from information storage and retrieval. Buckets exist within the Smart Objects and Dumb Archives model for DLs, in which many of the functionalities and responsibilities traditionally associated with archives are pushed down (making the archives dumber) into the buckets (making them smarter). Among the responsibilities given to buckets are the enforcement of their terms and conditions, and the maintenance and display of their contents.
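
    As a conceptual illustration only (the class and method names below are invented for this sketch and are not the actual bucket API): the defining move is that the object itself, not the archive, carries its content and enforces its access policy.

        class Bucket:
            """A self-contained aggregate that carries its own access policy."""

            def __init__(self, terms: str):
                self.terms = terms  # terms and conditions the bucket enforces
                self.packages: dict[str, bytes] = {}

            def add(self, name: str, payload: bytes) -> None:
                self.packages[name] = payload

            def display(self, user_accepted_terms: bool) -> list[str]:
                # The bucket, not the archive, decides whether to show
                # its contents.
                if not user_accepted_terms:
                    raise PermissionError(self.terms)
                return sorted(self.packages)

        b = Bucket(terms="non-commercial use only")
        b.add("report.pdf", b"%PDF-")
        print(b.display(user_accepted_terms=True))  # ['report.pdf']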

  18. Examining Factors of Engagement With Digital Interventions for Weight Management: Rapid Review

    PubMed Central

    2017-01-01

    Background Digital interventions for weight management provide a unique opportunity to target daily lifestyle choices and eating behaviors over a sustained period of time. However, recent evidence has demonstrated a lack of user engagement with digital health interventions, impacting on the levels of intervention effectiveness. Thus, it is critical to identify the factors that may facilitate user engagement with digital health interventions to encourage behavior change and weight management. Objective The aim of this study was to identify and synthesize the available evidence to gain insights about users’ perspectives on factors that affect engagement with digital interventions for weight management. Methods A rapid review methodology was adopted. The search strategy was executed in the following databases: Web of Science, PsycINFO, and PubMed. Studies were eligible for inclusion if they investigated users’ engagement with a digital weight management intervention and were published from 2000 onwards. A narrative synthesis of data was performed on all included studies. Results A total of 11 studies were included in the review. The studies were qualitative, mixed-methods, or randomized controlled trials. Some of the studies explored features influencing engagement when using a Web-based digital intervention, others specifically explored engagement when accessing a mobile phone app, and some looked at engagement after text message (short message service, SMS) reminders. Factors influencing engagement with digital weight management interventions were found to be both user-related (eg, perceived health benefits) and digital intervention–related (eg, ease of use and the provision of personalized information). Conclusions The findings highlight the importance of incorporating user perspectives during the digital intervention development process to encourage engagement. The review contributes to our understanding of what facilitates user engagement and points toward a coproduction approach for developing digital interventions for weight management. Particularly, it highlights the importance of thinking about user-related and digital tool–related factors from the very early stages of the intervention development process. PMID:29061557

  19. A comparison of traditional anti-inflammation and anti-infection medicinal plants with current evidence from biomedical research: Results from a regional study

    PubMed Central

    Vieira, A.

    2010-01-01

    Background: In relation to pharmacognosy, an objective of many ethnobotanical studies is to identify plant species to be further investigated, for example, tested in disease models related to the ethnomedicinal application. To further warrant such testing, research evidence for medicinal applications of these plants (or of their major phytochemical constituents and metabolic derivatives) is typically analyzed in biomedical databases. Methods: As a model of this process, the current report presents novel information regarding traditional anti-inflammation and anti-infection medicinal plant use. This information was obtained from an interview-based ethnobotanical study and was compared with current biomedical evidence using the Medline® database. Results: Of the 8 anti-infection plant species identified in the ethnobotanical study, 7 have related activities reported in the database; and of the 6 anti-inflammation plants, 4 have related activities in the database. Conclusion: Based on the novel and complementary results of the ethnobotanical and biomedical database analyses, it is suggested that some of these plants warrant additional investigation of potential anti-inflammatory or anti-infection activities in related disease models, as well as additional studies in other population groups. PMID:21589754

  20. [Type 1 diabetes mellitus: evidence from the literature for appropriate management in children's perspective].

    PubMed

    Nascimento, Lucila Castanheira; Amaral, Mariana Junco; Sparapani, Valéria de Cássia; Fonseca, Luciana Mara Monti; Nunes, Michelle Darezzo Rodrigues; Dupas, Giselle

    2011-06-01

    The objective of this study was to identify the evidence available in the literature that addresses, from the children's perspective, factors relevant to the appropriate management of type 1 diabetes mellitus. An integrative review was performed on the PubMed, CINAHL, LILACS, CUIDEN and PsycINFO databases, covering the period from 1998 to 2008 and using the following keywords: type 1 diabetes mellitus, child, prevention and control, triggering factors, emergencies, self care, learning and health education. Nineteen of the surveyed articles were selected, and their analysis revealed the following categories: living with diabetes; self-care and glucose profile; the actions of family, friends and health professionals; and school. The evidence shows that children appreciate the support they receive from their relatives, which is directly related to their preparedness for self-care. People beyond this immediate network are also valued. Areas that deserve attention are the school, the personal experience of each child, and health education.
