Sample records for oracle relational database

  1. Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale

    NASA Astrophysics Data System (ADS)

    Canali, L.; Baranowski, Z.; Kothuri, P.

    2017-10-01

    This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal and interest of this investigation is to increase the scalability and optimize the cost/performance footprint for some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of CERN accelerator controls and logging databases. The tested solution makes it possible to run reports on the controls data offloaded to Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system with a scalable analytics engine.
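
    A minimal sketch of the kind of offload described above, assuming Spark with an Oracle JDBC driver on its classpath; the connection URL, credentials, table and partition column are hypothetical placeholders rather than the CERN setup.

      # Copy an Oracle table into Hadoop-friendly Parquet files with Spark (sketch).
      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("oracle-offload").getOrCreate()

      df = (spark.read.format("jdbc")
            .option("url", "jdbc:oracle:thin:@//dbhost:1521/ACCLOG")   # hypothetical service
            .option("dbtable", "CONTROLS.LOGGING_DATA")                # hypothetical table
            .option("user", "reporting")
            .option("password", "secret")
            .option("fetchsize", "10000")
            .load())

      # Write an offline copy to HDFS as Parquet, partitioned by day for reporting queries.
      df.write.mode("overwrite").partitionBy("LOG_DAY").parquet("hdfs:///offload/logging_data")

      spark.stop()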

  2. Network Configuration of Oracle and Database Programming Using SQL

    NASA Technical Reports Server (NTRS)

    Davis, Melton; Abdurrashid, Jibril; Diaz, Philip; Harris, W. C.

    2000-01-01

    A database can be defined as a collection of information organized in such a way that it can be retrieved and used. A database management system (DBMS) can further be defined as the tool that enables us to manage and interact with the database. The Oracle 8 Server is a state-of-the-art information management environment. It is a repository for very large amounts of data, and gives users rapid access to that data. The Oracle 8 Server allows for sharing of data between applications; the information is stored in one place and used by many systems. My research will focus primarily on SQL (Structured Query Language) programming. SQL is the way you define and manipulate data in Oracle's relational database. SQL is the industry standard adopted by all database vendors. When programming with SQL, you work on sets of data (i.e., information is not processed one record at a time).
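
    A small runnable illustration of the set-based style described above, shown with Python's built-in sqlite3 so it runs anywhere; the single-statement update applies unchanged to Oracle SQL (table and data are made up).

      import sqlite3

      conn = sqlite3.connect(":memory:")
      cur = conn.cursor()
      cur.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
      cur.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                      [("Ada", "ENG", 100.0), ("Bob", "ENG", 90.0), ("Cyd", "OPS", 80.0)])

      # One statement updates the whole set of matching rows; no per-record loop is needed.
      cur.execute("UPDATE employees SET salary = salary * 1.10 WHERE dept = 'ENG'")

      for row in cur.execute("SELECT name, salary FROM employees ORDER BY name"):
          print(row)
      conn.close()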

  3. The Network Configuration of an Object Relational Database Management System

    NASA Technical Reports Server (NTRS)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  4. An Object-Relational Ifc Storage Model Based on Oracle Database

    NASA Astrophysics Data System (ADS)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, the level of collaboration across professionals attracts more attention in the architecture, engineering and construction (AEC) industry. To adapt to this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared as text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. First, we establish mapping rules between data types in the IFC specification and the Oracle database. Second, we design the IFC database according to the relationships among IFC entities. Third, we parse the IFC file and extract the IFC data. Lastly, we store the IFC data in the corresponding tables of the IFC database. In our experiments, three different building models are used to demonstrate the effectiveness of the storage model. The comparison of experimental statistics shows that IFC data are lossless during data exchange.
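
    A simplified sketch of the first step, mapping IFC/EXPRESS attribute types to Oracle column types and generating DDL for one entity table; the mapping and the IfcWall attribute list are illustrative assumptions, not the authors' actual rules.

      # Illustrative IFC-to-Oracle type mapping used to build CREATE TABLE statements.
      IFC_TO_ORACLE = {
          "INTEGER": "NUMBER(10)",
          "REAL":    "BINARY_DOUBLE",
          "BOOLEAN": "CHAR(1)",
          "STRING":  "VARCHAR2(255)",
          "ENTITY":  "NUMBER(10)",   # reference (foreign key) to another entity table
      }

      def entity_table_ddl(entity, attributes):
          """Build a CREATE TABLE statement for one IFC entity."""
          cols = ["ifc_id NUMBER(10) PRIMARY KEY"]
          cols += ["%s %s" % (name, IFC_TO_ORACLE[t]) for name, t in attributes]
          return "CREATE TABLE %s (\n  %s\n)" % (entity, ",\n  ".join(cols))

      print(entity_table_ddl("IfcWall", [("name", "STRING"), ("owner_history", "ENTITY")]))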

  5. Cryptanalysis of Password Protection of Oracle Database Management System (DBMS)

    NASA Astrophysics Data System (ADS)

    Koishibayev, Timur; Umarova, Zhanat

    2016-04-01

    This article discusses the encryption algorithms currently available in the Oracle database, as well as a proposed upgraded encryption algorithm, which consists of four steps. In conclusion, we analyze the password encryption of the Oracle Database.

  6. OrChem - An open source chemistry search engine for Oracle(R).

    PubMed

    Rijnbeek, Mark; Steinbeck, Christoph

    2009-10-22

    Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net.
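
    A toy sketch of the fingerprint screening idea underlying similarity searches of this kind: compounds are reduced to bit-set fingerprints and ranked by Tanimoto similarity against the query, keeping hits above a cut-off. The fingerprints and identifiers below are made up; this is not OrChem's PL/SQL interface.

      def tanimoto(a, b):
          """Tanimoto coefficient of two fingerprints given as sets of 'on' bit positions."""
          inter = len(a & b)
          return inter / float(len(a) + len(b) - inter) if (a or b) else 0.0

      database = {
          "CHEM-1": {1, 4, 9, 17, 23},
          "CHEM-2": {1, 4, 9, 17, 40},
          "CHEM-3": {2, 8, 33},
      }
      query = {1, 4, 9, 23, 40}
      cutoff = 0.5

      hits = sorted(((tanimoto(query, fp), cid) for cid, fp in database.items()), reverse=True)
      print([(round(score, 2), cid) for score, cid in hits if score >= cutoff])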

  7. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with database replication technologies used at WLCG and provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report in this article on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.

  8. Performance assessment of EMR systems based on post-relational database.

    PubMed

    Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji

    2012-08-01

    Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all the system users to access data, with a fast response time, anywhere and at any time. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the identical database Caché and operates efficiently at the Miyazaki University Hospital in Japan. The results proved that the post-relational database Caché works faster than the relational database Oracle and showed excellent performance in the real-time EMR system.

  9. A case study for a digital seabed database: Bohai Sea engineering geology database

    NASA Astrophysics Data System (ADS)

    Tianyun, Su; Shikui, Zhai; Baohua, Liu; Ruicai, Liang; Yanpeng, Zheng; Yong, Wang

    2006-07-01

    This paper discusses the design of the ORACLE-based Bohai Sea engineering geology database, covering requirements analysis, conceptual structure analysis, logical structure analysis, physical structure analysis and security design. In the study, we used the object-oriented Unified Modeling Language (UML) to model the conceptual structure of the database and used the powerful data management functions provided by the object-relational database ORACLE to organize and manage the storage space and improve its security. By these means, the database can provide rapid and highly effective data storage, maintenance and querying to satisfy the application requirements of the Bohai Sea Oilfield Paradigm Area Information System.

  10. Quantum search of a real unstructured database

    NASA Astrophysics Data System (ADS)

    Broda, Bogusław

    2016-02-01

    A simple circuit implementation of the oracle for Grover's quantum search of a real unstructured classical database is proposed. The oracle contains a kind of quantumly accessible classical memory, which stores the database.
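
    A small numerical sketch of this idea in plain numpy (no quantum circuit library assumed): for N = 8 items with one marked index, the oracle acts as a diagonal sign flip, and a couple of Grover iterations concentrate the probability on the marked item.

      import numpy as np

      N, marked = 8, 5
      oracle = np.eye(N)
      oracle[marked, marked] = -1.0                     # flip the phase of the marked item

      psi = np.ones(N) / np.sqrt(N)                     # uniform superposition
      diffusion = 2.0 * np.outer(psi, psi) - np.eye(N)  # inversion about the mean

      for _ in range(2):                                # about (pi/4) * sqrt(8) iterations
          psi = diffusion @ (oracle @ psi)

      print(np.argmax(psi**2), np.round(psi**2, 3))     # marked index now has ~0.95 probability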

  11. Database on Demand: insight how to build your own DBaaS

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, three major RDBMS (relational database management system) vendors are offered. In this article we show the status of the service after almost three years of operations, some insight into our redesigned software engineering, and the near-future evolution.

  12. OrChem - An open source chemistry search engine for Oracle®

    PubMed Central

    2009-01-01

    Background: Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Results: Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. Availability: OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net. PMID:20298521

  13. Selecting a Relational Database Management System for Library Automation Systems.

    ERIC Educational Resources Information Center

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  14. Oracle Database 10g: a platform for BLAST search and Regular Expression pattern matching in life sciences.

    PubMed

    Stephens, Susie M; Chen, Jake Y; Davidson, Marcel G; Thomas, Shiby; Trute, Barry M

    2005-01-01

    As database management systems expand their array of analytical functionality, they become powerful research engines for biomedical data analysis and drug discovery. Databases can hold most of the data types commonly required in life sciences and consequently can be used as flexible platforms for the implementation of knowledgebases. Performing data analysis in the database simplifies data management by minimizing the movement of data from disks to memory, allowing pre-filtering and post-processing of datasets, and enabling data to remain in a secure, highly available environment. This article describes the Oracle Database 10g implementation of BLAST and Regular Expression Searches and provides case studies of their usage in bioinformatics. http://www.oracle.com/technology/software/index.html.
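
    A hedged sketch of a Regular Expression search pushed into the database in the spirit described above, assuming the cx_Oracle driver and a reachable Oracle instance; the connect string, table and column names are hypothetical.

      import cx_Oracle

      conn = cx_Oracle.connect("bioinf/secret@dbhost:1521/LIFESCI")   # hypothetical credentials
      cur = conn.cursor()

      # Find sequences containing an N-glycosylation-like motif, evaluated server-side.
      cur.execute("""
          SELECT accession
          FROM   protein_sequences
          WHERE  REGEXP_LIKE(sequence, 'N[^P][ST][^P]')
      """)
      for (accession,) in cur:
          print(accession)
      conn.close()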

  15. Oracle Applications Patch Administration Tool (PAT) Beta Version

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2002-01-04

    PAT is a Patch Administration Tool that provides analysis, tracking, and management of Oracle Application patches. Its capabilities include the following. Patch data maintenance: track which Oracle Application patches have been applied to which database instance and machine. Patch analysis: capture text files (readme.txt and driver files); produce comparison detail reports for forms, PL/SQL packages, SQL scripts and JSP modules; parse and load the current applptch.txt (10.7) or load patch data from the Oracle Application database patch tables (11i). Display analysis: compare a patch to be applied with the Appl_top code versions currently installed in the Oracle Application, including patch detail and module comparison detail, and analyze and display a single Oracle Application module patch. Patch management: automatic queueing and execution of patches. Administration: parameter maintenance (settings for the directory structure of the Oracle Application appl_top) and validation data maintenance (machine names and instances to patch). Operation: patch data maintenance, scheduling a patch (queued for later execution), running a patch (queued for immediate execution), reviewing the patch logs, and patch management reports.

  16. Oracle Database 10g: a platform for BLAST search and Regular Expression pattern matching in life sciences

    PubMed Central

    Stephens, Susie M.; Chen, Jake Y.; Davidson, Marcel G.; Thomas, Shiby; Trute, Barry M.

    2005-01-01

    As database management systems expand their array of analytical functionality, they become powerful research engines for biomedical data analysis and drug discovery. Databases can hold most of the data types commonly required in life sciences and consequently can be used as flexible platforms for the implementation of knowledgebases. Performing data analysis in the database simplifies data management by minimizing the movement of data from disks to memory, allowing pre-filtering and post-processing of datasets, and enabling data to remain in a secure, highly available environment. This article describes the Oracle Database 10g implementation of BLAST and Regular Expression Searches and provides case studies of their usage in bioinformatics. http://www.oracle.com/technology/software/index.html PMID:15608287

  17. Performance related issues in distributed database systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    The key elements of research performed during the year long effort of this project are: Investigate the effects of heterogeneity in distributed real time systems; Study the requirements to TRAC towards building a heterogeneous database system; Study the effects of performance modeling on distributed database performance; and Experiment with an ORACLE based heterogeneous system.

  18. Incorporating client-server database architecture and graphical user interface into outpatient medical records.

    PubMed Central

    Fiacco, P. A.; Rice, W. H.

    1991-01-01

    Computerized medical record systems require structured database architectures for information processing. However, the data must be able to be transferred across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732

  19. Joint Battlespace Infosphere: Information Management Within a C2 Enterprise

    DTIC Science & Technology

    2005-06-01

    using. In version 1.2, we support both MySQL and Oracle as underlying implementations where the XML metadata schema is mapped into relational tables in... Identity Servers, Role-Based Access Control, and Policy Representation - Databases: Oracle, MySQL, TigerLogic, Berkeley XML DB... Instrumentation Services... converted to SQL for execution. Invocations are then forwarded to the appropriate underlying IOR core components that have the responsibility of issuing...

  20. C-A1-03: Considerations in the Design and Use of an Oracle-based Virtual Data Warehouse

    PubMed Central

    Bredfeldt, Christine; McFarland, Lela

    2011-01-01

    Background/Aims The amount of clinical data available for research is growing exponentially. As it grows, increasing the efficiency of both data storage and data access becomes critical. Relational database management systems (rDBMS) such as Oracle are ideal solutions for managing longitudinal clinical data because they support large-scale data storage and highly efficient data retrieval. In addition, they can greatly simplify the management of large data warehouses, including security management and regular data refreshes. However, the HMORN Virtual Data Warehouse (VDW) was originally designed based on SAS datasets, and this design choice has a number of implications for both the design and use of an Oracle-based VDW. From a design standpoint, VDW tables are designed as flat SAS datasets, which do not take full advantage of Oracle indexing capabilities. From a data retrieval standpoint, standard VDW SAS scripts do not take advantage of SAS pass-through SQL capabilities to enable Oracle to perform the processing required to narrow datasets to the population of interest. Methods Beginning in 2009, the research department at Kaiser Permanente in the Mid-Atlantic States (KPMA) has developed an Oracle-based VDW according to the HMORN v3 specifications. In order to take advantage of the strengths of relational databases, KPMA introduced an interface layer to the VDW data, using views to provide access to standardized VDW variables. In addition, KPMA has developed SAS programs that provide access to SQL pass-through processing for first-pass data extraction into SAS VDW datasets for processing by standard VDW scripts. Results We discuss both the design and performance considerations specific to the KPMA Oracle-based VDW. We benchmarked performance of the Oracle-based VDW using both standard VDW scripts and an initial pre-processing layer to evaluate speed and accuracy of data return. Conclusions Adapting the VDW for deployment in an Oracle environment required minor changes to the underlying structure of the data. Further modifications of the underlying data structure would lead to performance enhancements. Maximally efficient data access for standard VDW scripts requires an extra step that involves restricting the data to the population of interest at the data server level prior to standard processing.

  1. Database interfaces on NASA's heterogeneous distributed database system

    NASA Technical Reports Server (NTRS)

    Huang, S. H. S.

    1986-01-01

    The purpose of the ORACLE interface is to enable the DAVID program to submit queries and transactions to databases running under the ORACLE DBMS. The interface package is made up of several modules. The progress of these modules is described below. The two approaches used in implementing the interface are also discussed. Detailed discussion of the design of the templates is shown and concluding remarks are presented.

  2. Managing vulnerabilities and achieving compliance for Oracle databases in a modern ERP environment

    NASA Astrophysics Data System (ADS)

    Hölzner, Stefan; Kästle, Jan

    In this paper we summarize good practices on how to achieve compliance for an Oracle database in combination with an ERP system. We use an integrated approach to cover both the management of vulnerabilities (preventive measures) and the use of logging and auditing features (detective controls). This concise overview focusses on the combination of Oracle and SAP and its dependencies, but also outlines security issues that arise with other ERP systems. Using practical examples, we demonstrate common vulnerabilities and countermeasures as well as guidelines for the use of auditing features.

  3. Data warehousing with Oracle

    NASA Astrophysics Data System (ADS)

    Shahzad, Muhammad A.

    1999-02-01

    With the emergence of data warehousing, decision support systems have evolved considerably. At the core of these warehousing systems lies a good database management system. The database server used for data warehousing is responsible for providing robust data management, scalability, high-performance query processing and integration with other servers. Oracle, an early mover among warehousing servers, provides a wide range of features for facilitating data warehousing. This paper reviews the concept of data warehousing and the features of Oracle servers for implementing a data warehouse.

  4. Event Driven Messaging with Role-Based Subscriptions

    NASA Technical Reports Server (NTRS)

    Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Zendejas, Silvino; Sadaqathulla, Syed

    2009-01-01

    Event Driven Messaging with Role-Based Subscriptions (EDM-RBS) is a framework integrated into the Service Management Database (SMDB) to allow for role-based and subscription-based delivery of synchronous and asynchronous messages over JMS (Java Messaging Service), SMTP (Simple Mail Transfer Protocol), or SMS (Short Messaging Service). This allows for 24/7 operation with users in all parts of the world. The software classifies messages by triggering data type, application source, owner of the data triggering the event (mission), classification, sub-classification and various other secondary classifying tags. Messages are routed to applications or users based on subscription rules using a combination of the above message attributes. This program provides a framework for identifying connected users and their applications for targeted delivery of messages over JMS to the client applications the user is logged into. EDM-RBS provides the ability to send notifications over e-mail or pager rather than having to rely on a live human to do it. It is implemented as an Oracle application that uses Oracle relational database management system intrinsic functions. It is configurable to use the Oracle AQ JMS API or an external JMS provider for messaging. It fully integrates into the event-logging framework of the SMDB (Service Management Database).
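
    An illustrative sketch (not the SMDB implementation) of subscription-rule matching: each subscription lists the attribute values it accepts, and a message is delivered to every subscriber whose rule matches the message's classification attributes.

      MESSAGE = {"mission": "MRO", "classification": "ALARM", "source": "antenna-ctrl"}

      SUBSCRIPTIONS = {
          "ops-console": {"classification": {"ALARM", "WARNING"}},
          "mro-team":    {"mission": {"MRO"}, "classification": {"ALARM"}},
          "archive":     {},                       # empty rule: receives everything
      }

      def matches(rule, message):
          return all(message.get(attr) in allowed for attr, allowed in rule.items())

      recipients = [name for name, rule in SUBSCRIPTIONS.items() if matches(rule, MESSAGE)]
      print(recipients)   # ['ops-console', 'mro-team', 'archive']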

  5. Database Migration for Command and Control

    DTIC Science & Technology

    2002-11-01

    Flattened table excerpt from the source document: JDP Private Area air defense data and defended asset lists held in Oracle 7.3.2 with an automated OLTP process and proprietary SQL; ... TADIL warnings exchanged via Oracle 7.3.2 and flat files as discrete transactions with data updates and near-real-time (NRT) response required; mission and execution data pulled with standard SQL; ... Oracle 7.3 user updates, external interfaces, automatic/manual backup, messaging, proprietary replication (internal), SQL, web server.

  6. Integrating Radar Image Data with Google Maps

    NASA Technical Reports Server (NTRS)

    Chapman, Bruce D.; Gibas, Sarah

    2010-01-01

    A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA's Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Utilizing NASA's internal AIRSAR site, the new Web site features more sophisticated visualization tools that enable the general public to have access to these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that took care of the software for the interactive map, and three that were for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back-end of the AIRSAR Web site was updated in order to access the MySQL database. To do this, a few of the scripts needed to be modified; specifically, three Perl scripts that query the database. The database connections were then updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented that replaced one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.

  7. Call-Center Based Disease Management of Pediatric Asthmatics

    DTIC Science & Technology

    2006-04-01

    This study will measure the impact of CBDMP, which promotes patient education and empowerment, on multiple factors to include patient/caregiver quality... Prepare and reproduce patient education materials and informed consent work sheets. Contract an Oracle database administrator to establish the database for... Patient education materials and informed consent documents were reproduced. A web-based Oracle database was determined to be both prohibitively...

  8. Artificial intelligence, neural network, and Internet tool integration in a pathology workstation to improve information access

    NASA Astrophysics Data System (ADS)

    Sargis, J. C.; Gray, W. A.

    1999-03-01

    The APWS allows user-friendly access to several legacy systems which would normally each demand domain expertise for proper utilization. The generalized model, including objects, classes, strategies and patterns, is presented. The core components of the APWS are the Microsoft Windows 95 operating system, Oracle, Oracle Power Objects, artificial intelligence tools, a medical hyperlibrary and a web site. The paper includes a discussion of how information access could be automated by taking advantage of the expert system, object-oriented programming and intelligent relational database tools within the APWS.

  9. Sequential data access with Oracle and Hadoop: a performance comparison

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Canali, Luca; Grancher, Eric

    2014-06-01

    The Hadoop framework has proven to be an effective and popular approach for dealing with "Big Data" and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these projects deliver in practice? Does migrating to Hadoop's "shared nothing" architecture really improve data access throughput? And, if so, at what cost? The authors answer these questions, addressing cost/performance as well as raw performance, based on a performance comparison between an Oracle-based relational database and Hadoop's distributed solutions, such as MapReduce and HBase, for sequential data access. A key feature of our approach is the use of an unbiased data model, as certain data models can significantly favour one of the technologies tested.

  10. DataBase on Demand

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN, a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently the open community version of MySQL and single-instance Oracle database servers are offered. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and possible evolution scenarios.

  11. The Battle Command Sustainment Support System: Initial Analysis Report

    DTIC Science & Technology

    2016-09-01

    diagnostic monitoring, asynchronous commits, and others. The other components of the NEDP include a main forwarding gateway/web server and one or more... NATIONAL ENTERPRISE DATA PORTAL ANALYSIS: The NEDP is comprised of an Oracle Database 10g, referred to as the National Data Server, and several other... data forwarding gateways (DFG). Together with the Oracle Database 10g, these components provide a heterogeneous data source that aligns various data...

  12. Experience in running relational databases on clustered storage

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Potocky, Miroslav

    2015-12-01

    For the past eight years, the CERN IT Database group has based its backend storage on a NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services such as Database on Demand [1] and the CERN Oracle backup and recovery service. It also outlines possible trends of evolution that storage for databases could follow.

  13. An approach in building a chemical compound search engine in oracle database.

    PubMed

    Wang, H; Volarath, P; Harrison, R

    2005-01-01

    Searching for or identifying chemical compounds is an important process in drug design and in chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for a chemical compound search engine in the Oracle database is described. The framework is devoted to eliminating data type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation emphasizes the efficiency and simplicity of the framework.

  14. Service Management Database for DSN Equipment

    NASA Technical Reports Server (NTRS)

    Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Wolgast, Paul; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed

    2009-01-01

    This data- and event-driven persistent storage system leverages the use of commercial software provided by Oracle for portability, ease of maintenance, scalability, and ease of integration with embedded, client-server, and multi-tiered applications. In this role, the Service Management Database (SMDB) is a key component of the overall end-to-end process involved in the scheduling, preparation, and configuration of the Deep Space Network (DSN) equipment needed to perform the various telecommunication services the DSN provides to its customers worldwide. SMDB makes efficient use of triggers, stored procedures, queuing functions, e-mail capabilities, data management, and Java integration features provided by the Oracle relational database management system. SMDB uses a third normal form schema design that allows for simple data maintenance procedures and thin layers of integration with client applications. The software provides an integrated event logging system with ability to publish events to a JMS messaging system for synchronous and asynchronous delivery to subscribed applications. It provides a structured classification of events and application-level messages stored in database tables that are accessible by monitoring applications for real-time monitoring or for troubleshooting and analysis over historical archives.

  15. Design and implementation of a biomedical image database (BDIM).

    PubMed

    Aubry, F; Badaoui, S; Kaplan, H; Di Paola, R

    1988-01-01

    We developed a biomedical image database (BDIM) which proposes a standardized representation of value arrays such as images and curves, and of their associated parameters, independently of their acquisition mode, to make their transmission and processing easier. It includes three kinds of interactions, oriented to the users. The network concept was kept as a constraint to incorporate the BDIM in a distributed structure, and we maintained compatibility with the ACR/NEMA communication protocol. The management of arrays and their associated parameters involves two distinct bases of objects, linked together via a gateway. The first one manages arrays according to their storage mode: long-term storage on optionally on-line mass storage devices and, for consultations, partial copies of long-term stored arrays on hard disk. The second one manages the associated parameters and the gateway by means of the relational DBMS ORACLE. Parameters are grouped into relations. Some of them are in agreement with groups defined by the ACR/NEMA. The other relations describe objects resulting from processing the initial objects. These new objects are not described by the ACR/NEMA, but they can be inserted as shadow groups of the ACR/NEMA description. The relations describing the storage and their pathnames constitute the gateway. ORACLE distributed tools and the two-level storage technique will allow the integration of the BDIM into a distributed structure. The query and array retrieval module (for single arrays or sequences) has access to the relations via a layer that includes a dictionary managed by ORACLE. This dictionary translates ACR/NEMA objects into objects that can be handled by the DBMS. (ABSTRACT TRUNCATED AT 250 WORDS)

  16. [Application of the life sciences platform based on Oracle to biomedical information].

    PubMed

    Zhao, Zhi-Yun; Li, Tai-Huan; Yang, Hong-Qiao

    2008-03-01

    The life sciences platform based on Oracle database technology is introduced in this paper. By providing powerful data access, integrating a variety of data types, and managing vast quantities of data, the software presents a flexible, safe and scalable management platform for biomedical data processing.

  17. An Investigation of the Fine Spatial Structure of Meteor Streams Using the Relational Database ``Meteor''

    NASA Astrophysics Data System (ADS)

    Karpov, A. V.; Yumagulov, E. Z.

    2003-05-01

    We have restored and ordered the archive of meteor observations carried out with a meteor radar complex ``KGU-M5'' since 1986. A relational database has been formed under the control of the Database Management System (DBMS) Oracle 8. We also improved and tested a statistical method for studying the fine spatial structure of meteor streams with allowance for the specific features of application of the DBMS. Statistical analysis of the results of observations made it possible to obtain information about the substance distribution in the Quadrantid, Geminid, and Perseid meteor streams.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Mathew; Bowen, Brian; Coles, Dwight

    The Middleware Automated Deployment Utilities consist of the following three components. MAD: a utility designed to automate the deployment of Java applications to multiple Java application servers; the product contains a front-end web utility and backend deployment scripts. MAR: a web front end to maintain and update the components inside the database. MWR-Encrypt: a web utility to convert a text string to an encrypted string that is used by the Oracle WebLogic application server. The encryption is done using the built-in functions of the Oracle WebLogic product and is mainly used to create an encrypted version of a database password.

  19. NETMARK: A Schema-less Extension for Relational Databases for Managing Semi-structured Data Dynamically

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

    An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
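
    A simplified, runnable sketch of the schema-less idea: decompose an XML document into (path, content) rows so keywords can be matched against both context (the node path) and content. sqlite3 is used purely for illustration; NETMARK itself builds on Oracle 8i object-relational features.

      import sqlite3
      import xml.etree.ElementTree as ET

      doc = "<report><title>Wind tunnel test</title><body>Shear stress results</body></report>"

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE nodes (path TEXT, content TEXT)")

      def load(elem, path=""):
          p = path + "/" + elem.tag
          if elem.text and elem.text.strip():
              conn.execute("INSERT INTO nodes VALUES (?, ?)", (p, elem.text.strip()))
          for child in elem:
              load(child, p)

      load(ET.fromstring(doc))

      # Keyword search spanning context (path) and content.
      for row in conn.execute(
              "SELECT path, content FROM nodes WHERE path LIKE ? OR content LIKE ?",
              ("%title%", "%stress%")):
          print(row)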

  20. A web-based platform for virtual screening.

    PubMed

    Watson, Paul; Verdonk, Marcel; Hartshorn, Michael J

    2003-09-01

    A fully integrated, web-based, virtual screening platform has been developed to allow rapid virtual screening of large numbers of compounds. ORACLE is used to store information at all stages of the process. The system includes a large database of historical compounds from high throughput screening (HTS) and chemical suppliers, ATLAS, containing over 3.1 million unique compounds with their associated physiochemical properties (ClogP, MW, etc.). The database can be screened using a web-based interface to produce compound subsets for virtual screening or virtual library (VL) enumeration. In order to carry out the latter task within ORACLE, a reaction data cartridge has been developed. Virtual libraries can be enumerated rapidly using the web-based interface to the cartridge. The compound subsets can be seamlessly submitted for virtual screening experiments, and the results can be viewed via another web-based interface allowing ad hoc querying of the virtual screening data stored in ORACLE.

  1. Evolution of the architecture of the ATLAS Metadata Interface (AMI)

    NASA Astrophysics Data System (ADS)

    Odier, J.; Aidel, O.; Albrand, S.; Fulachier, J.; Lambert, F.

    2015-12-01

    The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of provided functions have dramatically increased. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution of AMI from its beginnings, when it was served by a single MySQL backend database server, to the current state with a cluster of virtual machines at the French Tier-1, an Oracle database at Lyon with complementary replication to the Oracle DB at CERN, and an AMI back-up server.

  2. An Extensible "SCHEMA-LESS" Database Framework for Managing High-Throughput Semi-Structured Documents

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

    An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.

  3. An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)

    2002-01-01

    An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword searches of records across both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models such as XML and HTML.

  4. The development of ORACLe: a measure of an organisation's capacity to engage in evidence-informed health policy.

    PubMed

    Makkar, Steve R; Turner, Tari; Williamson, Anna; Louviere, Jordan; Redman, Sally; Haynes, Abby; Green, Sally; Brennan, Sue

    2016-01-14

    Evidence-informed policymaking is more likely if organisations have cultures that promote research use and invest in resources that facilitate staff engagement with research. Measures of organisations' research use culture and capacity are needed to assess current capacity, identify opportunities for improvement, and examine the impact of capacity-building interventions. The aim of the current study was to develop a comprehensive system to measure and score organisations' capacity to engage with and use research in policymaking, which we entitled ORACLe (Organisational Research Access, Culture, and Leadership). We used a multifaceted approach to develop ORACLe. Firstly, we reviewed the available literature to identify key domains of organisational tools and systems that may facilitate research use by staff. We interviewed senior health policymakers to verify the relevance and applicability of these domains. This information was used to generate an interview schedule that focused on seven key domains of organisational capacity. The interview was pilot-tested within four Australian policy agencies. A discrete choice experiment (DCE) was then undertaken using an expert sample to establish the relative importance of these domains. This data was used to produce a scoring system for ORACLe. The ORACLe interview was developed, comprised of 23 questions addressing seven domains of organisational capacity and tools that support research use, including (1) documented processes for policymaking; (2) leadership training; (3) staff training; (4) research resources (e.g. database access); and systems to (5) generate new research, (6) undertake evaluations, and (7) strengthen relationships with researchers. From the DCE data, a conditional logit model was estimated to calculate total scores that took into account the relative importance of the seven domains. The model indicated that our expert sample placed the greatest importance on domains (2), (3) and (4). We utilised qualitative and quantitative methods to develop a system to assess and score organisations' capacity to engage with and apply research to policy. Our measure assesses a broad range of capacity domains and identifies the relative importance of these capacities. ORACLe data can be used by organisations keen to increase their use of evidence to identify areas for further development.
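
    A hypothetical worked example of turning per-domain ratings into a weighted total score, in the spirit of the conditional-logit weighting described above; the weights and ratings are invented for illustration and are not the published ORACLe coefficients.

      WEIGHTS = {   # relative importance of the seven domains (illustrative values summing to 1)
          "documented_processes": 0.10, "leadership_training": 0.20, "staff_training": 0.20,
          "research_resources": 0.20, "generate_research": 0.10, "evaluations": 0.10,
          "researcher_relationships": 0.10,
      }

      ratings = {domain: 2 for domain in WEIGHTS}   # each domain rated 1 (low) to 3 (high)
      ratings["research_resources"] = 3

      total = sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS)
      print(round(total, 2))                        # overall capacity score on the 1-3 scale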

  5. Modernizing the MagIC Paleomagnetic and Rock Magnetic Database Technology Stack to Encourage Code Reuse and Reproducible Science

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.

    2016-12-01

    The Magnetics Information Consortium (https://earthref.org/MagIC/) develops and maintains a database and web application for supporting the paleo-, geo-, and rock magnetic scientific community. Historically, this objective has been met with an Oracle database and a Perl web application at the San Diego Supercomputer Center (SDSC). The Oracle Enterprise Cluster at SDSC, however, was decommissioned in July of 2016 and the cost for MagIC to continue using Oracle became prohibitive. This provided MagIC with a unique opportunity to reexamine the entire technology stack and data model. MagIC has developed an open-source web application using the Meteor (http://meteor.com) framework and a MongoDB database. The simplicity of the open-source full-stack framework that Meteor provides has improved MagIC's development pace and the increased flexibility of the data schema in MongoDB encouraged the reorganization of the MagIC Data Model. As a result of incorporating actively developed open-source projects into the technology stack, MagIC has benefited from their vibrant software development communities. This has translated into a more modern web application that has significantly improved the user experience for the paleo-, geo-, and rock magnetic scientific community.

  6. Construction of a Linux based chemical and biological information system.

    PubMed

    Molnár, László; Vágó, István; Fehér, András

    2003-01-01

    A chemical and biological information system with a Web-based easy-to-use interface and corresponding databases has been developed. The constructed system incorporates all chemical, numerical and textual data related to the chemical compounds, including numerical biological screen results. Users can search the database by traditional textual/numerical and/or substructure or similarity queries through the web interface. To build our chemical database management system, we utilized existing IT components such as ORACLE or Tripos SYBYL for database management and Zope application server for the web interface. We chose Linux as the main platform, however, almost every component can be used under various operating systems.

  7. An SQL query generator for CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Chirica, Laurian

    1990-01-01

    As expert systems become more widely used, their access to large amounts of external information becomes increasingly important. This information exists in several forms, such as statistical or tabular data, knowledge gained by experts, and large databases of information maintained by companies. Because many expert systems, including CLIPS, do not provide access to this external information, much of the usefulness of expert systems is left untapped. The scope of this paper is to describe a database extension for the CLIPS expert system shell. The current industry standard database language is SQL. Due to SQL standardization, large amounts of information stored on various computers, potentially at different locations, will be more easily accessible. Expert systems should be able to directly access these existing databases rather than requiring information to be re-entered into the expert system environment. The ORACLE relational database management system (RDBMS) was used to provide a database connection within the CLIPS environment. To facilitate relational database access, a query generation system was developed as a CLIPS user function. The queries are entered in a CLIPS-like syntax and are passed to the query generator, which constructs an SQL query and submits it for execution to the ORACLE RDBMS. The query results are asserted as CLIPS facts. The query generator was developed primarily for use within the ICADS project (Intelligent Computer Aided Design System) currently being developed by the CAD Research Unit at the California Polytechnic State University (Cal Poly). In ICADS, there are several parallel or distributed expert systems accessing a common knowledge base of facts. Each expert system has a narrow domain of interest and therefore needs only certain portions of the information. The query generator provides a common method of accessing this information and allows the expert system to specify what data is needed without specifying how to retrieve it.
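
    An illustrative sketch (not the ICADS code) of the query-generation idea: a CLIPS-like pattern of field/operator/value constraints is turned into an SQL statement with bind variables, so the expert system states what data it needs without specifying how to retrieve it.

      def make_query(table, constraints):
          """constraints: list of (field, operator, value) tuples from the CLIPS-like syntax."""
          where = " AND ".join("%s %s :%d" % (field, op, i + 1)
                               for i, (field, op, _) in enumerate(constraints))
          sql = "SELECT * FROM %s WHERE %s" % (table, where)
          binds = [value for (_, _, value) in constraints]
          return sql, binds        # bind variables keep the generated SQL injection-safe

      sql, binds = make_query("rooms", [("area", ">=", 12.0), ("usage", "=", "office")])
      print(sql)     # SELECT * FROM rooms WHERE area >= :1 AND usage = :2
      print(binds)   # [12.0, 'office']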

  8. Multi atlas based segmentation: Should we prefer the best atlas group over the group of best atlases?

    PubMed

    Zaffino, Paolo; Ciardo, Delia; Raudaschl, Patrik; Fritscher, Karl; Ricotti, Rosalinda; Alterio, Daniela; Marvaso, Giulia; Fodor, Cristiana; Baroni, Guido; Amato, Francesco; Orecchia, Roberto; Jereczek-Fossa, Barbara Alicja; Sharp, Gregory C; Spadea, Maria Francesca

    2018-05-22

    Multi Atlas Based Segmentation (MABS) uses a database of atlas images, and an atlas selection process is used to choose an atlas subset for registration and voting. In the current state of the art, atlases are chosen according to a similarity criterion between the target subject and each atlas in the database. In this paper, we propose a new concept for atlas selection that relies on selecting the best performing group of atlases rather than the group of highest scoring individual atlases. Experiments were performed using CT images of 50 patients, with contours of the brainstem and parotid glands. The dataset was randomly split into 2 groups: 20 volumes were used as an atlas database and 30 served as target subjects for testing. Classic oracle selection, where atlases are chosen individually by the highest Dice Similarity Coefficient (DSC) with the target, was performed. This was compared to oracle group selection, where all combinations of atlas subgroups were considered and scored by computing the DSC with the target subject. Subsequently, Convolutional Neural Networks (CNNs) were designed to predict the best group of atlases. The results were also compared with the selection strategy based on Normalized Mutual Information (NMI). Oracle group selection proved to be significantly better than classic oracle selection (p < 10^-5). Atlas group selection led to a median±interquartile DSC of 0.740±0.084, 0.718±0.086 and 0.670±0.097 for the brainstem and left/right parotid glands respectively, outperforming NMI selection (0.676±0.113, 0.632±0.104 and 0.606±0.118; p < 0.001) as well as classic oracle selection. The implemented methodology is a proof of principle that selecting the atlases by considering the performance of the entire group of atlases, instead of each single atlas, leads to higher segmentation accuracy, even exceeding the current oracle strategy. This finding opens a new discussion about the most appropriate atlas selection criterion for MABS. © 2018 Institute of Physics and Engineering in Medicine.
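
    For reference, a worked definition of the Dice Similarity Coefficient used above to score segmentations, DSC = 2|A ∩ B| / (|A| + |B|), computed here on toy binary masks.

      import numpy as np

      def dice(a, b):
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      auto   = np.array([[0, 1, 1], [0, 1, 0]])   # automatic segmentation mask (toy)
      manual = np.array([[0, 1, 0], [0, 1, 1]])   # manual reference mask (toy)
      print(round(dice(auto, manual), 3))         # 0.667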

  9. Integration and management of massive remote-sensing data based on GeoSOT subdivision model

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Cheng, Chengqi; Chen, Bo; Meng, Li

    2016-07-01

    Owing to the rapid development of earth observation technology, the volume of spatial information is growing rapidly; therefore, improving query retrieval speed from large, rich data sources for remote-sensing data management systems is quite urgent. A global subdivision model, geographic coordinate subdivision grid with one-dimension integer coding on 2^n-tree, which we propose as a solution, has been used in data management organizations. However, because a spatial object may cover several grids, ample data redundancy will occur when data are stored in relational databases. To solve this redundancy problem, we first combined the subdivision model with the spatial array database containing the inverted index. We proposed an improved approach for integrating and managing massive remote-sensing data. By adding a spatial code column in an array format in a database, spatial information in remote-sensing metadata can be stored and logically subdivided. We implemented our method in a Kingbase Enterprise Server database system and compared the results with the Oracle platform by simulating worldwide image data. Experimental results showed that our approach performed better than Oracle in terms of data integration and time and space efficiency. Our approach also offers an efficient storage management system for existing storage centers and management systems.
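
    A simplified sketch of the subdivision-coding idea: quantize a latitude/longitude point to a quadtree-style cell code at a chosen level, and keep an inverted index from cell code to record identifiers. This is a generic 2^n-tree quantization for illustration, not the full GeoSOT specification.

      def cell_code(lat, lon, level):
          """Return a quadtree-style grid code for a point at the given subdivision level."""
          code = ""
          lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
          for _ in range(level):
              lat_mid, lon_mid = (lat_lo + lat_hi) / 2, (lon_lo + lon_hi) / 2
              code += str((2 if lat >= lat_mid else 0) + (1 if lon >= lon_mid else 0))
              lat_lo, lat_hi = (lat_mid, lat_hi) if lat >= lat_mid else (lat_lo, lat_mid)
              lon_lo, lon_hi = (lon_mid, lon_hi) if lon >= lon_mid else (lon_lo, lon_mid)
          return code

      # Inverted index: grid code -> set of record ids (scene footprints reduced to points here).
      inverted_index = {}
      for rec_id, (lat, lon) in {"scene-1": (39.9, 116.4), "scene-2": (39.8, 116.5)}.items():
          inverted_index.setdefault(cell_code(lat, lon, 6), set()).add(rec_id)
      print(inverted_index)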

  10. DEVELOPMENT AND APPLICATION OF THE DORIAN (DOSE-RESPONSE INFORMATION ANALYSIS) SYSTEM

    EPA Science Inventory

    • Migration of ArrayTrack from the proprietary Oracle database to open source Postgres database.
    • Making the public version of the ebKB available with provisions for soliciting input from collaborators and outside users.
    • Continued development ...

    • Web based tools for data manipulation, visualisation and validation with interactive georeferenced graphs

      NASA Astrophysics Data System (ADS)

      Ivankovic, D.; Dadic, V.

      2009-04-01

      Some oceanographic parameters have to be inserted into the database manually; others (for example, data from a CTD probe) are inserted from various files. All these parameters require visualization, validation and manipulation from the research vessel or scientific institution, and also public presentation. For these purposes, a web-based system has been developed, containing dynamic SQL procedures and Java applets. The technology background is an Oracle 10g relational database and Oracle Application Server. Web interfaces are developed using PL/SQL stored database procedures (mod PL/SQL). Additional parts for data visualization include Java applets and JavaScript. The mapping tool is the Google Maps API (JavaScript) and, as an alternative, a Java applet. The graph is realized as a dynamically generated web page containing a Java applet. The mapping tool and graph are georeferenced. That means that a click on some part of the graph automatically initiates a zoom or marker at the location where the parameter was measured. This feature is very useful for data validation. The code for data manipulation and visualization is partially realized with dynamic SQL, which allows us to separate the data definition from the code for data manipulation. Adding a new parameter to the system requires only a data definition and description, without programming an interface for this kind of data.

    • Database Systems and Oracle: Experiences and Lessons Learned

      ERIC Educational Resources Information Center

      Dunn, Deborah

      2005-01-01

      In a tight job market, IT professionals with database experience are likely to be in great demand. Companies need database personnel who can help improve access to and security of data. The events of September 11 have increased business' awareness of the need for database security, backup, and recovery procedures. It is our responsibility to…

    • The importance of having an appropriate relational data segmentation in ATLAS

      NASA Astrophysics Data System (ADS)

      Dimitrov, G.

      2015-12-01

      In this paper we describe specific technical solutions put in place in various database applications of the ATLAS experiment at the LHC, where we make use of several partitioning techniques available in Oracle 11g. With the broadly used range partitioning and its option of automatic interval partitioning, we add our own logic in PL/SQL procedures and scheduler jobs to sustain sliding data windows in order to enforce various data retention policies. We also make use of the new Oracle 11g reference partitioning in the Nightly Build System to achieve uniform data segmentation. However, the most challenging issue was to segment the data of the new ATLAS Distributed Data Management system (Rucio), which resulted in tens of thousands of list-type partitions and sub-partitions. Partition and sub-partition management, index strategy, statistics gathering and query execution plan stability are important factors when choosing an appropriate physical model for the application data management. The knowledge accumulated so far, together with an analysis of the new Oracle 12c features that could be beneficial, will be shared with the audience.
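      As a hedged sketch of the sliding-window technique described above (the table and column names are hypothetical, not the actual ATLAS schema), the snippet below creates an interval-partitioned table through python-oracledb and shows the kind of DROP PARTITION statement a scheduler job could issue to enforce a retention policy.

      ```python
      import oracledb  # python-oracledb client; connection details are placeholders

      # Range partitioning with automatic interval partitioning: Oracle creates a
      # new daily partition as data arrives, and a scheduled job can drop the
      # oldest partitions to maintain a sliding data window (retention policy).
      CREATE_SQL = """
      CREATE TABLE job_records (
          job_id     NUMBER NOT NULL,
          created_at DATE   NOT NULL,
          payload    VARCHAR2(4000)
      )
      PARTITION BY RANGE (created_at)
      INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
      (PARTITION p_initial VALUES LESS THAN (DATE '2015-01-01'))
      """

      # Template a retention job could fill in; partition names can be looked up
      # in the USER_TAB_PARTITIONS dictionary view.
      DROP_OLD_SQL = "ALTER TABLE job_records DROP PARTITION {name} UPDATE GLOBAL INDEXES"

      def create_table(user, password, dsn):
          with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
              with conn.cursor() as cur:
                  cur.execute(CREATE_SQL)
      ```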

    • DOE Office of Scientific and Technical Information (OSTI.GOV)

      Fisher,D

      Concerns about the long-term viability of SFS as the metadata store for HPSS have been increasing. A concern that Transarc may discontinue support for SFS motivates us to consider alternative means to store HPSS metadata. The obvious alternative is a commercial database. Commercial databases have the necessary characteristics for storage of HPSS metadata records. They are robust and scalable and can easily accommodate the volume of data that must be stored. They provide programming interfaces, transactional semantics and a full set of maintenance and performance enhancement tools. A team was organized within the HPSS project to study and recommend an approach for the replacement of SFS. Members of the team are David Fisher, Jim Minton, Donna Mecozzi, Danny Cook, Bart Parliman and Lynn Jones. We examined several possible solutions to the problem of replacing SFS, and recommended on May 22, 2000, in a report to the HPSS Technical and Executive Committees, to change HPSS into a database application over either Oracle or DB2. We recommended either Oracle or DB2 on the basis of market share and technical suitability. Oracle and DB2 are dominant offerings in the market, and it is in the best interest of HPSS to use a major player's product. Both databases provide a suitable programming interface. Transaction management functions, support for multi-threaded clients and data manipulation languages (DML) are available. These findings were supported in meetings held with technical experts from both companies. In both cases, the evidence indicated that either database would provide the features needed to host HPSS.

    • Software Application for Supporting the Education of Database Systems

      ERIC Educational Resources Information Center

      Vágner, Anikó

      2015-01-01

      The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…

    • Fermilab Security Site Access Request Database

      Science.gov Websites

      Fermilab Security Site Access Request Database Use of the online version of the Fermilab Security Site Access Request Database requires that you login into the ESH&Q Web Site. Note: Only Fermilab generated from the ESH&Q Section's Oracle database on May 27, 2018 05:48 AM. If you have a question

    • Survey Software Evaluation

      DTIC Science & Technology

      2009-01-01

      Oracle 9i, 10g  MySQL  MS SQL Server MS SQL Server Operating System Supported Windows 2003 Server  Windows 2000 Server (32 bit...WebStar (Mac OS X)  SunOne Internet Information Services (IIS) Database Server Supported MS SQL Server  MS SQL Server  Oracle 9i, 10g...challenges of Web-based surveys are: 1) identifying the best Commercial Off the Shelf (COTS) Web-based survey packages to serve the particular

    • Information integration for a sky survey by data warehousing

      NASA Astrophysics Data System (ADS)

      Luo, A.; Zhang, Y.; Zhao, Y.

      The virtualization service of the data system for the sky survey LAMOST is very important for astronomers. The service needs to integrate information from data collections, catalogs and references, and to support simple federation of a set of distributed files and associated metadata. Data warehousing has been in existence for several years and has demonstrated superiority over traditional relational database management systems by providing novel indexing schemes that support efficient on-line analytical processing (OLAP) of large databases. Now relational database systems such as Oracle support the warehouse capability, which includes extensions to the SQL language to support OLAP operations, and a number of metadata management tools have been created. The information integration of LAMOST by applying data warehousing is to effectively provide data and knowledge on-line.
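      To make the "SQL extensions for OLAP" remark concrete, here is a hedged sketch (the catalogue table and column names are hypothetical) of a ROLLUP aggregation run through python-oracledb; ROLLUP returns per-group counts together with subtotals and a grand total in a single query.

      ```python
      import oracledb  # connection parameters below are placeholders

      # GROUP BY ROLLUP is one of the SQL OLAP extensions mentioned above: it
      # produces per-group aggregates plus higher-level subtotals in one pass.
      ROLLUP_SQL = """
      SELECT survey_field, object_type, COUNT(*) AS n_objects
      FROM   catalog_objects          -- hypothetical catalogue table
      GROUP  BY ROLLUP (survey_field, object_type)
      """

      def run_rollup(user, password, dsn):
          with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
              with conn.cursor() as cur:
                  for row in cur.execute(ROLLUP_SQL):
                      print(row)
      ```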

    • An Analysis of Database Replication Technologies with Regard to Deep Space Network Application Requirements

      NASA Technical Reports Server (NTRS)

      Connell, Andrea M.

      2011-01-01

      The Deep Space Network (DSN) has three communication facilities which handle telemetry, commands, and other data relating to spacecraft missions. The network requires these three sites to share data with each other and with the Jet Propulsion Laboratory for processing and distribution. Many database management systems have replication capabilities built in, which means that data updates made at one location will be automatically propagated to other locations. This project examines multiple replication solutions, looking for stability, automation, flexibility, performance, and cost. After comparing these features, Oracle Streams is chosen for closer analysis. Two Streams environments are configured - one with a Master/Slave architecture, in which a single server is the source for all data updates, and the second with a Multi-Master architecture, in which updates originating from any of the servers will be propagated to all of the others. These environments are tested for data type support, conflict resolution, performance, changes to the data structure, and behavior during and after network or server outages. Through this experimentation, it is determined which requirements of the DSN can be met by Oracle Streams and which cannot.

    • The intelligent database machine

      NASA Technical Reports Server (NTRS)

      Yancey, K. E.

      1985-01-01

      The IDM database was compared with the Oracle database to determine whether the IDM 500 would better serve the needs of the MSFC database management system than Oracle. The two were compared and the performance of the IDM was studied. Implementations that work best on each database are indicated. The choice is left to the database administrator.

  1. Methods and apparatus for constructing and implementing a universal extension module for processing objects in a database

    NASA Technical Reports Server (NTRS)

    Li, Chung-Sheng (Inventor); Smith, John R. (Inventor); Chang, Yuan-Chi (Inventor); Jhingran, Anant D. (Inventor); Padmanabhan, Sriram K. (Inventor); Hsiao, Hui-I (Inventor); Choy, David Mun-Hien (Inventor); Lin, Jy-Jine James (Inventor); Fuh, Gene Y. C. (Inventor); Williams, Robin (Inventor)

    2004-01-01

    Methods and apparatus for providing a multi-tier object-relational database architecture are disclosed. In one illustrative embodiment of the present invention, a multi-tier database architecture comprises an object-relational database engine as a top tier, one or more domain-specific extension modules as a bottom tier, and one or more universal extension modules as a middle tier. The individual extension modules of the bottom tier operationally connect with the one or more universal extension modules which, themselves, operationally connect with the database engine. The domain-specific extension modules preferably provide such functions as search, index, and retrieval services of images, video, audio, time series, web pages, text, XML, spatial data, etc. The domain-specific extension modules may include one or more IBM DB2 extenders, Oracle data cartridges and/or Informix datablades, although other domain-specific extension modules may be used.

  2. NASA Electronic Library System (NELS) database schema, version 1.2

    NASA Technical Reports Server (NTRS)

    Melebeck, Clovis J.

    1991-01-01

    The database tables used by NELS version 1.2 are discussed. To provide the current functional capability offered by NELS, nineteen tables were created with ORACLE. Each table entry lists the ORACLE table name and provides a brief description of the table's intended use or function. The following sections cover four basic categories of tables: NELS object classes, NELS collections, NELS objects, and NELS supplemental tables. Also included in each section is a definition and/or relationship of each field to other fields or tables. The primary key(s) for each table are indicated with a single asterisk (*), while foreign keys are indicated with double asterisks (**). The primary key(s) uniquely identify a record in that table. The foreign key(s) are used to identify additional information in other table(s) for that record. The two appendices contain the commands used to construct the ORACLE tables for NELS. Appendix A contains the commands which create the tables defined in the following sections. Appendix B contains the commands which build the indices for these tables.
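    The primary-key/foreign-key convention described above can be illustrated with a hypothetical pair of CREATE TABLE statements in the same spirit (these are not the actual NELS tables); the comments mirror the single- and double-asterisk notation.

    ```python
    # Hypothetical DDL in the spirit of the NELS schema description: a collection
    # table and an object table, with primary keys (*) and a foreign key (**)
    # linking each object back to its collection.
    NELS_STYLE_DDL = [
        """
        CREATE TABLE collections (
            collection_id   NUMBER PRIMARY KEY,        -- *
            collection_name VARCHAR2(80) NOT NULL
        )
        """,
        """
        CREATE TABLE objects (
            object_id     NUMBER PRIMARY KEY,          -- *
            collection_id NUMBER NOT NULL               -- **
                REFERENCES collections (collection_id),
            title         VARCHAR2(200)
        )
        """,
    ]
    ```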

  3. PS3-21: Extracting Utilization Data from Clarity into VDW Using Oracle and SAS

    PubMed Central

    Chimmula, Srivardhan

    2013-01-01

    Background/Aims The purpose of the presentation is to demonstrate how we use SAS and Oracle to load the VDW_Utilization, VDW_DX, and VDW_PX tables from Clarity at the Kaiser Permanente Northern California (KPNC) Division of Research (DOR) site. Methods DOR uses the best of Oracle PL/SQL and SAS capabilities in building Extract, Transform and Load (ETL) processes. These processes extract patient encounter, diagnosis, and procedure data from Teradata-based Clarity. The data are then transformed to fit HMORN’s VDW definitions of the tables. The data are then loaded into the Oracle-based VDW tables on DOR’s research database, and finally a copy of each table is also created as a SAS dataset. Results DOR builds robust and efficient ETL processes that refresh the VDW Utilization table on a monthly basis, processing millions of records/observations. The ETL processes have the capability to identify daily changes in Clarity and update the VDW tables on a daily basis. Conclusions KPNC DOR combines the best of both the Oracle and SAS worlds to build ETL processes that load the data into VDW Utilization tables efficiently.
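    A hedged sketch of the incremental-refresh step described above, using hypothetical staging and target table names rather than the real Clarity/VDW schema: an Oracle MERGE statement upserts the rows changed since the previous day in one set-based operation.

    ```python
    # Hypothetical MERGE illustrating the daily-refresh idea: rows changed in a
    # Clarity-style staging table since the last run are upserted into a
    # VDW-style utilization table. Table and column names are assumptions.
    MERGE_SQL = """
    MERGE INTO vdw_utilization tgt
    USING (SELECT enc_id, mrn, adate, enctype
           FROM   clarity_stage
           WHERE  last_updated >= TRUNC(SYSDATE) - 1) src
    ON (tgt.enc_id = src.enc_id)
    WHEN MATCHED THEN UPDATE SET
         tgt.mrn = src.mrn, tgt.adate = src.adate, tgt.enctype = src.enctype
    WHEN NOT MATCHED THEN INSERT (enc_id, mrn, adate, enctype)
         VALUES (src.enc_id, src.mrn, src.adate, src.enctype)
    """
    ```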

  4. An Examination of Job Skills Posted on Internet Databases: Implications for Information Systems Degree Programs.

    ERIC Educational Resources Information Center

    Liu, Xia; Liu, Lai C.; Koong, Kai S.; Lu, June

    2003-01-01

    Analysis of 300 information technology job postings in two Internet databases identified the following skill categories: programming languages (Java, C/C++, and Visual Basic were most frequent); website development (57% sought SQL and HTML skills); databases (nearly 50% required Oracle); networks (only Windows NT or wide-area/local-area networks);…

  5. FISH Oracle: a web server for flexible visualization of DNA copy number data in a genomic context.

    PubMed

    Mader, Malte; Simon, Ronald; Steinbiss, Sascha; Kurtz, Stefan

    2011-07-28

    The rapidly growing amount of array CGH data requires improved visualization software supporting the process of identifying candidate cancer genes. Optimally, such software should work across multiple microarray platforms, should be able to cope with data from different sources and should be easy to operate. We have developed FISH Oracle, a web-based software tool to visualize data from multiple array CGH experiments in a genomic context. Its fast visualization engine and advanced web and database technology support highly interactive use. FISH Oracle comes with a convenient data import mechanism, powerful search options for genomic elements (e.g. gene names or karyobands), quick navigation and zooming into interesting regions, and mechanisms to export the visualization into different high-quality formats. These features make the software especially suitable for the needs of life scientists. FISH Oracle offers a fast and easy-to-use visualization tool for array CGH and SNP array data. It allows for the identification of genomic regions representing minimal common changes based on data from one or more experiments. FISH Oracle will be instrumental in identifying candidate oncogenes and tumor suppressor genes based on the frequency and genomic position of DNA copy number changes. The FISH Oracle application and an installed demo web server are available at http://www.zbh.uni-hamburg.de/fishoracle.

  6. FISH Oracle: a web server for flexible visualization of DNA copy number data in a genomic context

    PubMed Central

    2011-01-01

    Background The rapidly growing amount of array CGH data requires improved visualization software supporting the process of identifying candidate cancer genes. Optimally, such software should work across multiple microarray platforms, should be able to cope with data from different sources and should be easy to operate. Results We have developed FISH Oracle, a web-based software tool to visualize data from multiple array CGH experiments in a genomic context. Its fast visualization engine and advanced web and database technology support highly interactive use. FISH Oracle comes with a convenient data import mechanism, powerful search options for genomic elements (e.g. gene names or karyobands), quick navigation and zooming into interesting regions, and mechanisms to export the visualization into different high-quality formats. These features make the software especially suitable for the needs of life scientists. Conclusions FISH Oracle offers a fast and easy-to-use visualization tool for array CGH and SNP array data. It allows for the identification of genomic regions representing minimal common changes based on data from one or more experiments. FISH Oracle will be instrumental in identifying candidate oncogenes and tumor suppressor genes based on the frequency and genomic position of DNA copy number changes. The FISH Oracle application and an installed demo web server are available at http://www.zbh.uni-hamburg.de/fishoracle. PMID:21884636

  7. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valassi, A.; /CERN; Bartoldus, R.

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in the summer 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.

  8. Visualization and manipulating the image of a formal data structure (FDS)-based database

    NASA Astrophysics Data System (ADS)

    Verdiesen, Franc; de Hoop, Sylvia; Molenaar, Martien

    1994-08-01

    A vector map is a terrain representation with a vector-structured geometry. Molenaar formulated an object-oriented formal data structure (FDS) for 3D single-valued vector maps. This FDS is implemented in a database (Oracle). In this study we describe a methodology for visualizing an FDS-based database and manipulating the image. A data set retrieved by querying the database is converted into an import file for a drawing application. An objective of this study is that an end-user can alter and add terrain objects in the image. The drawing application creates an export file, which is compared with the import file. Differences between these files result in updates to the database, which involve consistency checks. In this study AutoCAD is used for visualizing and manipulating the image of the data set. A computer program has been written for the data exchange and conversion between Oracle and AutoCAD. The data structure of the FDS is compared to the data structure of AutoCAD, and the FDS data are converted into an AutoCAD structure equivalent to the FDS.

  9. Develop a Prototype Personal Health Record Application (PHR-A) that Captures Information About Daily Living Important for Diabetes and Provides Decision Support with Actionable Advice for Diabetes Self Care

    DTIC Science & Technology

    2012-10-01

    higher  Java v5Apache Struts v2  Hibernate v2  C3PO  SQL*Net client / JDBC Database Server  Oracle 10.0.2 Desktop Client  Internet Explorer...for mobile Smartphones - A Java -based framework utilizing Apache Struts on the server - Relational database to handle data storage requirements B...technologies are as follows: Technology Use Requirements Java Application Provides the backend application software to drive the PHR-A 7 BEA Web

  10. Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Managements

    NASA Technical Reports Server (NTRS)

    Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri

    2006-01-01

    NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.

  11. Software Engineering Laboratory (SEL) database organization and user's guide, revision 2

    NASA Technical Reports Server (NTRS)

    Morusiewicz, Linda; Bristow, John

    1992-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.

  12. Software Engineering Laboratory (SEL) database organization and user's guide

    NASA Technical Reports Server (NTRS)

    So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas

    1989-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.

  13. Clinical Databases for Chest Physicians.

    PubMed

    Courtwright, Andrew M; Gabriel, Peter E

    2018-04-01

    A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health condition or exposure. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  14. Heterogeneous distributed databases: A case study

    NASA Technical Reports Server (NTRS)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first data base, ICMS, resides on a VAX11/780 and has been implemented using VAX DBMS, a CODASYL based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems are common to both ships and submarines.

  15. A Framework For Fault Tolerance In Virtualized Servers

    DTIC Science & Technology

    2016-06-01

    effects into the system. Decrease in performance, the expansion in the total system size and weight, and a hike in the system cost can be counted in... benefit also shines out in terms of reliability. 41 4. How Data Guard Synchronizes Standby Databases Primary and standby databases in Oracle Data

  16. Database Entity Persistence with Hibernate for the Network Connectivity Analysis Model

    DTIC Science & Technology

    2014-04-01

    time savings in the Java coding development process. Appendices A and B describe address setup procedures for installing the MySQL database...development environment is required: • The open source MySQL Database Management System (DBMS) from Oracle, which is a Java Database Connectivity (JDBC...compliant DBMS • MySQL JDBC Driver library that comes as a plug-in with the Netbeans distribution • The latest Java Development Kit with the latest

  17. Multi-Resolution Playback of Network Trace Files

    DTIC Science & Technology

    2015-06-01

    a complete MySQL database, C++ developer tools and the libraries utilized in the development of the system (Boost and Libcrafter), and Wireshark...XE suite has a limit to the allowed size of each database. In order to be scalable, the project had to switch to the MySQL database suite. The...programs that access the database use the MySQL C++ connector, provided by Oracle, and the supplied methods and libraries. 4.4 Flow Generator Chapter 3

  18. The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity

    NASA Astrophysics Data System (ADS)

    Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Favaro, Matteo; Gregori, Daniele; Prosperini, Andrea; Pezzi, Michele; Sapunenko, Vladimir; Zizzi, Giovanni; Vagnoni, Vincenzo

    2015-05-01

    The consolidation of Mass Storage services at the INFN-CNAF Tier1 Storage department that has occurred during the last 5 years, resulted in a reliable, high performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based upon an integration between the IBM GPFS parallel filesystem and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided by XRootD and HTTP/WebDaV endpoints. Besides these services, an Oracle database facility is in production characterized by an effective level of parallelism, redundancy and availability. This facility is running databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, like Real Application Cluster (RAC), Automatic Storage Manager (ASM) and Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management and downtime reduction. The aim of the present paper is to illustrate the state-of-the-art of the INFN-CNAF Tier1 Storage department infrastructures and software services, and to give a brief outlook to forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.

  19. Big data mining: In-database Oracle data mining over hadoop

    NASA Astrophysics Data System (ADS)

    Kovacheva, Zlatinka; Naydenova, Ina; Kaloyanova, Kalinka; Markov, Krasimir

    2017-07-01

    Big data challenges different aspects of storing, processing and managing data, as well as analyzing and using data for business purposes. Applying Data Mining methods over Big Data is another challenge because of the huge data volumes, the variety of information, and the dynamics of the sources. Various applications have been developed in this area, but their successful usage depends on understanding many specific parameters. In this paper we present several opportunities for using Data Mining techniques provided by the analytical engine of the Oracle RDBMS over data stored in the Hadoop Distributed File System (HDFS). Some experimental results are given and discussed.

  20. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Barberis, D.

    2016-09-01

    The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, run and event metadata. The rapid development of "NoSQL" databases (structured storage services) in the last five years allowed an extended and complementary usage of traditional relational databases and new structured storage tools in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by the modern storage systems. The trend is towards using the best tool for each kind of data, separating for example the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of data storage infrastructure. This paper describes this technology evolution in the ATLAS database infrastructure and presents a few examples of large database applications that benefit from it.

  1. Space Station Freedom environmental database system (FEDS) for MSFC testing

    NASA Technical Reports Server (NTRS)

    Story, Gail S.; Williams, Wendy; Chiu, Charles

    1991-01-01

    The Water Recovery Test (WRT) at Marshall Space Flight Center (MSFC) is the first demonstration of integrated water recovery systems for potable and hygiene water reuse as envisioned for Space Station Freedom (SSF). In order to satisfy the safety and health requirements placed on the SSF program and facilitate test data assessment, an extensive laboratory analysis database was established to provide a central archive and data retrieval function. The database is required to store analysis results for physical, chemical, and microbial parameters measured from water, air and surface samples collected at various locations throughout the test facility. The Oracle Relational Database Management System (RDBMS) was utilized to implement a secured on-line information system with the ECLSS WRT program as the foundation for this system. The database is supported on a VAX/VMS 8810 series mainframe and is accessible from the Marshall Information Network System (MINS). This paper summarizes the database requirements, system design, interfaces, and future enhancements.

  2. GEOGRAPHIC INFORMATION SYSTEM APPROACH FOR PLAY PORTFOLIOS TO IMPROVE OIL PRODUCTION IN THE ILLINOIS BASIN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beverly Seyler; John Grube

    2004-12-10

    Oil and gas have been commercially produced in Illinois for over 100 years. Existing commercial production is from more than fifty-two named pay horizons in Paleozoic rocks ranging in age from Middle Ordovician to Pennsylvanian. Over 3.2 billion barrels of oil have been produced. Recent calculations indicate that remaining mobile resources in the Illinois Basin may be on the order of several billion barrels. Thus, large quantities of oil, potentially recoverable using current technology, remain in Illinois oil fields despite a century of development. Many opportunities for increased production may have been missed due to complex development histories, multiple stacked pays, and commingled production which makes thorough exploitation of pays and the application of secondary or improved/enhanced recovery strategies difficult. Access to data, and the techniques required to evaluate and manage large amounts of diverse data are major barriers to increased production of critical reserves in the Illinois Basin. These constraints are being alleviated by the development of a database access system using a Geographic Information System (GIS) approach for evaluation and identification of underdeveloped pays. The Illinois State Geological Survey has developed a methodology that is being used by industry to identify underdeveloped areas (UDAs) in and around petroleum reservoirs in Illinois using a GIS approach. This project utilizes a statewide oil and gas Oracle® database to develop a series of Oil and Gas Base Maps with well location symbols that are color-coded by producing horizon. Producing horizons are displayed as layers and can be selected as separate or combined layers that can be turned on and off. Map views can be customized to serve individual needs and page size maps can be printed. A core analysis database with over 168,000 entries has been compiled and assimilated into the ISGS Enterprise Oracle database. Maps of wells with core data have been generated. Data from over 1,700 Illinois waterflood units and waterflood areas have been entered into an Access® database. The waterflood area data has also been assimilated into the ISGS Oracle database for mapping and dissemination on the ArcIMS website. Formation depths for the Beech Creek Limestone, Ste. Genevieve Limestone and New Albany Shale in all of the oil producing region of Illinois have been calculated and entered into a digital database. Digital contoured structure maps have been constructed, edited and added to the ILoil website as map layers. This technology/methodology addresses the long-standing constraints related to information access and data management in Illinois by significantly simplifying the laborious process that industry presently must use to identify underdeveloped pay zones in Illinois.

  3. Design and Implementation of an Operational Database for the Fleet Area Control and Surveillance Facility, NAS North Island, San Diego, California

    DTIC Science & Technology

    1989-03-01

    deficiencies of the present system. Specific objectives critical in developing an understanding of the system are: "Identify all knowledge workers who use...to be negligible. The limited training requirements are primarily due to the users' existing knowledge of the Oracle database system, the Users

  4. A Registrar Administration System Requirements Analysis and Product Recommendation for Marine Corps University, Quantico, VA

    DTIC Science & Technology

    2011-09-01

    rate Python’s maturity as “High.” Python is nine years old and has been continuously developed and enhanced since then. During fiscal year 2010...We rate Python’s developer toolkit availability/extensibility as “Yes.” Python runs on a SQL database and is compatible with Oracle database...

  5. Building Hierarchical Representations for Oracle Character and Sketch Recognition.

    PubMed

    Jun Guo; Changhu Wang; Roman-Rangel, Edgar; Hongyang Chao; Yong Rui

    2016-01-01

    In this paper, we study oracle character recognition and general sketch recognition. First, a data set of oracle characters, which are the oldest hieroglyphs in China yet remain a part of modern Chinese characters, is collected for analysis. Second, typical visual representations in shape- and sketch-related works are evaluated. We analyze the problems suffered when addressing these representations and determine several representation design criteria. Based on the analysis, we propose a novel hierarchical representation that combines a Gabor-related low-level representation and a sparse-encoder-related mid-level representation. Extensive experiments show the effectiveness of the proposed representation in both oracle character recognition and general sketch recognition. The proposed representation is also complementary to convolutional neural network (CNN)-based models. We introduce a solution to combine the proposed representation with CNN-based models, and achieve better performances over both approaches. This solution has beaten humans at recognizing general sketches.

  6. Supplier Management System

    NASA Technical Reports Server (NTRS)

    Ramirez, Eric; Gutheinz, Sandy; Brison, James; Ho, Anita; Allen, James; Ceritelli, Olga; Tobar, Claudia; Nguyen, Thuykien; Crenshaw, Harrel; Santos, Roxann

    2008-01-01

    Supplier Management System (SMS) allows for a consistent, agency-wide performance rating system for suppliers used by NASA. This version (2.0) combines separate databases into one central database that allows for the sharing of supplier data. Information extracted from the NBS/Oracle database can be used to generate ratings. Also, supplier ratings can now be generated in the areas of cost, product quality, delivery, and audit data. Supplier data can be charted based on real-time user input. Based on these individual ratings, an overall rating can be generated. Data that normally would be stored in multiple databases, each requiring its own log-in, is now readily available and easily accessible with only one log-in required. Additionally, the database can accommodate the storage and display of quality-related data that can be analyzed and used in the supplier procurement decision-making process. Moreover, the software allows for a Closed-Loop System (supplier feedback), as well as the capability to communicate with other federal agencies.

  7. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory

    PubMed Central

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L.; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M.; Wilter da Silva, Alan; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S.; Stuart, David I.; Henrick, Kim; Esnouf, Robert M.

    2011-01-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service. PMID:21460443

  8. Over 20 years of reaction access systems from MDL: a novel reaction substructure search algorithm.

    PubMed

    Chen, Lingran; Nourse, James G; Christie, Bradley D; Leland, Burton A; Grier, David L

    2002-01-01

    From REACCS, to MDL ISIS/Host Reaction Gateway, and most recently to MDL Relational Chemistry Server, a new product based on Oracle data cartridge technology, MDL's reaction database management and retrieval systems have undergone great changes. The evolution of the system architecture is briefly discussed. The evolution of MDL reaction substructure search (RSS) algorithms is detailed. This article mainly describes a novel RSS algorithm. This algorithm is based on a depth-first search approach and is able to fully and prospectively use reaction specific information, such as reacting center and atom-atom mapping (AAM) information. The new algorithm has been used in the recently released MDL Relational Chemistry Server and allows the user to precisely find reaction instances in databases while minimizing unrelated hits. Finally, the existing and new RSS algorithms are compared with several examples.

  9. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory.

    PubMed

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M; da Silva, Alan Wilter; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S; Stuart, David I; Henrick, Kim; Esnouf, Robert M

    2011-04-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service.

  10. Waste Information Management System v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bustamante, David G.; Schade, A. Carl

    WIMS is a functional interface to an Oracle database for managing the required regulatory information about the handling of Hazardous Waste. WIMS does not have a component to track Radiological Waste data. And it does not have the ability to manage sensitive information.

  11. An alternative database approach for management of SNOMED CT and improved patient data queries.

    PubMed

    Campbell, W Scott; Pedersen, Jay; McClay, James C; Rao, Praveen; Bastola, Dhundy; Campbell, James R

    2015-10-01

    SNOMED CT is the international lingua franca of terminologies for human health. Based in Description Logics (DL), the terminology enables data queries that incorporate inferences between data elements, as well as, those relationships that are explicitly stated. However, the ontologic and polyhierarchical nature of the SNOMED CT concept model make it difficult to implement in its entirety within electronic health record systems that largely employ object oriented or relational database architectures. The result is a reduction of data richness, limitations of query capability and increased systems overhead. The hypothesis of this research was that a graph database (graph DB) architecture using SNOMED CT as the basis for the data model and subsequently modeling patient data upon the semantic core of SNOMED CT could exploit the full value of the terminology to enrich and support advanced data querying capability of patient data sets. The hypothesis was tested by instantiating a graph DB with the fully classified SNOMED CT concept model. The graph DB instance was tested for integrity by calculating the transitive closure table for the SNOMED CT hierarchy and comparing the results with transitive closure tables created using current, validated methods. The graph DB was then populated with 461,171 anonymized patient record fragments and over 2.1 million associated SNOMED CT clinical findings. Queries, including concept negation and disjunction, were then run against the graph database and an enterprise Oracle relational database (RDBMS) of the same patient data sets. The graph DB was then populated with laboratory data encoded using LOINC, as well as, medication data encoded with RxNorm and complex queries performed using LOINC, RxNorm and SNOMED CT to identify uniquely described patient populations. A graph database instance was successfully created for two international releases of SNOMED CT and two US SNOMED CT editions. Transitive closure tables and descriptive statistics generated using the graph database were identical to those using validated methods. Patient queries produced identical patient count results to the Oracle RDBMS with comparable times. Database queries involving defining attributes of SNOMED CT concepts were possible with the graph DB. The same queries could not be directly performed with the Oracle RDBMS representation of the patient data and required the creation and use of external terminology services. Further, queries of undefined depth were successful in identifying unknown relationships between patient cohorts. The results of this study supported the hypothesis that a patient database built upon and around the semantic model of SNOMED CT was possible. The model supported queries that leveraged all aspects of the SNOMED CT logical model to produce clinically relevant query results. Logical disjunction and negation queries were possible using the data model, as well as, queries that extended beyond the structural IS_A hierarchy of SNOMED CT to include queries that employed defining attribute-values of SNOMED CT concepts as search parameters. As medical terminologies, such as SNOMED CT, continue to expand, they will become more complex and model consistency will be more difficult to assure. Simultaneously, consumers of data will increasingly demand improvements to query functionality to accommodate additional granularity of clinical concepts without sacrificing speed. 
This new line of research provides an alternative approach to instantiating and querying patient data represented using advanced computable clinical terminologies. Copyright © 2015 Elsevier Inc. All rights reserved.
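    For comparison with the graph-database traversal discussed above, a relational engine can walk an IS_A hierarchy with a recursive query; the sketch below (hypothetical table and column names, executed here through python-oracledb) expands a concept to all of its descendants, which is the kind of work a precomputed transitive closure table or a graph database answers directly.

    ```python
    import oracledb  # connection is assumed to be established by the caller

    # Hypothetical recursive query over a SNOMED CT-style IS_A relationship table:
    # starting from a root concept, it collects every descendant concept.
    ISA_DESCENDANTS_SQL = """
    WITH descendants (concept_id) AS (
        SELECT :root_concept FROM dual
        UNION ALL
        SELECT r.source_id
        FROM   isa_relationships r
        JOIN   descendants d ON r.destination_id = d.concept_id
    )
    SELECT DISTINCT concept_id FROM descendants
    """

    def descendants(conn, root):
        """Return the root concept plus all concepts that are (transitively) IS_A it."""
        with conn.cursor() as cur:
            return [row[0] for row in cur.execute(ISA_DESCENDANTS_SQL, root_concept=root)]
    ```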

  12. Design and Analysis of a Model Reconfigurable Cyber-Exercise Laboratory (RCEL) for Information Assurance Education

    DTIC Science & Technology

    2004-03-01

    with MySQL . This choice was made because MySQL is open source. Any significant database engine such as Oracle or MS- SQL or even MS Access can be used...10 Figure 6. The DoD vs . Commercial Life Cycle...necessarily be interested in SCADA network security 13. MySQL (Database server) – This station represents a typical data server for a web page

  13. A Data Warehouse to Support Condition Based Maintenance (CBM)

    DTIC Science & Technology

    2005-05-01

    Application ( VBA ) code sequence to import the original MAST-generated CSV and then create a single output table in DBASE IV format. The DBASE IV format...database architecture (Oracle, Sybase, MS- SQL , etc). This design includes table definitions, comments, specification of table attributes, primary and foreign...built queries and applications. Needs the application developers to construct data views. No SQL programming experience. b. Power Database User - knows

  14. Development of a Big Data Application Architecture for Navy Manpower, Personnel, Training, and Education

    DTIC Science & Technology

    2016-03-01

    science IT information technology JBOD just a bunch of disks JDBC java database connectivity xviii JPME Joint Professional Military Education JSO...Joint Service Officer JVM java virtual machine MPP massively parallel processing MPTE Manpower, Personnel, Training, and Education NAVMAC Navy...27 external database, whether it is MySQL , Oracle, DB2, or SQL Server (Teller, 2015). Connectors optimize the data transfer by obtaining metadata

  15. Image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsh, Amber; Harsch, Tim; Pitt, Julie

    2007-08-31

    The computer side of the IMAGE project consists of a collection of Perl scripts that perform a variety of tasks; scripts are available to insert, update and delete data from the underlying Oracle database, download data from NCBI's Genbank and other sources, and generate data files for download by interested parties. Web scripts make up the tracking interface, and various tools available on the project web-site (image.llnl.gov) that provide a search interface to the database.

  16. Review of Spatial-Database System Usability: Recommendations for the ADDNS Project

    DTIC Science & Technology

    2007-12-01

    basic GIS background information, with a closer look at spatial databases. A GIS is also a computer-based system designed to capture, manage...foundation for deploying enterprise-wide spatial information systems. According to Oracle® [18], it enables accurate delivery of location-based services...Toronto TR 2007-141 Lanter, D.P. (1991). Design of a lineage-based meta-data base for GIS. Cartography and Geographic Information Systems, 18

  17. Secure quantum private information retrieval using phase-encoded queries

    NASA Astrophysics Data System (ADS)

    Olejnik, Lukasz

    2011-08-01

    We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that offers the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (inability for a querying user to steal its contents). Compared to classical solutions, the protocol offers substantial improvement in terms of communication complexity. In comparison with the recent quantum private queries [Phys. Rev. Lett. 100, 230502 (2008)] protocol, it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol and analyze its strengths and conclude that using this technique makes it challenging to obtain the unconditional (in the information-theoretic sense) privacy degree; nevertheless, in addition to being simple, the protocol still offers a privacy level. The oracle used in the protocol is inspired both by the classical computational PIR solutions as well as the Deutsch-Jozsa oracle.

  18. Secure quantum private information retrieval using phase-encoded queries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olejnik, Lukasz

    We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that offers the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (inability for a querying user to steal its contents). Compared to classical solutions, the protocol offers substantial improvement in terms of communication complexity. In comparison with the recent quantum private queries [Phys. Rev. Lett. 100, 230502 (2008)] protocol, it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol and analyze its strengths and conclude that using this technique makes it challenging to obtain the unconditional (in the information-theoretic sense) privacy degree; nevertheless, in addition to being simple, the protocol still offers a privacy level. The oracle used in the protocol is inspired both by the classical computational PIR solutions as well as the Deutsch-Jozsa oracle.

  19. Selecting the Right Courseware for Your Online Learning Program.

    ERIC Educational Resources Information Center

    O'Mara, Heather

    2000-01-01

    Presents criteria for selecting courseware for online classes. Highlights include ease of use, including navigation; assessment tools; advantages of Java-enabled courseware; advantages of Oracle databases, including scalability; future possibilities for multimedia technology; and open architecture that will integrate with other systems. (LRW)

  20. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both geospatial raster database management system and raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need of tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  1. Vector and Raster Data Storage Based on Morton Code

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.

    2018-05-01

    Even though geomatics is highly developed nowadays, the integration of spatial data in vector and raster formats is still a very tricky problem in a geographic information system environment, and there is still no proper way to solve it. This article proposes a method to integrate vector data and raster data. In this paper, we saved the image data and building vector data of Guilin University of Technology to an Oracle database. Then we used the ADO interface to connect the database to Visual C++ and converted the row and column numbers of the raster data and the X/Y coordinates of the vector data to Morton codes in the Visual C++ environment. This method stores vector and raster data in an Oracle database and uses Morton codes instead of row/column numbers and X/Y coordinates to mark the position information of vector and raster data. Using Morton codes to mark geographic information enables data storage to make full use of storage space, makes simultaneous analysis of vector and raster data more efficient, and makes visualization of vector and raster data more intuitive. This method is very helpful for situations that need to analyse or display vector data and raster data at the same time.
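    A small, self-contained sketch of the Morton (Z-order) interleaving described above, independent of the paper's Visual C++/ADO implementation: the column/row numbers of a raster cell, or quantized X/Y coordinates of a vector feature, are interleaved bit by bit into a single code, and the code can be decoded back to the original pair.

    ```python
    def morton_code(col, row, bits=16):
        """Interleave the bits of a column/row (or quantized X/Y) pair into a single
        Morton (Z-order) code, so 2-D neighbours tend to stay close in 1-D order."""
        code = 0
        for i in range(bits):
            code |= ((col >> i) & 1) << (2 * i)
            code |= ((row >> i) & 1) << (2 * i + 1)
        return code

    def morton_decode(code, bits=16):
        """Invert morton_code back to the (col, row) pair."""
        col = row = 0
        for i in range(bits):
            col |= ((code >> (2 * i)) & 1) << i
            row |= ((code >> (2 * i + 1)) & 1) << i
        return col, row

    assert morton_decode(morton_code(5, 9)) == (5, 9)
    ```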

  2. Computing and Communications Infrastructure for Network-Centric Warfare: Exploiting COTS, Assuring Performance

    DTIC Science & Technology

    2004-06-01

    remote databases, has seen little vendor acceptance. Each database ( Oracle , DB2, MySQL , etc.) has its own client- server protocol. Therefore each...existing standards – SQL , X.500/LDAP, FTP, etc. • View information dissemination as selective replication – State-oriented vs . message-oriented...allowing the 8 application to start. The resource management system would serve as a broker to the resources, making sure that resources are not

  3. Acoustic Metadata Management and Transparent Access to Networked Oceanographic Data Sets

    DTIC Science & Technology

    2013-09-30

    connectivity (ODBC) compliant data source for which drivers are available (e.g. MySQL , Oracle database, Postgres) can now be imported. Implementation...the possibility of speeding data transmission through compression (implemented) or the potential to use alternative data formats such as Java script

  4. Providing R-Tree Support for Mongodb

    NASA Astrophysics Data System (ADS)

    Xiang, Longgang; Shao, Xiaotian; Wang, Dehao

    2016-06-01

    Supporting large amounts of spatial data is a significant characteristic of modern databases. However, unlike mature relational databases such as Oracle and PostgreSQL, most of the current burgeoning NoSQL databases are not well designed for storing geospatial data, which is becoming increasingly important in various fields. In this paper, we propose a novel method to provide an R-tree index, together with the corresponding spatial range query and nearest neighbour query functions, for MongoDB, one of the most prevalent NoSQL databases. First, after in-depth analysis of MongoDB's features, we devise an efficient tabular document structure which flattens the R-tree index into MongoDB collections. Further, the corresponding mechanisms for R-tree operations are introduced, and we then discuss in detail how to integrate the R-tree into MongoDB. Finally, we present experimental results which show that our proposed method outperforms the built-in spatial index of MongoDB. Our research will greatly facilitate big data management issues with MongoDB in a variety of geospatial information applications.
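    As an illustration of what flattening an R-tree into a document collection can look like, the sketch below stores one document per R-tree node and answers a range query by descending from the root. The document layout (node _id, is_leaf flag, entries with MBRs), the collection name, and the tiny example tree are illustrative assumptions, not the schema or algorithms used by the authors.

```python
# Illustrative sketch: an R-tree flattened into a MongoDB collection, one document per node.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
nodes = client["demo"]["rtree"]  # hypothetical collection holding the flattened tree

def intersects(a, b):
    """Axis-aligned MBR overlap test; an MBR is (xmin, ymin, xmax, ymax)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def range_query(node_id, window, results):
    """Descend the flattened R-tree, fetching one node document per step."""
    node = nodes.find_one({"_id": node_id})
    for entry in node["entries"]:
        if intersects(tuple(entry["mbr"]), window):
            if node["is_leaf"]:
                results.append(entry["obj"])                   # leaf entry: stored object id
            else:
                range_query(entry["child"], window, results)   # internal entry: child node
    return results

# A two-level example tree (root plus one leaf), inserted up front for the demo.
nodes.delete_many({})
nodes.insert_many([
    {"_id": "leaf1", "is_leaf": True,
     "entries": [{"mbr": [0, 0, 1, 1], "obj": "parcel-17"},
                 {"mbr": [5, 5, 6, 6], "obj": "parcel-42"}]},
    {"_id": "root", "is_leaf": False,
     "entries": [{"mbr": [0, 0, 6, 6], "child": "leaf1"}]},
])
print(range_query("root", (0, 0, 2, 2), []))  # -> ['parcel-17']
```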

  5. The ORACLE Children Study: educational outcomes at 11 years of age following antenatal prescription of erythromycin or co-amoxiclav

    PubMed Central

    Marlow, Neil; Bower, Hannah; Jones, David; Brocklehurst, Peter; Kenyon, Sara; Pike, Katie; Taylor, David; Salt, Alison

    2017-01-01

    Background Antibiotics used for women in spontaneous preterm labour without overt infection, in contrast to those with preterm rupture of membranes, are associated with altered functional outcomes in their children. Methods From the National Pupil Database, we used Key Stage 2 scores, national test scores in school year 6 at 11 years of age, to explore the hypothesis that erythromycin and co-amoxiclav were associated with poorer educational outcomes within the ORACLE Children Study. Results Anonymised scores for 97% of surviving children born to mothers recruited to ORACLE and resident in England were analysed against treatment group, adjusting for key available socio-demographic potential confounders. No association with crude or with adjusted scores for English, mathematics or science was observed by maternal antibiotic group in either women with preterm rupture of membranes or spontaneous preterm labour with intact membranes. While the proportion of children with special educational needs was similar in each group (range 31.6–34.4%), it was higher than the national rate of 19%. Conclusions Despite evidence that antibiotics are associated with increased functional impairment at 7 years, educational test scores and special needs at 11 years of age show no differences between trial groups. Trial registration number ISRCTN 52995660 (original ORACLE trial number). PMID:27515985

  6. Development, deployment and operations of ATLAS databases

    NASA Astrophysics Data System (ADS)

    Vaniachine, A. V.; Schmitt, J. G. v. d.

    2008-07-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.

  7. ADA Implementation Issues as Discovered through a Literature Survey of Applications Outside the United States

    DTIC Science & Technology

    1992-03-01

    compile time, ensuring that operations conducted are appropriate for the object type. Each implementation requires a database known as the program...Finnish bank being developed by Nokia • Oil drilling control system managed by Sedco-Forex * Vigile - an industrial installation supervisor project by...user interface and Oracle database backend control. The software is being developed in Ada under DOD-STD-2167 under OS/2. BELGIUM BATS S.A. Project title

  8. SPINS: standardized protein NMR storage. A data dictionary and object-oriented relational database for archiving protein NMR spectra.

    PubMed

    Baran, Michael C; Moseley, Hunter N B; Sahota, Gurmukh; Montelione, Gaetano T

    2002-10-01

    Modern protein NMR spectroscopy laboratories have a rapidly growing need for an easily queried local archival system of raw experimental NMR datasets. SPINS (Standardized ProteIn Nmr Storage) is an object-oriented relational database that provides facilities for high-volume NMR data archival, organization of analyses, and dissemination of results to the public domain by automatic preparation of the header files required for submission of data to the BioMagResBank (BMRB). The current version of SPINS coordinates the process from data collection to BMRB deposition of raw NMR data by standardizing and integrating the storage and retrieval of these data in a local laboratory file system. Additional facilities include a data mining query tool, graphical database administration tools, and an NMRStar v2.1.1 file generator. SPINS also includes a user-friendly internet-based graphical user interface, which is optionally integrated with Varian VNMR NMR data collection software. This paper provides an overview of the data model underlying the SPINS database system, a description of its implementation in Oracle, and an outline of future plans for the SPINS project.

  9. PROTICdb: a web-based application to store, track, query, and compare plant proteome data.

    PubMed

    Ferry-Dumazet, Hélène; Houel, Gwenn; Montalent, Pierre; Moreau, Luc; Langella, Olivier; Negroni, Luc; Vincent, Delphine; Lalanne, Céline; de Daruvar, Antoine; Plomion, Christophe; Zivy, Michel; Joets, Johann

    2005-05-01

    PROTICdb is a web-based application, mainly designed to store and analyze plant proteome data obtained by two-dimensional polyacrylamide gel electrophoresis (2-D PAGE) and mass spectrometry (MS). The purposes of PROTICdb are (i) to store, track, and query information related to proteomic experiments, i.e., from tissue sampling to protein identification and quantitative measurements, and (ii) to integrate information from the user's own expertise and other sources into a knowledge base, used to support data interpretation (e.g., for the determination of allelic variants or products of post-translational modifications). Data insertion into the relational database of PROTICdb is achieved either by uploading outputs of image analysis and MS identification software, or by filling web forms. 2-D PAGE annotated maps can be displayed, queried, and compared through a graphical interface. Links to external databases are also available. Quantitative data can be easily exported in a tabulated format for statistical analyses. PROTICdb is based on the Oracle or the PostgreSQL Database Management System and is freely available upon request at the following URL: http://moulon.inra.fr/bioinfo/PROTICdb.

  10. Efficiently Distributing Component-based Applications Across Wide-Area Environments

    DTIC Science & Technology

    2002-01-01

    a variety of sophisticated network-accessible services such as e-mail, banking, on-line shopping, entertainment, and serving as a data exchange...product database Customer Serves as a façade to Order and Account Stateful Session Beans ShoppingCart Maintains list of items to be bought by customer...Pet Store tests; and JBoss 3.0.3 with Jetty 4.1.0, for the RUBiS tests) and a single database server (Oracle 8.1.7 Enterprise Edition), each running

  11. 12-Digit Watershed Boundary Data 1:24,000 for EPA Region 2 and Surrounding States (NAT_HYDROLOGY.HUC12_NRCS_REG2)

    EPA Pesticide Factsheets

    12 digit Hydrologic Units (HUCs) for EPA Region 2 and surrounding states (Northeastern states, parts of the Great Lakes, Puerto Rico and the USVI) downloaded from the Natural Resources Conservation Service (NRCS) Geospatial Gateway and imported into the EPA Region 2 Oracle/SDE database. This layer reflects 2009 updates to the national Watershed Boundary Database (WBD) that included new boundary data for New York and New Jersey.

  12. The ORACLE Children Study: educational outcomes at 11 years of age following antenatal prescription of erythromycin or co-amoxiclav.

    PubMed

    Marlow, Neil; Bower, Hannah; Jones, David; Brocklehurst, Peter; Kenyon, Sara; Pike, Katie; Taylor, David; Salt, Alison

    2017-03-01

    Antibiotics used for women in spontaneous preterm labour without overt infection, in contrast to those with preterm rupture of membranes, are associated with altered functional outcomes in their children. From the National Pupil Database, we used Key Stage 2 scores, national test scores in school year 6 at 11 years of age, to explore the hypothesis that erythromycin and co-amoxiclav were associated with poorer educational outcomes within the ORACLE Children Study. Anonymised scores for 97% of surviving children born to mothers recruited to ORACLE and resident in England were analysed against treatment group, adjusting for key available socio-demographic potential confounders. No association with crude or with adjusted scores for English, mathematics or science was observed by maternal antibiotic group in either women with preterm rupture of membranes or spontaneous preterm labour with intact membranes. While the proportion of children with special educational needs was similar in each group (range 31.6-34.4%), it was higher than the national rate of 19%. Despite evidence that antibiotics are associated with increased functional impairment at 7 years, educational test scores and special needs at 11 years of age show no differences between trial groups. ISRCTN 52995660 (original ORACLE trial number).

  13. Incorporating Oracle on-line space management with long-term archival technology

    NASA Technical Reports Server (NTRS)

    Moran, Steven M.; Zak, Victor J.

    1996-01-01

    The storage requirements of today's organizations are exploding. As computers continue to escalate in processing power, applications grow in complexity and data files grow in size and in number. As a result, organizations are forced to procure more and more megabytes of storage space. This paper focuses on how to expand the storage capacity of a Very Large Database (VLDB) cost-effectively within an Oracle7 data warehouse system by integrating long-term archival storage subsystems with traditional magnetic media. The Oracle architecture described in this paper was based on an actual proof of concept for a customer looking to store archived data on optical disks yet still have access to this data without user intervention. The customer had a requirement to maintain 10 years' worth of data on-line. Data less than a year old still had the potential to be updated and thus will reside on conventional magnetic disks. Data older than a year will be considered archived and will be placed on optical disks. The ability to archive data to optical disk and still have access to that data provides the system with a means to retain large amounts of readily accessible data while significantly reducing the cost of total system storage. Therefore, the cost benefits of archival storage devices can be incorporated into the Oracle storage medium and I/O subsystem without losing any of the functionality of transaction processing, while at the same time giving an organization access to all of its data.
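    The placement rule described above is simple enough to sketch: records younger than one year stay on the magnetic (updatable) tier, while older records become candidates for the optical archive yet remain queryable. The record shape, tier names, and helper functions below are illustrative assumptions, not the customer's actual implementation.

```python
# Illustrative sketch of the age-based placement rule (365-day cut-off as described);
# record fields, tier names, and the migration planner are assumptions for illustration.
from datetime import date, timedelta

ARCHIVE_AFTER = timedelta(days=365)

def storage_tier(created, today=None):
    """Return the tier a record belongs on, given its creation date."""
    today = today or date.today()
    return "magnetic" if today - created < ARCHIVE_AFTER else "optical_archive"

def plan_migration(records, today=None):
    """Yield records currently on magnetic disk that should move to the optical archive."""
    for rec in records:
        if rec["tier"] == "magnetic" and storage_tier(rec["created"], today) == "optical_archive":
            yield rec

rows = [{"id": 1, "created": date(2015, 3, 1), "tier": "magnetic"},
        {"id": 2, "created": date.today(), "tier": "magnetic"}]
print([r["id"] for r in plan_migration(rows)])  # -> [1]
```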

  14. An Oracle-based event index for ATLAS

    NASA Astrophysics Data System (ADS)

    Gallas, E. J.; Dimitrov, G.; Vasileva, P.; Baranowski, Z.; Canali, L.; Dumitru, A.; Formica, A.; ATLAS Collaboration

    2017-10-01

    The ATLAS EventIndex system has amassed a set of key quantities for a large number of ATLAS events into a Hadoop-based infrastructure for the purpose of providing the experiment with a number of event-wise services. Collecting this data in one place provides the opportunity to investigate various storage formats and technologies and assess which best serve the various use cases, as well as to consider what other benefits alternative storage systems provide. In this presentation we describe how the data are imported into an Oracle RDBMS (relational database management system), the services we have built based on this architecture, and our experience with it. We have indexed about 26 billion real data events thus far and have designed the system to accommodate future data, which has expected rates of 5 and 20 billion events per year. We have found this system offers outstanding performance for some fundamental use cases. In addition, profiting from the co-location of this data with other complementary metadata in ATLAS, the system has been easily extended to perform essential assessments of data integrity and completeness and to identify event duplication, including at what step in processing the duplication occurred.

  15. Database usage and performance for the Fermilab Run II experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonham, D.; Box, D.; Gallas, E.

    2004-12-01

    The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has represented major challenges to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product being used for these applications at Fermilab and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring the operation and diagnosing problems are also described.

  16. MitoNuc: a database of nuclear genes coding for mitochondrial proteins. Update 2002.

    PubMed

    Attimonelli, Marcella; Catalano, Domenico; Gissi, Carmela; Grillo, Giorgio; Licciulli, Flavio; Liuni, Sabino; Santamaria, Monica; Pesole, Graziano; Saccone, Cecilia

    2002-01-01

    Mitochondria, besides their central role in energy metabolism, have recently been found to be involved in a number of basic processes of cell life and to contribute to the pathogenesis of many degenerative diseases. All functions of mitochondria depend on the interaction of nuclear and organelle genomes. Mitochondrial genomes have been extensively sequenced and analysed and data have been collected in several specialised databases. In order to collect information on nuclear-coded mitochondrial proteins we developed MitoNuc, a database containing detailed information on sequenced nuclear genes coding for mitochondrial proteins in Metazoa. The MitoNuc database can be retrieved through SRS and is available via the web site http://bighost.area.ba.cnr.it/mitochondriome, where other mitochondrial databases developed by our group, the complete list of sequenced mitochondrial genomes, links to other mitochondrial sites, and related information are available. The MitoAln database, related to MitoNuc in the previous release and reporting the multiple alignments of the relevant homologous protein-coding regions, is no longer supported in the present release. In order to keep links among MitoNuc entries for homologous proteins, a new field has been defined in the database: the cluster identifier, an alphanumeric code used to identify each cluster of homologous proteins. A comment field derived from the corresponding SWISS-PROT entry has been introduced; this reports clinical data related to dysfunction of the protein. The logical schema of the MitoNuc database has been implemented in the ORACLE DBMS. This will allow end-users to retrieve data through a friendly interface that will soon be implemented.

  17. Design and development of a web-based application for diabetes patient data management.

    PubMed

    Deo, S S; Deobagkar, D N; Deobagkar, Deepti D

    2005-01-01

    A web-based database management system developed for collecting, managing and analysing information on diabetes patients is described here. It is a searchable, client-server, relational database application, developed on the Windows platform using Oracle, Active Server Pages (ASP), Visual Basic Script (VBScript) and JavaScript. The software is menu-driven and allows authorized healthcare providers to access, enter, update and analyse patient information. Graphical representations of data can be generated by the system using bar charts and pie charts. An interactive web interface allows users to query the database and generate reports. Alpha- and beta-testing of the system were carried out; at present the system holds records of 500 diabetes patients and has been found useful in diagnosis and treatment. In addition to providing patient data on a continuous basis in a simple format, the system is used in population and comparative analyses. It has proved to be of significant advantage to the healthcare provider as compared to the paper-based system.

  18. LHCb Conditions database operation assistance systems

    NASA Astrophysics Data System (ADS)

    Clemencic, M.; Shapoval, I.; Cattaneo, M.; Degaudenzi, H.; Santinelli, R.

    2012-12-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time-dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state-tracking extension to the Oracle Streams replication technology used in the WLCG 3D project, to trap cases in which the CondDB replication is corrupted. The second is an automated distribution system for the SQLite-based CondDB, which also provides smart backup and checkout mechanisms for the CondDB managers and LHCb users, respectively. And, finally, there is a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter has been fully designed and is currently moving to the implementation stage.

  19. Meta-All: a system for managing metabolic pathway information.

    PubMed

    Weise, Stephan; Grosse, Ivo; Klukas, Christian; Koschützki, Dirk; Scholz, Uwe; Schreiber, Falk; Junker, Björn H

    2006-10-23

    Many attempts are being made to understand biological subjects at a systems level. A major resource for these approaches are biological databases, storing manifold information about DNA, RNA and protein sequences including their functional and structural motifs, molecular markers, mRNA expression levels, metabolite concentrations, protein-protein interactions, phenotypic traits or taxonomic relationships. The use of these databases is often hampered by the fact that they are designed for special application areas and thus lack universality. Databases on metabolic pathways, which provide an increasingly important foundation for many analyses of biochemical processes at a systems level, are no exception from the rule. Data stored in central databases such as KEGG, BRENDA or SABIO-RK is often limited to read-only access. If experimentalists want to store their own data, possibly still under investigation, there are two possibilities. They can either develop their own information system for managing that data, which is very time-consuming and costly, or they can try to store their data in existing systems, which is often restricted. Hence, an out-of-the-box information system for managing metabolic pathway data is needed. We have designed META-ALL, an information system that allows the management of metabolic pathways, including reaction kinetics, detailed locations, environmental factors and taxonomic information. Data can be stored together with quality tags and in different parallel versions. META-ALL uses the Oracle DBMS and Oracle Application Express. We provide the META-ALL information system for download and use. In this paper, we describe the database structure and give information about the tools for submitting and accessing the data. As a first application of META-ALL, we show how the information contained in a detailed kinetic model can be stored and accessed. META-ALL is a system for managing information about metabolic pathways. It facilitates the handling of pathway-related data and is designed to help biochemists and molecular biologists in their daily research. It is available on the Web at http://bic-gh.de/meta-all and can be downloaded free of charge and installed locally.

  20. Meta-All: a system for managing metabolic pathway information

    PubMed Central

    Weise, Stephan; Grosse, Ivo; Klukas, Christian; Koschützki, Dirk; Scholz, Uwe; Schreiber, Falk; Junker, Björn H

    2006-01-01

    Background Many attempts are being made to understand biological subjects at a systems level. A major resource for these approaches are biological databases, storing manifold information about DNA, RNA and protein sequences including their functional and structural motifs, molecular markers, mRNA expression levels, metabolite concentrations, protein-protein interactions, phenotypic traits or taxonomic relationships. The use of these databases is often hampered by the fact that they are designed for special application areas and thus lack universality. Databases on metabolic pathways, which provide an increasingly important foundation for many analyses of biochemical processes at a systems level, are no exception from the rule. Data stored in central databases such as KEGG, BRENDA or SABIO-RK is often limited to read-only access. If experimentalists want to store their own data, possibly still under investigation, there are two possibilities. They can either develop their own information system for managing that data, which is very time-consuming and costly, or they can try to store their data in existing systems, which is often restricted. Hence, an out-of-the-box information system for managing metabolic pathway data is needed. Results We have designed META-ALL, an information system that allows the management of metabolic pathways, including reaction kinetics, detailed locations, environmental factors and taxonomic information. Data can be stored together with quality tags and in different parallel versions. META-ALL uses the Oracle DBMS and Oracle Application Express. We provide the META-ALL information system for download and use. In this paper, we describe the database structure and give information about the tools for submitting and accessing the data. As a first application of META-ALL, we show how the information contained in a detailed kinetic model can be stored and accessed. Conclusion META-ALL is a system for managing information about metabolic pathways. It facilitates the handling of pathway-related data and is designed to help biochemists and molecular biologists in their daily research. It is available on the Web and can be downloaded free of charge and installed locally. PMID:17059592

  1. Managing Content in a Matter of Minutes

    NASA Technical Reports Server (NTRS)

    2004-01-01

    NASA software created to help scientists expeditiously search and organize their research documents is now aiding compliance personnel, law enforcement investigators, and the general public in their efforts to search, store, manage, and retrieve documents more efficiently. Developed at Ames Research Center, NETMARK software was designed to manipulate vast amounts of unstructured and semi-structured NASA documents. NETMARK is both a relational and object-oriented technology built on an Oracle enterprise-wide database. To ensure easy user access, Ames constructed NETMARK as a Web-enabled platform utilizing the latest in Internet technology. One of the significant benefits of the program was its ability to store and manage mission-critical data.

  2. Rule-based statistical data mining agents for an e-commerce application

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Zhang, Yan-Qing; King, K. N.; Sunderraman, Rajshekhar

    2003-03-01

    Intelligent data mining techniques have useful e-Business applications. Because an e-Commerce application is related to multiple domains such as statistical analysis, market competition, price comparison, profit improvement and personal preferences, this paper presents a hybrid knowledge-based e-Commerce system fusing intelligent techniques, statistical data mining, and personal information to enhance the QoS (Quality of Service) of e-Commerce. A Web-based e-Commerce application software system, eDVD Web Shopping Center, is successfully implemented using Java servlets and an Oracle8i database server. Simulation results have shown that the hybrid intelligent e-Commerce system is able to make smart decisions for different customers.

  3. Provenance Store Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Gibson, Tara D.; Schuchardt, Karen L.

    2008-03-01

    Requirements for the provenance store and access API are developed. Existing RDF stores and APIs are evaluated against the requirements and performance benchmarks. The team’s conclusion is to use MySQL as a database backend, with a possible move to Oracle in the near-term future. Both Jena and Sesame’s APIs will be supported, but new code will use the Jena API.

  4. BioWarehouse: a bioinformatics database warehouse toolkit

    PubMed Central

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David WJ; Tenenbaum, Jessica D; Karp, Peter D

    2006-01-01

    Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the database integration problem for bioinformatics. PMID:16556315

  5. BioWarehouse: a bioinformatics database warehouse toolkit.

    PubMed

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.

  6. A Web-Based Database for Nurse Led Outreach Teams (NLOT) in Toronto.

    PubMed

    Li, Shirley; Kuo, Mu-Hsing; Ryan, David

    2016-01-01

    A web-based system can provide access to real-time data and information. Healthcare is moving towards digitizing patients' medical information and securely exchanging it through web-based systems. In one of Ontario's health regions, Nurse Led Outreach Teams (NLOT) provide emergency mobile nursing services to help reduce unnecessary transfers from long-term care homes to emergency departments. Currently the NLOT team uses a Microsoft Access database to keep track of the health information on the residents that they serve. The Access database lacks scalability, portability, and interoperability. The objective of this study is the development of a web-based database using Oracle Application Express that is easily accessible from mobile devices. The web-based database will allow NLOT nurses to enter and access resident information anytime and from anywhere.

  7. Design of Experiment and Analysis for the Joint Dynamic Allocation of Fires and Sensors (JDAFS) Simulation

    DTIC Science & Technology

    2007-06-01

    other databases such as MySQL, Oracle, and Derby will be added to future versions of the program. Setting a factor requires more than changing a single...Non-Penetrating vs. Penetrating Results...Coverage...Interaction Profile for D U-2 and C RQ-4...Figure 59. R-Squared vs. Number of Regression Tree

  8. Call-Center Based Disease Management of Pediatric Asthmatics

    DTIC Science & Technology

    2005-04-01

    study locations. Purchase peak flow meters. Prepare and reproduce patient education materials, and informed consent work sheets. Contract Oracle data...identified. Electronic peak flow meters have been purchased. Patient education materials and informed consent documents have been reproduced. A web-based...Research Center * Study population identified via military and Foundation Health databases * Electronic peak flow meters purchased * Patient education materials

  9. Software Process Automation: Experiences from the Trenches.

    DTIC Science & Technology

    1996-07-01

    Integration of problem database Weaver tions) J Process WordPerfect, All-in-One, Oracle, CM Integration of tools Weaver System K Process Framemaker, CM...handle change requests and problem reports. * Autoplan, a project management tool * Framemaker, a document processing system * Worldview, a document...Cadre, Team Work, FrameMaker, something for requirements traceability, their own homegrown scheduling tool, and their own homegrown tool integrator

  10. The PROTICdb database for 2-DE proteomics.

    PubMed

    Langella, Olivier; Zivy, Michel; Joets, Johann

    2007-01-01

    PROTICdb is a web-based database mainly designed to store and analyze plant proteome data obtained by 2D polyacrylamide gel electrophoresis (2D PAGE) and mass spectrometry (MS). The goals of PROTICdb are (1) to store, track, and query information related to proteomic experiments, i.e., from tissue sampling to protein identification and quantitative measurements; and (2) to integrate information from the user's own expertise and other sources into a knowledge base, used to support data interpretation (e.g., for the determination of allelic variants or products of posttranslational modifications). Data insertion into the relational database of PROTICdb is achieved either by uploading outputs from Mélanie, PDQuest, IM2d, ImageMaster(tm) 2D Platinum v5.0, Progenesis, Sequest, MS-Fit, and Mascot software, or by filling in web forms (experimental design and methods). 2D PAGE-annotated maps can be displayed, queried, and compared through the GelBrowser. Quantitative data can be easily exported in a tabulated format for statistical analyses with any third-party software. PROTICdb is based on the Oracle or the PostgreSQL DataBase Management System (DBMS) and is freely available upon request at http://cms.moulon.inra.fr/content/view/14/44/.

  11. Neural substrates of embodied natural beauty and social endowed beauty: An fMRI study.

    PubMed

    Zhang, Wei; He, Xianyou; Lai, Siyan; Wan, Juan; Lai, Shuxian; Zhao, Xueru; Li, Darong

    2017-08-02

    What are the neural mechanisms underlying beauty based on objective parameters and beauty based on subjective social construction? This study scanned participants with fMRI while they performed aesthetic judgments on concrete pictographs and abstract oracle bone scripts. Behavioral results showed that both pictographs and oracle bone scripts were judged to be more beautiful when they referred to beautiful objects and positive social meanings, respectively. Imaging results revealed that regions associated with perceptual, cognitive, emotional and reward processing were commonly activated in beauty judgments of both pictographs and oracle bone scripts. Moreover, stronger activations of the orbitofrontal cortex (OFC) and motor-related areas were found in beauty judgments of pictographs, whereas beauty judgments of oracle bone scripts were associated with putamen activity, implying that a stronger aesthetic experience and embodied approach toward beauty were elicited by the pictographs. In contrast, only visual processing areas were activated in the judgments of ugly pictographs and negative oracle bone scripts. The results provide evidence that the sense of beauty is triggered by two processes: one based on the objective parameters of stimuli (embodied natural beauty) and the other based on subjective social construction (social endowed beauty).

  12. Questioning ORACLE: An Assessment of ORACLE's Analysis of Teachers' Questions and [A Comment on "Questioning ORACLE"].

    ERIC Educational Resources Information Center

    Scarth, John; And Others

    1986-01-01

    Analysis of teachers' questions, part of the ORACLE (Observation Research and Classroom Learning Evaluation) project research, is examined in detail. Scarth and Hammersley argue that the rules ORACLE uses for identifying different types of questions involve levels of ambiguity and inference that threaten reliability and validity of the study's…

  13. Octree-based indexing for 3D pointclouds within an Oracle Spatial DBMS

    NASA Astrophysics Data System (ADS)

    Schön, Bianca; Mosa, Abu Saleh Mohammad; Laefer, Debra F.; Bertolotto, Michela

    2013-02-01

    A large proportion of today's digital datasets have a spatial component, the effective storage and management of which poses particular challenges, especially with light detection and ranging (LiDAR), where datasets of even small geographic areas may contain several hundred million points. While in the last decade 2.5-dimensional data were prevalent, true 3-dimensional data are increasingly commonplace via LiDAR. They have gained particular popularity for urban applications including generation of city-scale maps, baseline data for disaster management, and utility planning. Additionally, LiDAR is commonly used for flood plain identification, coastal-erosion tracking, and forest biomass mapping. Despite growing data availability, current spatial information systems do not provide suitable full support for the data's true 3D nature. Consequently, one system is needed to store the data and another for its processing, thereby necessitating format transformations. The work presented herein aims at a more cost-effective way of managing 3D LiDAR data that allows for storage and manipulation within a single system, by enabling a new index within existing spatial database management technology. An implementation of an octree index for 3D LiDAR data atop Oracle Spatial 11g is presented, along with an evaluation showing up to an eight-fold improvement compared to the native Oracle R-tree index.
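    The core idea behind such an octree index can be sketched independently of the DBMS: each 3D point is assigned the path of octant choices from the root cell down to a fixed depth, and that path (or a single integer derived from it) becomes an indexable cell key. The extent, depth, and key encoding below are illustrative assumptions, not the Oracle Spatial 11g implementation from the paper.

```python
# Illustrative sketch of octree cell addressing for a 3D point (assumed extent and depth).
def octree_path(x, y, z, extent=((0.0, 0.0, 0.0), (100.0, 100.0, 100.0)), depth=6):
    """Return the list of octant indices (0-7) from the root down to the leaf cell."""
    (x0, y0, z0), (x1, y1, z1) = extent
    path = []
    for _ in range(depth):
        mx, my, mz = (x0 + x1) / 2.0, (y0 + y1) / 2.0, (z0 + z1) / 2.0
        octant = int(x >= mx) + 2 * int(y >= my) + 4 * int(z >= mz)
        path.append(octant)
        x0, x1 = (mx, x1) if x >= mx else (x0, mx)   # shrink the cell to the chosen octant
        y0, y1 = (my, y1) if y >= my else (y0, my)
        z0, z1 = (mz, z1) if z >= mz else (z0, mz)
    return path

def path_to_key(path):
    """Pack the octant path into one integer, usable as a B-tree-indexable cell key."""
    key = 0
    for octant in path:
        key = key * 8 + octant
    return key

point = (12.5, 40.0, 3.2)   # e.g. one LiDAR return inside the assumed extent
print(octree_path(*point), path_to_key(octree_path(*point)))
```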

  14. Monitoring of services with non-relational databases and map-reduce framework

    NASA Astrophysics Data System (ADS)

    Babik, M.; Souto, F.

    2012-12-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
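    The kind of aggregation such frameworks enable can be illustrated with a toy map-reduce pass over raw test results: the map step emits a (site, service) key with partial counts, and the reduce step folds the counts into an availability fraction. The record fields and status values below are illustrative assumptions, not the actual SAM/SWAT schema.

```python
# Illustrative map-reduce style availability computation over hypothetical test records.
from collections import defaultdict

raw_results = [  # hypothetical worker-node test records
    {"site": "SITE-A", "service": "CE", "status": "OK"},
    {"site": "SITE-A", "service": "CE", "status": "CRITICAL"},
    {"site": "SITE-B", "service": "SE", "status": "OK"},
]

def map_phase(record):
    """Emit ((site, service), (ok_count, total_count)) for one raw test record."""
    ok = 1 if record["status"] == "OK" else 0
    return (record["site"], record["service"]), (ok, 1)

def reduce_phase(pairs):
    """Sum the partial counts per key and turn them into an availability fraction."""
    totals = defaultdict(lambda: [0, 0])
    for key, (ok, n) in pairs:
        totals[key][0] += ok
        totals[key][1] += n
    return {key: ok / n for key, (ok, n) in totals.items()}

print(reduce_phase(map(map_phase, raw_results)))
# {('SITE-A', 'CE'): 0.5, ('SITE-B', 'SE'): 1.0}
```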

  15. Suggestions for Documenting SOA-Based Systems

    DTIC Science & Technology

    2010-09-01

    Number FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and...understandability and fo even across an enterprise. Technical reference models (see F (e.g., Oracle database managemen general in nature, and they typica...architectural pattern. CMU/SEI-2010- T Key Aspects of the Architecture unicate something that is important to the stakeholders intaining the system

  16. NASA Electronic Library System (NELS) optimization

    NASA Technical Reports Server (NTRS)

    Pribyl, William L.

    1993-01-01

    This is a compilation of NELS (NASA Electronic Library System) Optimization progress/problem, interim, and final reports for all phases. The NELS database was examined, particularly with respect to memory, disk contention, and CPU, to discover bottlenecks. Methods to increase the speed of the NELS code were investigated. The tasks included restructuring the existing code to interact with other components more effectively. Error-reporting code was added to help detect and remove bugs in NELS. Report-writing tools were recommended for integration with the ASV3 system. The Oracle database management system and tools were to be installed on a Sun workstation, intended for demonstration purposes.

  17. An integrated photogrammetric and spatial database management system for producing fully structured data using aerial and remote sensing images.

    PubMed

    Ahmadi, Farshid Farnood; Ebadi, Hamid

    2009-01-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economical data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric systems and SDBMSs can save the time and cost of producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach; this coupled management approach is one of the main problems in GISs when using map products from photogrammetric workstations. Also, by means of such integrated systems, structured spatial data based on OGC (Open GIS Consortium) standards and on topological relations between different feature classes can be provided at the time of the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.

  18. Use of Graph Database for the Integration of Heterogeneous Biological Data.

    PubMed

    Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young

    2017-03-01

    Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
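    The contrast the study draws between join-based and traversal-based querying can be illustrated with two equivalent queries over a hypothetical drug-gene-disease schema; the table names, node labels, and relationship types below are assumptions chosen only to show the shape of each query, not the authors' actual data model.

```python
# Illustrative sketch: the same question asked of a relational schema (multiple joins)
# and of a graph database (a path pattern). Schema and names are hypothetical.

sql_query = """
SELECT d.name
FROM drug d
JOIN drug_target dt  ON dt.drug_id = d.id
JOIN gene g          ON g.id = dt.gene_id
JOIN gene_disease gd ON gd.gene_id = g.id
JOIN disease s       ON s.id = gd.disease_id
WHERE s.name = 'asthma';
"""

cypher_query = """
MATCH (d:Drug)-[:TARGETS]->(:Gene)-[:ASSOCIATED_WITH]->(s:Disease {name: 'asthma'})
RETURN d.name;
"""

print(sql_query)
print(cypher_query)
```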

  19. Use of Graph Database for the Integration of Heterogeneous Biological Data

    PubMed Central

    Yoon, Byoung-Ha; Kim, Seon-Kyu

    2017-01-01

    Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data. PMID:28416946

  20. MAGIC database and interfaces: an integrated package for gene discovery and expression.

    PubMed

    Cordonnier-Pratt, Marie-Michèle; Liang, Chun; Wang, Haiming; Kolychev, Dmitri S; Sun, Feng; Freeman, Robert; Sullivan, Robert; Pratt, Lee H

    2004-01-01

    The rapidly increasing rate at which biological data is being produced requires a corresponding growth in relational databases and associated tools that can help laboratories contend with that data. With this need in mind, we describe here a Modular Approach to a Genomic, Integrated and Comprehensive (MAGIC) Database. This Oracle 9i database derives from an initial focus in our laboratory on gene discovery via production and analysis of expressed sequence tags (ESTs), and subsequently on gene expression as assessed by both EST clustering and microarrays. The MAGIC Gene Discovery portion of the database focuses on information derived from DNA sequences and on its biological relevance. In addition to MAGIC SEQ-LIMS, which is designed to support activities in the laboratory, it contains several additional subschemas. The latter include MAGIC Admin for database administration, MAGIC Sequence for sequence processing as well as sequence and clone attributes, MAGIC Cluster for the results of EST clustering, MAGIC Polymorphism in support of microsatellite and single-nucleotide-polymorphism discovery, and MAGIC Annotation for electronic annotation by BLAST and BLAT. The MAGIC Microarray portion is a MIAME-compliant database with two components at present. These are MAGIC Array-LIMS, which makes possible remote entry of all information into the database, and MAGIC Array Analysis, which provides data mining and visualization. Because all aspects of interaction with the MAGIC Database are via a web browser, it is ideally suited not only for individual research laboratories but also for core facilities that serve clients at any distance.

  1. A cognitive network for oracle bone characters related to animals

    NASA Astrophysics Data System (ADS)

    Dress, Andreas; Grünewald, Stefan; Zeng, Zhenbing

    2016-01-01

    In this paper, we present an analysis of oracle bone characters for animals from a “cognitive” point of view. After some general remarks on oracle-bone characters presented in Sec. 1 and a short outline of the paper in Sec. 2, we collect various oracle-bone characters for animals from published resources in Sec. 3. In the next section, we begin analyzing a group of 60 ancient animal characters from www.zdic.net, a highly acclaimed internet dictionary of Chinese characters that is strictly based on historical sources, and introduce five categories of specific features regarding their (graphical) structure that will be used in Sec. 5 to associate corresponding feature vectors to these characters. In Sec. 6, these feature vectors will be used to investigate their dissimilarity in terms of a family of parameterized distance measures. And in the last section, we apply the SplitsTree method as encoded in the NeighborNet algorithms to construct a corresponding family of dissimilarity-based networks with the intention of elucidating how the ancient Chinese might have perceived the “animal world” in the late bronze age and to demonstrate that these pictographs reflect an intuitive understanding of this world and its inherent structure that predates its classification in the oldest surviving Chinese encyclopedia from approximately the third century BC, the Er Ya, as well as similar classification systems in the West by one to two millennia. We also present an English dictionary of 70 oracle bone characters for animals in Appendix A. In Appendix B, we list various variants of animal characters that were published in the Jia Gu Wen Bian (cf. 甲骨文编, A Complete Collection of Oracle Bone Characters, edited by the Institute of Archaeology of the Chinese Academy of Social Sciences, published by the Zhonghua Book Company in 1965). We recall the frequencies of the 521 most frequent oracle bone characters in Appendix C as reported in [T. Chen, Yin-Shang Jiaguwen Zixing Xitong Zai Yanjiu (The Structural System of Oracle Inscriptions) (Shanghai Renmin Chubanshe, Shanghai, 2010); Jiaguwen Shiwen Yongzi Pinlü Biao (A Frequency List of Oracle Characters), Center for the Study and Application of Chinese Characters (East China Normal University, Shanghai, 2010), http://www.wenzi.cn/en/default.aspx]. And in Appendix D, we list the animals registered in the last five chapters of the Er Ya.

  2. Cloud Computing Trace Characterization and Synthetic Workload Generation

    DTIC Science & Technology

    2013-03-01

    measurements [44]. Olio is primarily for learning Web 2.0 technologies, evaluating the three implementations (PHP, Java EE, and RubyOnRails (ROR...Add Event 17 Olio is well documented, but assumes prerequisite knowledge with setup and operation of apache web servers and MySQL databases. Olio...Faban supports numerous servers such as Apache httpd, Sun Java System Web, Portal and Mail Servers, Oracle RDBMS, memcached, and others [18]. Perhaps

  3. Evaluation of Marine Corps Manpower Computer Simulation Model

    DTIC Science & Technology

    2016-12-01

    merit-based promotion selection that is in conjunction with the “up or out” manpower system. To ensure mission accomplishment within M&RA, it is...historical data the MSM pulls from an online Oracle database. Two types of database pulls occur here: acquiring historical data of manpower pyramid...is based on the assumption that the historical manpower progression is constant, and therefore is controllable. This unfortunately does not marry

  4. A WebGIS system on the base of satellite data processing system for marine application

    NASA Astrophysics Data System (ADS)

    Gong, Fang; Wang, Difeng; Huang, Haiqing; Chen, Jianyu

    2007-10-01

    From 2002 to 2004, a satellite data processing system for marine applications was built up in the State Key Laboratory of Satellite Ocean Environment Dynamics (Second Institute of Oceanography, State Oceanic Administration). The system received satellite data from TERRA, AQUA, NOAA-12/15/16/17/18 and FY-1D and automatically generated Level-3 and Level-4 products (single-orbit products and merged multi-orbit products) from Level-0 data, under the control of an operational control sub-system. Currently, the products created by this system play an important role in marine environment monitoring, disaster monitoring and research. A distribution platform has now been developed on this foundation, namely a WebGIS system for querying and browsing oceanic remote sensing data. The system is built upon the large-scale database system Oracle, and in addition we made use of the ArcSDE spatial database engine and other middleware to perform database operations. The J2EE framework was adopted as the development model, with Oracle 9.2 DBMS as the back-end database and server. Simply using standard browsers (such as IE 6.0), users can visit and browse the public service information provided by the system, including browsing oceanic remote sensing data, zooming in and out, panning, refreshing, roaming, further data inquiry, attribute search, data download, and so on. The system is still under test. Such a system will become an important distribution platform for Chinese satellite oceanic environment products organised by topic and category (including sea surface temperature, chlorophyll concentration, and so on), improving the utilization of satellite products and promoting data sharing and oceanic remote sensing research.

  5. Adapting My Weather Impacts Decision Aid (MyWIDA) to Additional Web Application Server Technologies

    DTIC Science & Technology

    2015-08-01

    Oracle. Further, some datatypes that are used with HSQLDB are not compatible with Oracle. Most notably, the VARBINARY datatype is not compatible...with Oracle, and instead was replaced with the Binary Large Object (BLOB) datatype. Oracle additionally has a different implementation of primary keys

  6. Planning the data transition of a VLDB: a case study

    NASA Astrophysics Data System (ADS)

    Finken, Shirley J.

    1997-02-01

    This paper describes the technical and programmatic plans for moving and checking certain data from the IDentification Automated Services (IDAS) system to the new Interstate Identification Index/Federal Bureau of Investigation (III/FBI) Segment database--one of the three components of the Integrated Automated Fingerprint Identification System (IAFIS) being developed by the Federal Bureau of Investigation, Criminal Justice Information Services Division. Transitioning IDAS to III/FBI includes putting the data into an entirely new target database structure (i.e., from IBM VSAM files to ORACLE7 RDBMS tables). Only four IDAS files were transitioned (CCN, CCR, CCA, and CRS), but their total estimated size is 500 GB of data. The transitioning of this Very Large Database is planned as two processes.

  7. Real-Time Payload Control and Monitoring on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Sun, Charles; Windrem, May; Givens, John J. (Technical Monitor)

    1998-01-01

    World Wide Web (W3) technologies such as the Hypertext Transfer Protocol (HTTP) and the Java object-oriented programming environment offer a powerful, yet relatively inexpensive, framework for distributed application software development. This paper describes the design of a real-time payload control and monitoring system that was developed with W3 technologies at NASA Ames Research Center. Based on Java Development Toolkit (JDK) 1.1, the system uses an event-driven "publish and subscribe" approach to inter-process communication and graphical user-interface construction. A C Language Integrated Production System (CLIPS) compatible inference engine provides the back-end intelligent data processing capability, while Oracle Relational Database Management System (RDBMS) provides the data management function. Preliminary evaluation shows acceptable performance for some classes of payloads, with Java's portability and multimedia support identified as the most significant benefit.
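
    The event-driven "publish and subscribe" approach mentioned above can be sketched in a few lines of plain Java. The topic string and listener below are illustrative only and are not the actual payload-monitoring interfaces.

        // Minimal publish/subscribe sketch: subscribers register with a bus and are
        // notified whenever a value is published on a topic.
        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;

        public class TelemetryBus {
            public interface Subscriber {
                void onEvent(String topic, double value);
            }

            private final List<Subscriber> subscribers = new CopyOnWriteArrayList<>();

            public void subscribe(Subscriber s) { subscribers.add(s); }

            public void publish(String topic, double value) {
                for (Subscriber s : subscribers) {
                    s.onEvent(topic, value); // each GUI widget or rule engine reacts independently
                }
            }

            public static void main(String[] args) {
                TelemetryBus bus = new TelemetryBus();
                bus.subscribe((topic, value) ->
                    System.out.println("display update: " + topic + " = " + value));
                bus.publish("payload/temperature", 21.7);
            }
        }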

  8. Modernization of the USGS Hawaiian Volcano Observatory Seismic Processing Infrastructure

    NASA Astrophysics Data System (ADS)

    Antolik, L.; Shiro, B.; Friberg, P. A.

    2016-12-01

    The USGS Hawaiian Volcano Observatory (HVO) operates a Tier 1 Advanced National Seismic System (ANSS) seismic network to monitor, characterize, and report on volcanic and earthquake activity in the State of Hawaii. Upgrades at the observatory since 2009 have improved the digital telemetry network, computing resources, and seismic data processing with the adoption of the ANSS Quake Management System (AQMS). HVO aims to build on these efforts by further modernizing its seismic processing infrastructure and strengthening its ability to meet ANSS performance standards. Most notably, this will also allow HVO to support redundant systems, both onsite and offsite, in order to provide better continuity of operation during intermittent power and network outages. We are in the process of implementing a number of upgrades and improvements to HVO's seismic processing infrastructure, including: 1) Virtualization of AQMS physical servers; 2) Migration of server operating systems from Solaris to Linux; 3) Consolidation of AQMS real-time and post-processing services onto a single server; 4) Upgrading the database from Oracle 10 to Oracle 12; and 5) Upgrading to the latest Earthworm and AQMS software. These improvements will make server administration more efficient, minimize the hardware resources required by AQMS, simplify the Oracle replication setup, and provide better integration with HVO's existing state-of-health monitoring tools and backup system. Ultimately, it will provide HVO with the latest and most secure software available while making the software easier to deploy and support.

  9. User's Manual for the Naval Interactive Data Analysis System-Climatologies (NIDAS-C), Version 2.0

    NASA Technical Reports Server (NTRS)

    Abbott, Clifton

    1996-01-01

    This technical note provides the user's manual for the NIDAS-C system developed for the Naval Oceanographic Office. NIDAS-C operates using numerous oceanographic data categories stored in an installed version of the Naval Environmental Operational Nowcast System (NEONS), a relational database management system (rdbms) which employs the ORACLE proprietary rdbms engine. Data management, configuration, and control functions for the supporting rdbms are performed externally. NIDAS-C stores and retrieves data to/from the rdbms but exercises no direct internal control over the rdbms or its configuration. Data is also ingested into the rdbms, for use by NIDAS-C, by external data acquisition processes. The data categories employed by NIDAS-C are as follows: Bathymetry - ocean depth at

  10. Integrating Oracle Human Resources with Other Modules

    NASA Technical Reports Server (NTRS)

    Sparks, Karl; Shope, Shawn

    1998-01-01

    One of the most challenging aspects of implementing an enterprise-wide business system is achieving integration of the different modules to the satisfaction of diverse customers. The Jet Propulsion Laboratory's (JPL) implementation of the Oracle application suite demonstrates the need to coordinate Oracle Human Resources Management System (HRMS) decisions across the Oracle modules.

  11. Study on Big Database Construction and its Application of Sample Data Collected in CHINA'S First National Geographic Conditions Census Based on Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Cheng, T.; Zhou, X.; Jia, Y.; Yang, G.; Bai, J.

    2018-04-01

    In the project of China's First National Geographic Conditions Census, millions of sample data records were collected all over the country for interpreting land cover based on remote sensing images; the number of data files exceeds 12,000,000 and has continued to grow in the follow-on project of National Geographic Conditions Monitoring. At present, using a database such as Oracle to store the big data is the most effective approach, but an applicable method for managing and applying the sample data is even more significant. This paper studies a database construction method based on a relational database combined with a distributed file system, in which the vector data and the file data are saved in different physical locations; the key issues and their solutions are discussed. On this basis, it studies methods for applying the sample data and analyzes several use cases, which lay the foundation for the sample data's application. In particular, sample data located in Shaanxi province are selected to verify the method. Taking the 10 first-level classes defined in the land cover classification system as an example, the spatial distribution and density characteristics of all kinds of sample data are analyzed. The results verify that the database construction method based on a relational database with a distributed file system is useful and applicable for searching, analyzing, and applying the sample data. Furthermore, the sample data collected in the project of China's First National Geographic Conditions Census could be useful for Earth observation and land cover quality assessment.
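
    A minimal sketch of the storage split described above, under the assumption (hypothetical here) that each sample photo stays on a distributed file system while Oracle keeps only its metadata and location. Table, column, and path names are invented for illustration.

        // Hedged sketch of the "relational database + distributed file system" split:
        // only metadata and the file's DFS location go into Oracle; the image itself
        // remains on the file system.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class SampleMetadataLoader {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/CENSUS", "gis", "secret");
                     PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO SAMPLE_FILE (SAMPLE_ID, CLASS_CODE, PROVINCE, DFS_PATH) "
                         + "VALUES (?, ?, ?, ?)")) {
                    ps.setString(1, "SX-000123");
                    ps.setString(2, "01");                               // first-level land cover class
                    ps.setString(3, "Shaanxi");
                    ps.setString(4, "hdfs://cluster/samples/SX-000123.jpg"); // file stays on the DFS
                    ps.executeUpdate();
                }
            }
        }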

  12. Quantum partial search for uneven distribution of multiple target items

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Korepin, Vladimir

    2018-06-01

    The quantum partial search algorithm is an approximate search. It aims to find a target block (the block containing the target items), and it runs a little faster than full Grover search. In this paper, we consider the quantum partial search algorithm for multiple target items unevenly distributed in a database (target blocks have different numbers of target items). The algorithm we describe can locate one of the target blocks. Efficiency of the algorithm is measured by the number of queries to the oracle. We optimize the algorithm in order to improve efficiency. Using a perturbation method, we find that the algorithm runs the fastest when target items are evenly distributed in the database.

  13. LETTER TO THE EDITOR: Optimization of partial search

    NASA Astrophysics Data System (ADS)

    Korepin, Vladimir E.

    2005-11-01

    A quantum Grover search algorithm can find a target item in a database faster than any classical algorithm. One can trade accuracy for speed and find a part of the database (a block) containing the target item even faster; this is partial search. A partial search algorithm was recently suggested by Grover and Radhakrishnan. Here we optimize it. Efficiency of the search algorithm is measured by the number of queries to the oracle. The author suggests a new version of the Grover-Radhakrishnan algorithm which uses a minimal number of such queries. The algorithm can run on the same hardware that is used for the usual Grover algorithm.
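
    For orientation, the query counts involved can be written out as a sketch; the constant in the partial-search saving depends on the optimization and is deliberately left generic here.

        % Full Grover search over N items with a single target item needs about
        %   j_full  ~  (pi/4) * sqrt(N)   oracle queries.
        j_{\text{full}} \approx \frac{\pi}{4}\sqrt{N}
        % Partial search over K blocks of size b = N/K locates the target block with
        % roughly c*sqrt(b) fewer queries, for some constant c of order one:
        j_{\text{partial}} \approx \frac{\pi}{4}\sqrt{N} \;-\; c\,\sqrt{b},
            \qquad b = \frac{N}{K},\quad c = O(1)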

  14. Correlation analyses of phytochemical composition, chemical, and cellular measures of antioxidant activity of broccoli (Brassica oleracea L. Var. italica).

    PubMed

    Eberhardt, Marian V; Kobira, Kanta; Keck, Anna-Sigrid; Juvik, John A; Jeffery, Elizabeth H

    2005-09-21

    Chemical measures of antioxidant activity within the plant, such as the oxygen radical absorbance capacity (ORAC) assay, have been reported for many plant-based foods. However, the extent to which chemical measures relate to cellular measures of oxidative stress is unclear. The natural variation in the phytochemical content of 22 broccoli genotypes was used to determine correlations among chemical composition (carotenoids, tocopherols and polyphenolics), chemical antioxidant activity (ORAC), and measures of cellular antioxidation [prevention of DNA oxidative damage and of oxidation of the biomarker dichlorofluorescein (DCFH) in HepG2 cells] using hydrophilic and lipophilic extracts of broccoli. For lipophilic extracts, ORAC (ORAC-L) correlated with inhibition of cellular oxidation of DCFH (DCFH-L, r = 0.596, p = 0.006). Also, DNA damage in the presence of the lipophilic extract was negatively correlated with both chemical and cellular measures of antioxidant activity as measured by ORAC-L (r = -0.705, p = 0.015) and DCFH-L (r = -0.671, p = 0.048), respectively. However, no correlations were observed for hydrophilic (-H) extracts, except between polyphenol content and ORAC (ORAC-H; r = 0.778, p < 0.001). Inhibition of cellular oxidation by hydrophilic extracts (DCFH-H) and ORAC-H were approximately 8- and 4-fold greater than DCFH-L and ORAC-L, respectively. Whether ORAC-H has more biological relevance than ORAC-L because of its magnitude or whether ORAC-L bears more biological relevance because it relates to cellular estimates of antioxidant activity remains to be determined. Chemical estimates of antioxidant capacity within the plant may not accurately reflect the complex nature of the full antioxidant activity of broccoli extracts within cells.

  15. REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY*

    PubMed Central

    Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High throughput genetic sequencing arrays with thousands of measurements per sample and a great amount of related censored clinical data have increased demanding need for better measurement specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties are employed and both LASSO and SCAD are discussed specifically. We unveil the question under which dimensionality and correlation restrictions can an oracle estimator be constructed and grasped. It is demonstrated that non-concave penalties lead to significant reduction of the “irrepresentable condition” needed for LASSO model selection consistency. The large deviation result for martingales, bearing interests of its own, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator, is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171
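
    As a reminder of the setup (standard notation, not copied verbatim from the paper), the estimator studied here maximizes a penalized Cox log partial likelihood of the form shown below.

        % delta_i : censoring indicator;  R(t_i) : risk set at observed time t_i;
        % p_lambda : folded-concave penalty such as SCAD (LASSO corresponds to p_lambda(t) = lambda * t).
        \hat{\beta} \;=\; \arg\max_{\beta}\;
            \sum_{i:\,\delta_i=1}\Big[\, x_i^{\top}\beta
              \;-\; \log\!\sum_{j\in R(t_i)} \exp\!\big(x_j^{\top}\beta\big) \Big]
            \;-\; n\sum_{k=1}^{p} p_{\lambda}\!\big(|\beta_k|\big)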

  16. REGULARIZATION FOR COX'S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY.

    PubMed

    Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High throughput genetic sequencing arrays with thousands of measurements per sample and a great amount of related censored clinical data have increased demanding need for better measurement specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties are employed and both LASSO and SCAD are discussed specifically. We unveil the question under which dimensionality and correlation restrictions can an oracle estimator be constructed and grasped. It is demonstrated that non-concave penalties lead to significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. The large deviation result for martingales, bearing interests of its own, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator, is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples.

  17. Exploring the Cost and Functionality of MEDCOM Web Services

    DTIC Science & Technology

    2005-10-24

    Software Name 24. What backend database software supports your intranet/Internet content? (check all that apply): Oracle; Microsoft SQL Server...Department of Defense (DoD) service branches, which funded and deployed an Internet portal, TRICARE Online, to serve as an information conduit between the...public website, the information contained on the intranet is traditionally limited to the members of the hosting command. The local information serves as

  18. Software Support Measurement and Estimating for Oracle Database Applications Using Mark II Function Points

    DTIC Science & Technology

    1992-12-01

    [Table-of-contents fragment: V.3.3 Coefficient of Determination; V.3.4 F-Ratio; V.3.5 ...] Instructions are defined as lines of code or card images. Thus, a line containing two or more source statements counts as one instruction; a...understand the productivity paradox, recall the concept of virtual machines. When a higher level machine groups together many instructions of a lower level

  19. NASA Interactive Forms Type Interface - NIFTI

    NASA Technical Reports Server (NTRS)

    Jain, Bobby; Morris, Bill

    2005-01-01

    A flexible database query, update, modify, and delete tool was developed that provides an easy interface to Oracle forms. This tool - the NASA interactive forms type interface, or NIFTI - features on-the-fly forms creation, forms sharing among users, the capability to query the database from user-entered criteria on forms, traversal of query results, an ability to generate tab-delimited reports, viewing and downloading of reports to the user's workstation, and a hypertext-based help system. NIFTI is a very powerful ad hoc query tool that was developed using C++ and X-Windows with a Motif application framework. A unique tool, NIFTI's capabilities appear in no other known commercial-off-the-shelf (COTS) tool, because NIFTI, which can be launched from the user's desktop, is a simple yet very powerful tool with a highly intuitive, easy-to-use graphical user interface (GUI) that will expedite the creation of database query/update forms. NIFTI, therefore, can be used in NASA's International Space Station (ISS) as well as within government and industry - indeed by all users of the widely disseminated Oracle base. And it will provide significant cost savings in the areas of user training and scalability while advancing the art over current COTS browsers. No COTS browser performs all the functions NIFTI does, and NIFTI is easier to use. NIFTI's cost savings are very significant considering the very large database with which it is used and the large user community with varying data requirements it will support. Its ease of use means that personnel unfamiliar with databases (e.g., managers, supervisors, clerks, and others) can develop their own personal reports. For NASA, a tool such as NIFTI was needed to query, update, modify, and make deletions within the ISS vehicle master database (VMDB), a repository of engineering data that includes an indentured parts list and associated resource data (power, thermal, volume, weight, and the like). Since the VMDB is used both as a collection point for data and as a common repository for engineering, integration, and operations teams, a tool such as NIFTI had to be designed that could expedite the creation of database query/update forms which could then be shared among users.

  20. Stationary states in quantum walk search

    NASA Astrophysics Data System (ADS)

    Prūsis, Krišjānis; Vihrovs, Jevgēnijs; Wong, Thomas G.

    2016-09-01

    When classically searching a database, having additional correct answers makes the search easier. For a discrete-time quantum walk searching a graph for a marked vertex, however, additional marked vertices can make the search harder by causing the system to approximately begin in a stationary state, so the system fails to evolve. In this paper, we completely characterize the stationary states, or 1-eigenvectors, of the quantum walk search operator for general graphs and configurations of marked vertices by decomposing their amplitudes into uniform and flip states. This infinitely expands the number of known stationary states and gives an optimization procedure to find the stationary state closest to the initial uniform state of the walk. We further prove theorems on the existence of stationary states, with them conditionally existing if the marked vertices form a bipartite connected component and always existing if nonbipartite. These results utilize the standard oracle in Grover's algorithm, but we show that a different type of oracle prevents stationary states from interfering with the search algorithm.

  1. ADAMS: AIRLAB data management system user's guide

    NASA Technical Reports Server (NTRS)

    Conrad, C. L.; Ingogly, W. F.; Lauterbach, L. A.

    1986-01-01

    The AIRLAB Data Management System (ADAMS) is an online environment that supports research at NASA's AIRLAB. ADAMS provides an easy to use interactive interface that eases the task of documenting and managing information about experiments and improves communication among project members. Data managed by ADAMS includes information about experiments, data sets produced, software and hardware available in AIRLAB as well as that used in a particular experiment, and an on-line engineer's notebook. The User's Guide provides an overview of the ADAMS system as well as details of the operations available within ADAMS. A tutorial section takes the user step-by-step through a typical ADAMS session. ADAMS runs under the VAX/VMS operating system and uses the ORACLE database management system and DEC/FMS (the Forms Management System). ADAMS can be run from any VAX connected via DECnet to the ORACLE host VAX. The ADAMS system is designed for simplicity, so interactions within the underlying data management system and communications network are hidden from the user.

  2. 76 FR 354 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-04

    ..., January 11, 2011. Docket Numbers: ER11-2436-000. Applicants: Oracle Energy Services, LLC. Description: Oracle Energy Services, LLC submits tariff filing per 35.12: Oracle Energy Services, LLC Initial Market...

  3. Data Processing on Database Management Systems with Fuzzy Query

    NASA Astrophysics Data System (ADS)

    Şimşek, Irfan; Topuz, Vedat

    In this study, a fuzzy query tool (SQLf) for non-fuzzy database management systems was developed. In addition, sample fuzzy queries were made using real data with the tool developed in this study. Performance of SQLf was tested with data about the Marmara University students' food grant. The food grant data were collected in a MySQL database by means of a form filled in on the web. The students filled in the form to describe their social and economic conditions for the food grant request. This form consists of questions with fuzzy and crisp answers. The main purpose of the fuzzy query is to determine the students who deserve the grant. SQLf easily found the eligible students for the grant through predefined fuzzy values. The fuzzy query tool (SQLf) could also be used easily with other database systems such as ORACLE and SQL Server.
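
    As an illustration of the kind of fuzzy predicate such a tool evaluates on top of crisp relational data, the sketch below (plain Java, with invented thresholds; it is not the SQLf implementation) computes a trapezoidal-style membership degree for a "low family income" criterion.

        // Membership is 1 below 'low', 0 above 'high', and decreases linearly in between.
        public class FuzzyIncome {
            static double lowIncomeDegree(double income, double low, double high) {
                if (income <= low)  return 1.0;
                if (income >= high) return 0.0;
                return (high - income) / (high - low);
            }

            public static void main(String[] args) {
                double[] incomes = {400, 900, 1600};       // hypothetical monthly family incomes
                for (double i : incomes) {
                    System.out.printf("income %.0f -> low-income degree %.2f%n",
                                      i, lowIncomeDegree(i, 500, 1500));
                }
            }
        }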

  4. Software Assurance Measurement -- State of the Practice

    DTIC Science & Technology

    2013-11-01

    quality and productivity. 30+ languages: C/C++, Java, .NET, Oracle, PeopleSoft, SAP, Siebel, Spring, Struts, Hibernate, and all major databases. ChecKing...[index fragment listing language-specific sections: .NET, ActionScript, Ada, C/C++, Java, JavaScript, Objective-C, Opa, Packages, Perl, PHP, Python] Formal Methods...Suite—A tool for Ada, C, C++, C#, and Java code that comprises various analyses such as architecture checking, interface analyses, and clone detection

  5. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
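
    The one-step local linear approximation referred to above can be written out explicitly (standard form; the notation is generic rather than copied from the paper): the folded-concave penalty is replaced by its tangent at the initial estimate, turning the update into a weighted l1 problem.

        % LLA step: beta^(0) is an initial estimate, l_n the relevant loss
        % (negative log-likelihood, least squares, check loss, ...).
        \hat{\beta}^{(1)} \;=\; \arg\min_{\beta}\;
            \Big\{\, \ell_n(\beta)
              \;+\; \sum_{j=1}^{p} p_{\lambda}'\!\big(|\hat{\beta}^{(0)}_j|\big)\,|\beta_j| \Big\}
        % If the problem is localizable and the oracle estimator is well behaved, this
        % one-step solution coincides with the oracle estimator, and a further LLA
        % iteration reproduces the same solution.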

  6. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-01-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression. PMID:25598560

  7. [The historical materials of stomatology in the oracle bone inscriptions of the Yin-Shang Dynasties].

    PubMed

    Li, Xiaojun; Zhu, Lang

    2015-07-01

    Some oracle bone inscriptions of the Yin-Shang Dynasties were related to the stomatology, including special terms of diseases of the mouth, tongue and teeth which were classified, and proper nouns of some special diseases. Moreover, witch doctors' exploration for the causes of oral diseases, the observation on different stages of oral diseases, and the records of oral disease treatment were also involved. All of these reflected the sprouting stage of stomatology in the Yin-Shang Dynasties in ancient China.

  8. PylotDB - A Database Management, Graphing, and Analysis Tool Written in Python

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnette, Daniel W.

    2012-01-04

    PylotDB, written completely in Python, provides a user interface (UI) with which to interact with, analyze, graph data from, and manage open source databases such as MySQL. The UI spares the user from needing in-depth knowledge of the database application programming interface (API). PylotDB allows the user to generate various kinds of plots from user-selected data; generate statistical information on text as well as numerical fields; back up and restore databases; compare database tables across different databases as well as across different servers; extract information from any field to create new fields; generate, edit, and delete databases, tables, and fields; generate CSV data from a table or read CSV data into a table; and perform similar operations. Since much of the database information is brought under control of the Python computer language, PylotDB is not intended for huge databases for which MySQL and Oracle, for example, are better suited. PylotDB is better suited for smaller databases that might be typically needed in a small research group situation. PylotDB can also be used as a learning tool for database applications in general.

  9. CRYSTMET—The NRCC Metals Crystallographic Data File

    PubMed Central

    Wood, Gordon H.; Rodgers, John R.; Gough, S. Roger; Villars, Pierre

    1996-01-01

    CRYSTMET is a computer-readable database of critically evaluated crystallographic data for metals (including alloys, intermetallics and minerals) accompanied by pertinent chemical, physical and bibliographic information. It currently contains about 60 000 entries and covers the literature exhaustively from 1913. Scientific editing of the abstracted entries, consisting of numerous automated and manual checks, is done to ensure consistency with related, previously published studies, to assign structure types where necessary and to help guarantee the accuracy of the data and related information. Analyses of the entries and their distribution across key journals as a function of time show interesting trends in the complexity of the compounds studied as well as in the elements they contain. Two applications of CRYSTMET are the identification of unknowns and the prediction of properties of materials. CRYSTMET is available either online or via license of a private copy from the Canadian Scientific Numeric Database Service (CAN/SND). The indexed online search and analysis system is easy and economical to use yet fast and powerful. Development of a new system is under way combining the capabilities of ORACLE with the flexibility of a modern interface based on the Netscape browsing tool. PMID:27805157

  10. CheD: chemical database compilation tool, Internet server, and client for SQL servers.

    PubMed

    Trepalin, S V; Yarkov, A V

    2001-01-01

    An efficient program, which runs on a personal computer, for the storage, retrieval, and processing of chemical information is presented. The program can work either as a stand-alone application or in conjunction with a specifically written Web server application or with some standard SQL servers, e.g., Oracle, Interbase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features an easy and user-friendly installation on Windows NT or 95 platforms.

  11. Multimedia explorer: image database, image proxy-server and search-engine.

    PubMed Central

    Frankewitsch, T.; Prokosch, U.

    1999-01-01

    Multimedia plays a major role in medicine. Databases containing images, movies or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. Secondly, the tools for searching databases are usually not adapted to the properties of images. HTML pages do not allow complex searches. Therefore establishing a more comfortable retrieval involves the use of a higher programming level like JAVA. With this platform-independent language it is possible to create extensions to commonly used web browsers. These applets offer a graphical user interface for high level navigation. We implemented a database using JAVA objects as the primary storage container which are then stored by a JAVA-controlled ORACLE8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach multimedia objects can be encapsulated within a logical module for quick data retrieval. PMID:10566463

  12. Multimedia explorer: image database, image proxy-server and search-engine.

    PubMed

    Frankewitsch, T; Prokosch, U

    1999-01-01

    Multimedia plays a major role in medicine. Databases containing images, movies or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. Secondly, the tools for searching databases are usually not adapted to the properties of images. HTML pages do not allow complex searches. Therefore establishing a more comfortable retrieval involves the use of a higher programming level like JAVA. With this platform-independent language it is possible to create extensions to commonly used web browsers. These applets offer a graphical user interface for high level navigation. We implemented a database using JAVA objects as the primary storage container which are then stored by a JAVA-controlled ORACLE8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach multimedia objects can be encapsulated within a logical module for quick data retrieval.

  13. Efficiently Distributing Component-Based Applications Across Wide-Area Environments

    DTIC Science & Technology

    2002-01-01

    Oracle 8.1.7 Enterprise Edition), each running on a dedicated 1GHz dual-processor Pentium III workstation. For the RUBiS tests, we used a MySQL 4.0.12...a variety of sophisticated network-accessible services such as e-mail, banking, on-line shopping, entertainment, and serving as a data exchange...[table fragment — beans: Catalog, handles read-only queries to the product database; Customer, serves as a façade to the Order and Account stateful session beans; ShoppingCart]

  14. Ground-water conditions between Oracle and Oracle Junction, Pinal County, Arizona

    USGS Publications Warehouse

    Heindl, L.A.

    1955-01-01

    The development of the San Manuel copper prospect has greatly increased traffic along State Highway 77. Considerable interest in commercial possibilities along that road has resulted in a request by the Arizona State Land Department for information about the ground-water conditions between Oracle and Oracle Junction. This request came too late for information to be included in a recently completed memorandum report on the occurrence of ground water in the vicinity of Oracle, released in February 1955. These data are presented as a supplement to that report to minimize duplication of statements about the general geologic and hydrologic conditions. The necessary well data and sample descriptions that were not included in the Oracle report are shown in tables 3 and 4. The area discussed in this supplement comprises parts of Tps. 9 and 10 S., Rs. 13, 14, and 15 E., and includes about 90 square miles (fig. 3). The eastern portion overlaps part of the area covered by the earlier report.

  15. Accessing and distributing EMBL data using CORBA (common object request broker architecture).

    PubMed

    Wang, L; Rodriguez-Tomé, P; Redaschi, N; McNeil, P; Robinson, A; Lijnzaad, P

    2000-01-01

    The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by PersistenceTM, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems.
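
    For orientation, a client of such a CORBA service would typically bootstrap along the following lines. This is a generic org.omg.CORBA sketch (for a pre-Java-11 JDK or a standalone ORB); the published name "EmblEntryServer" is invented, and the actual interfaces are those generated from the EBI IDL, which are only hinted at here.

        // Generic CORBA client bootstrap: initialize the ORB, locate the naming
        // service, and resolve a server object by name.
        import org.omg.CORBA.ORB;
        import org.omg.CosNaming.NamingContextExt;
        import org.omg.CosNaming.NamingContextExtHelper;

        public class EmblCorbaClient {
            public static void main(String[] args) throws Exception {
                ORB orb = ORB.init(args, null);   // e.g. pass -ORBInitialHost / -ORBInitialPort
                NamingContextExt naming = NamingContextExtHelper.narrow(
                        orb.resolve_initial_references("NameService"));
                // Look up the server by its published name (name is illustrative), then
                // narrow it to the interface generated from the EMBL IDL (omitted here).
                org.omg.CORBA.Object obj = naming.resolve_str("EmblEntryServer");
                System.out.println("resolved object reference: " + obj);
            }
        }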

  16. Accessing and distributing EMBL data using CORBA (common object request broker architecture)

    PubMed Central

    Wang, Lichun; Rodriguez-Tomé, Patricia; Redaschi, Nicole; McNeil, Phil; Robinson, Alan; Lijnzaad, Philip

    2000-01-01

    Background: The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. Results: A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by PersistenceTM, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. Conclusions: The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems. PMID:11178259

  17. Operational Experience with the Frontier System in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter

    2012-06-20

    The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
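
    A minimal sketch of the client side of this pattern: conditions queries go out as plain HTTP through a nearby Squid cache rather than straight to the central server. The URL, proxy host, and cache lifetime below are placeholders, not the actual CMS configuration.

        // Fetch a conditions payload over HTTP via a caching proxy (Squid-style setup).
        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.InetSocketAddress;
        import java.net.Proxy;
        import java.net.URL;

        public class FrontierStyleClient {
            public static void main(String[] args) throws Exception {
                Proxy squid = new Proxy(Proxy.Type.HTTP,
                        new InetSocketAddress("squid.site.example", 3128));
                URL url = new URL("http://frontier.example.org/query?payload=conditions");
                HttpURLConnection con = (HttpURLConnection) url.openConnection(squid);
                con.setRequestProperty("Cache-Control", "max-age=600"); // let the cache serve repeats
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(con.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line);
                    }
                }
            }
        }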

  18. The research and implementation of coalfield spontaneous combustion of carbon emission WebGIS based on Silverlight and ArcGIS server

    NASA Astrophysics Data System (ADS)

    Zhu, Z.; Bi, J.; Wang, X.; Zhu, W.

    2014-02-01

    As an important sub-topic of the construction of a public information platform for carbon emission data from natural processes, a WebGIS system for carbon emissions from coalfield spontaneous combustion has become an important object of study. Considering the characteristics of coalfield spontaneous combustion carbon emission data (a wide range of rich and complex data) and its geospatial nature, the data are divided into attribute data and spatial data. Based on a full analysis of the data, a detailed design of the Oracle database was completed and the data were stored in Oracle. Through Silverlight rich-client technology and extended WCF services, dynamic web query, retrieval, statistics, and analysis functions were implemented for the attribute data. For spatial data, the ArcGIS Server and Silverlight-based API are used to invoke map services, GP services, image services, and other services published by the GIS server in the background, implementing the display, analysis, and thematic mapping of coalfield spontaneous combustion remote sensing images and web map data. The study found that Silverlight rich-client technology, combined with an object-oriented WCF service framework, can be used to construct a WebGIS system efficiently, and that combining it with the ArcGIS Silverlight API to support interactive queries of the attribute and spatial data of coalfield spontaneous combustion emissions can greatly improve the performance of the WebGIS system. At the same time, it provides a strong guarantee for the construction of public information on China's carbon emission data.

  19. Operational Experience with the Frontier System in CMS

    NASA Astrophysics Data System (ADS)

    Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter; Du, Ran; Wang, Weizhen

    2012-12-01

    The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.

  20. 75 FR 82381 - Oracle Energy Services, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-30

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER11-2436-000] Oracle Energy Services, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... proceeding of Oracle Energy Services, LLC's application for market-based rate authority, with an accompanying...

  1. Oracle, a novel PDZ-LIM domain protein expressed in heart and skeletal muscle.

    PubMed

    Passier, R; Richardson, J A; Olson, E N

    2000-04-01

    In order to identify novel genes enriched in adult heart, we performed a subtractive hybridization for genes expressed in mouse heart but not in skeletal muscle. We identified two alternative splicing variants of a novel PDZ-LIM domain protein, which we named Oracle. Both variants contain a PDZ domain at the amino-terminus and three LIM domains at the carboxy-terminus. Highest homology of Oracle was found with the human and rat enigma proteins in the PDZ domain (62 and 61%, respectively) and in the LIM domains (60 and 69%, respectively). By Northern hybridization analysis, we showed that expression is highest in adult mouse heart, low in skeletal muscle and undetectable in other adult mouse tissues. In situ hybridization in mouse embryos confirmed and extended these data by showing high expression of Oracle mRNA in atrial and ventricular myocardial cells from E8.5. From E9.5 low expression of Oracle mRNA was detectable in myotomes. These data suggest a role for Oracle in the early development and function of heart and skeletal muscle.

  2. An overview on integrated data system for archiving and sharing marine geology and geophysical data in Korea Institute of Ocean Science & Technology (KIOST)

    NASA Astrophysics Data System (ADS)

    Choi, Sang-Hwa; Kim, Sung Dae; Park, Hyuk Min; Lee, SeungHa

    2016-04-01

    We established and operate an integrated data system for managing, archiving, and sharing marine geology and geophysical data around Korea produced by various research projects and programs at the Korea Institute of Ocean Science & Technology (KIOST). First of all, to keep the data system consistent under continuous data updates, we set up standard operating procedures (SOPs) for data archiving, data processing and conversion, data quality control, data uploading, DB maintenance, etc. The system comprises two databases, an ARCHIVE DB and a GIS DB. The ARCHIVE DB stores archived data in the original forms and formats received from data providers, while the GIS DB manages all other compiled, processed, and reproduced data and information for data services and GIS application services. The relational database management system Oracle 11g was adopted as the DBMS, and open-source GIS techniques were applied for GIS services, such as OpenLayers for the user interface, GeoServer for the application server, and PostGIS with PostgreSQL for the GIS database. For convenient use of geophysical data in SEG-Y format, a viewer program was developed and embedded in this system. Users can search data through the GIS user interface and save the results as a report.

  3. NYC Reservoirs Watershed Areas (HUC 12)

    EPA Pesticide Factsheets

    This NYC Reservoirs Watershed Areas (HUC 12) GIS layer was derived from the 12-Digit National Watershed Boundary Database (WBD) at 1:24,000 for EPA Region 2 and Surrounding States. HUC 12 polygons were selected from the source based on interactively comparing these HUC 12s in our GIS with images of the New York City's Water Supply System Map found at http://www.nyc.gov/html/dep/html/drinking_water/wsmaps_wide.shtml. The 12 digit Hydrologic Units (HUCs) for EPA Region 2 and surrounding states (Northeastern states, parts of the Great Lakes, Puerto Rico and the USVI) are a subset of the National Watershed Boundary Database (WBD), downloaded from the Natural Resources Conservation Service (NRCS) Geospatial Gateway and imported into the EPA Region 2 Oracle/SDE database. This layer reflects 2009 updates to the WBD that included new boundary data for New York and New Jersey.

  4. LHCb experience with LFC replication

    NASA Astrophysics Data System (ADS)

    Bonifazi, F.; Carbone, A.; Perez, E. D.; D'Apice, A.; dell'Agnello, L.; Duellmann, D.; Girone, M.; Re, G. L.; Martelli, B.; Peco, G.; Ricci, P. P.; Sapunenko, V.; Vagnoni, V.; Vitlacil, D.

    2008-07-01

    Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements.

  5. Aerosol-Radiation-Cloud Interactions in the South-East Atlantic: Results from the ORACLES-2016 Deployment and a First Look at ORACLES-2017 and Beyond

    NASA Technical Reports Server (NTRS)

    Redemann, Jens; Wood, R.; Zuidema, P.

    2018-01-01

    Seasonal biomass burning (BB) in Southern Africa during the Southern hemisphere spring produces almost a third of the Earth's BB aerosol particles. These particles are lofted into the mid-troposphere and transported westward over the South-East (SE) Atlantic, where they interact with one of the three semi-permanent subtropical stratocumulus (Sc) cloud decks in the world. These interactions include adjustments to aerosol-induced solar heating and microphysical effects. The representation of these interactions in climate models remains highly uncertain, because of the scarcity of observational constraints on both, the aerosol and cloud properties, and the governing physical processes. The first deployment of the NASA P-3 and ER-2 aircraft in the ORACLES (ObseRvations of Aerosols Above Clouds and Their IntEractionS) project in August/September of 2016 has started to fill this observational gap by providing an unprecedented look at the SE Atlantic cloud-aerosol system. We provide an overview of the first deployment, highlighting aerosol absorptive and cloud-nucleating properties, their vertical distribution relative to clouds, the locations and degree of aerosol mixing into clouds, cloud changes in response to such mixing, and cloud top stability relationships to the aerosol. We also expect to describe preliminary results of the second ORACLES deployment from Sao Tome and Principe in August 2017. We will make an initial assessment of the differences and similarities of the BB plume and cloud properties as observed from a deployment site near the plume's northern edge. We will conclude with an outlook for the third ORACLES deployment in October 2018.

  6. Development and validation of the ORACLE score to predict risk of osteoporosis.

    PubMed

    Richy, Florent; Deceulaer, Fréderic; Ethgen, Olivier; Bruyère, Olivier; Reginster, Jean-Yves

    2004-11-01

    To develop and validate a composite index, the Osteoporosis Risk Assessment by Composite Linear Estimate (ORACLE), that includes risk factors and ultrasonometric outcomes to screen for osteoporosis. Two cohorts of postmenopausal women aged 45 years and older participated in the development (n = 407) and the validation (n = 202) of ORACLE. Their bone mineral density was determined by dual energy x-ray absorptiometry and quantitative ultrasonometry (QUS), and their historical and clinical risk factors were assessed (January to June 2003). Logistic regression analysis was used to select significant predictors of bone mineral density, whereas receiver operating characteristic (ROC) analysis was used to assess the discriminatory performance of ORACLE. The final logistic regression model retained 4 biometric or historical variables and 1 ultrasonometric outcome. The ROC areas under the curves (AUCs) for ORACLE were 84% for the prediction of osteoporosis and 78% for low bone mass. A sensitivity of 90% corresponded to a specificity of 50% for identification of women at risk of developing osteoporosis. The corresponding positive and negative predictive values were 86% and 54%, respectively, in the development cohort. In the validation cohort, the AUCs for identification of osteoporosis and low bone mass were 81% and 76% for ORACLE, 69% and 64% for QUS T score, 71% and 68% for QUS ultrasonometric bone profile index, and 76% and 75% for Osteoporosis Self-assessment Tool, respectively. ORACLE had the best discriminatory performance in identifying osteoporosis compared with the other approaches (P < .05). ORACLE exhibited the highest discriminatory properties compared with ultrasonography alone or other previously validated risk indices. It may be helpful to enhance the predictive value of QUS.

  7. Clinical engineering department strategic graphical dashboard to enhance maintenance planning and asset management.

    PubMed

    Sloane, Elliot; Rosow, Eric; Adam, Joe; Shine, Dave

    2005-01-01

    The Clinical Engineering (a.k.a. Biomedical Engineering) Department has heretofore lagged in adoption of some of the leading-edge information system tools used in other industries. This present application is part of a DOD-funded SBIR grant to improve the overall management of medical technology, and describes the capabilities that Strategic Graphical Dashboards (SGDs) can afford. This SGD is built on top of an Oracle database, and uses custom-written graphic objects like gauges, fuel tanks, and Geographic Information System (GIS) maps to improve and accelerate decision making.

  8. MitBASE : a comprehensive and integrated mitochondrial DNA database. The present status

    PubMed Central

    Attimonelli, M.; Altamura, N.; Benne, R.; Brennicke, A.; Cooper, J. M.; D’Elia, D.; Montalvo, A. de; Pinto, B. de; De Robertis, M.; Golik, P.; Knoop, V.; Lanave, C.; Lazowska, J.; Licciulli, F.; Malladi, B. S.; Memeo, F.; Monnerot, M.; Pasimeni, R.; Pilbout, S.; Schapira, A. H. V.; Sloof, P.; Saccone, C.

    2000-01-01

    MitBASE is an integrated and comprehensive database of mitochondrial DNA data which collects, under a single interface, databases for Plant, Vertebrate, Invertebrate, Human, Protist and Fungal mtDNA and a Pilot database on nuclear genes involved in mitochondrial biogenesis in Saccharomyces cerevisiae. MitBASE reports all available information from different organisms and from intraspecies variants and mutants. Data have been drawn from the primary databases and from the literature; value adding information has been structured, e.g., editing information on protist mtDNA genomes, pathological information for human mtDNA variants, etc. The different databases, some of which are structured using commercial packages (Microsoft Access, File Maker Pro) while others use a flat-file format, have been integrated under ORACLE. Ad hoc retrieval systems have been devised for some of the above listed databases keeping into account their peculiarities. The database is resident at the EBI and is available at the following site: http://www3.ebi.ac.uk/Research/Mitbase/mitbase.pl . The impact of this project is intended for both basic and applied research. The study of mitochondrial genetic diseases and mitochondrial DNA intraspecies diversity are key topics in several biotechnological fields. The database has been funded within the EU Biotechnology programme. PMID:10592207

  9. Tomcat, Oracle & XML Web Archive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cothren, D. C.

    2008-01-01

    The TOX (Tomcat Oracle & XML) web archive is a foundation for development of HTTP-based applications using Tomcat (or some other servlet container) and an Oracle RDBMS. Use of TOX requires coding primarily in PL/SQL, JavaScript, and XSLT, but also in HTML, CSS and potentially Java. Coded in Java and PL/SQL itself, TOX provides the foundation for more complex applications to be built.
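
    As a rough illustration of the division of labour described above (most logic in PL/SQL, a thin Java layer in the servlet container), the following hedged sketch shows a servlet that returns an XML payload. The class name and payload are invented, and the PL/SQL call that would normally produce the XML is elided.

        // Minimal servlet sketch: in a TOX-style setup the XML would come from a
        // PL/SQL call in the Oracle RDBMS; a constant stands in for that result here.
        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class ToxStyleServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                String xml = "<result><row id=\"1\">example</row></result>";
                resp.setContentType("text/xml");
                PrintWriter out = resp.getWriter();
                out.println(xml); // an XSLT layer could transform this before delivery
            }
        }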

  10. The mass remote sensing image data management based on Oracle InterMedia

    NASA Astrophysics Data System (ADS)

    Zhao, Xi'an; Shi, Shaowei

    2013-07-01

    With the development of remote sensing technology, more and more image data are being acquired, and how to apply and manage these massive image data safely and efficiently has become an urgent problem. According to the methods and characteristics of mass remote sensing image data management and application, this paper puts forward a new method that uses the Oracle Call Interface and Oracle InterMedia to store the image data, and then uses these components to implement the system's function modules. Finally, image data storage and management are successfully implemented with VC and the Oracle InterMedia component.

  11. Validation of a fully automated HER2 staining kit in breast cancer.

    PubMed

    Moelans, Cathy B; Kibbelaar, Robby E; van den Heuvel, Marius C; Castigliego, Domenico; de Weger, Roel A; van Diest, Paul J

    2010-01-01

    Testing for HER2 amplification and/or overexpression is currently routine practice to guide Herceptin therapy in invasive breast cancer. At present, HER2 status is most commonly assessed by immunohistochemistry (IHC). Standardization of HER2 IHC assays is of utmost clinical and economical importance. At present, HER2 IHC is most commonly performed with the HercepTest which contains a polyclonal antibody and applies a manual staining procedure. Analytical variability in HER2 IHC testing could be diminished by a fully automatic staining system with a monoclonal antibody. 219 invasive breast cancers were fully automatically stained with the monoclonal antibody-based Oracle HER2 Bond IHC kit and manually with the HercepTest. All cases were tested for amplification with chromogenic in situ hybridization (CISH). HercepTest yielded an overall sharper membrane staining, with less cytoplasmic and stromal background than Oracle in 17% of cases. Overall concordance between both IHC techniques was 89% (195/219) with a kappa value of 0.776 (95% CI 0.698-0.854), indicating a substantial agreement. Most (22/24) discrepancies between HercepTest and Oracle showed a weaker staining for Oracle. Thirteen of the 24 discrepant cases were high-level HER2 amplified by CISH, and in 12 of these HercepTest IHC better reflected gene amplification status. All the 13 HER2 amplified discrepant cases were at least 2+ by HercepTest, while 10/13 of these were at least 2+ for Oracle. Considering CISH as gold standard, sensitivity of HercepTest and Oracle was 91% and 83%, and specificity was 94% and 98%, respectively. Positive and negative predictive values for HercepTest and Oracle were 90% and 95% for HercepTest and 96% and 91% for Oracle, respectively. Fully-automated HER2 staining with the monoclonal antibody in the Oracle kit shows a high level of agreement with manual staining by the polyclonal antibody in the HercepTest. Although Oracle shows in general some more cytoplasmic staining and may be slightly less sensitive in picking up HER2 amplified cases, it shows a higher specificity and may be considered as an alternative method to evaluate the HER2 expression in breast cancer with potentially less analytical variability.

  12. Methods and implementation of a central biosample and data management in a three-centre clinical study.

    PubMed

    Angelow, Aniela; Schmidt, Matthias; Weitmann, Kerstin; Schwedler, Susanne; Vogt, Hannes; Havemann, Christoph; Hoffmann, Wolfgang

    2008-07-01

    In our report we describe the concept, strategies and implementation of a central biosample and data management (CSDM) system in the three-centre clinical study of the Transregional Collaborative Research Centre "Inflammatory Cardiomyopathy - Molecular Pathogenesis and Therapy" SFB/TR 19, Germany. Following the requirements of high system resource availability, data security, privacy protection and quality assurance, a web-based CSDM was developed based on Java 2 Enterprise Edition using an Oracle database. An efficient and reliable sample documentation system using bar code labelling, a partitioning storage algorithm and online documentation software was implemented. An online electronic case report form is used to acquire patient-related data. Strict rules for access to the online applications and secure connections are used to account for privacy protection and data security. Challenges for the implementation of the CSDM resided at the project, technical and organisational levels as well as at the staff level.

  13. A comparison of database systems for XML-type data.

    PubMed

    Risse, Judith E; Leunissen, Jack A M

    2010-01-01

    In the field of bioinformatics interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium sized datasets, however when the full Medline dataset is queried a large variation in query times is observed. There is not one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's LingPipe is a more lightweight, customizable and sufficiently fast solution. It does however require more initial configuration steps. For data with a changing XML structure Sedna and BaseX as native XML database systems or MySQL with an XML-type column are suitable.
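
    For readers unfamiliar with the storage option the comparison favours, the following is a minimal sketch (the table name and XPath are assumptions, not drawn from the paper) of an Oracle 11g XMLType column using binary XML storage:

        -- Store documents in the post-parse binary XML format rather than as text.
        CREATE TABLE medline_citation (
          doc XMLType
        ) XMLTYPE COLUMN doc STORE AS BINARY XML;

        -- Example query: count citations that contain an abstract element.
        SELECT COUNT(*)
        FROM   medline_citation
        WHERE  XMLExists('$d/MedlineCitation/Article/Abstract' PASSING doc AS "d");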

  14. Globe Teachers Guide and Photographic Data on the Web

    NASA Technical Reports Server (NTRS)

    Kowal, Dan

    2004-01-01

    The task of managing the GLOBE Online Teacher's Guide during this time period focused on transforming the technology behind the delivery system of this document. The web application transformed from a flat file retrieval system to a dynamic database access approach. The new methodology utilizes Java Server Pages (JSP) on the front end and an Oracle relational database on the back end. This new approach allows users of the web site, mainly teachers, to access content efficiently by grade level and/or by investigation or educational concept area. Moreover, teachers can gain easier access to data sheets and lab and field guides. The new online guide also included updated content for all GLOBE protocols. The GLOBE web management team was given documentation for maintaining the new application. Instructions for modifying the JSP templates and managing database content were included in this document. It was delivered to the team by the end of October, 2003. The National Geophysical Data Center (NGDC) continued to manage the school study site photos on the GLOBE website. A total of 333 study site photo images were added to the GLOBE database and posted on the web during this same time period for 64 schools. Documentation for processing study site photos was also delivered to the new GLOBE web management team. Lastly, assistance was provided in transferring reference applications such as the Cloud and LandSat quizzes and Earth Systems Online Poster from NGDC servers to GLOBE servers, along with documentation for maintaining these applications.

  15. IAC-1.5 - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Vos, R. G.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and a database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a database, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automating data transfer among analysis programs. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation modules are supplied for building and viewing models. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. 3) System dynamics - A DISCOS interface allows full use of this simulation program for either nonlinear time domain analysis or linear frequency domain analysis. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. 5) Graphics - The graphics packages PLOT and MOSAIC are included in IAC. PLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc., while MOSAIC generates color raster displays of either tabular or array type data. Either DI3000 or PLOT-10 graphics software is required for full graphics capability. IAC is available by license for a period of 10 years to approved licensees. The licensed program product includes one complete set of supporting documentation.
Additional copies of the documentation may be purchased separately. IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The basic central memory requirement is approximately 750KB. IAC includes the executive system, graphics modules, a database, general utilities, and the interfaces to all analysis and controls programs described above. Source code is provided for the control programs ORACLS, SAMSAN, NBOD2, and DISCOS. The following programs are also available from COSMIC as separate packages: NASTRAN, SINDA/SINFLO, TRASYS II, DISCOS, ORACLS, SAMSAN, NBOD2, and INCA. IAC was developed in 1985.

  16. PMAG: Relational Database Definition

    NASA Astrophysics Data System (ADS)

    Keizer, P.; Koppers, A.; Tauxe, L.; Constable, C.; Genevey, A.; Staudigel, H.; Helly, J.

    2002-12-01

    The Scripps Center for Physical and Chemical Earth References (PACER) was established to help create databases for reference data and make them available to the Earth science community. As part of these efforts PACER supports GERM, REM and PMAG and maintains multiple online databases under the http://earthref.org umbrella website. This website has been built on top of a relational database that allows for the archiving and electronic access to a great variety of data types and formats, permitting data queries using a wide range of metadata. These online databases are designed in Oracle 8.1.5 and they are maintained at the San Diego Supercomputer Center. They are directly available via http://earthref.org/databases/. A prototype of the PMAG relational database is now operational within the existing EarthRef.org framework under http://earthref.org/databases/PMAG/. As will be shown in our presentation, the PMAG design focuses on the general workflow that results in the determination of typical paleomagnetic analyses. This ensures that individual data points can be traced between the actual analysis and the specimen, sample, site, locality and expedition they belong to. These relations guarantee traceability of the data by distinguishing between original and derived data, where the actual (raw) measurements are performed on the specimen level, and data on the sample level and higher are then derived products in the database. These relations may also serve to recalculate site means when new data become available for that locality. The PMAG data records are extensively described in terms of metadata. These metadata are used when scientists search through this online database in order to view and download the data they need. They minimally include method descriptions for field sampling, laboratory techniques and statistical analyses. They also include selection criteria used during the interpretation of the data and, most importantly, critical information about the site location (latitude, longitude, elevation), geography (continent, country, region), geological setting (lithospheric plate or block, tectonic setting), geological age (age range, timescale name, stratigraphic position) and materials (rock type, classification, alteration state). Each data point and method description is also related to its peer-reviewed reference [citation ID] as archived in the EarthRef Reference Database (ERR). This guarantees direct traceability all the way to its original source, where the user can find the bibliography of each PMAG reference along with every abstract, data table, technical note and/or appendix that is available in digital form and that can be downloaded as PDF/JPEG images and Microsoft Excel/Word data files. This may help scientists and teachers in performing their research since they have easy access to all the scientific data. It also allows for checking potential errors during the digitization process. Please visit the PMAG website at http://earthref.org/PMAG/ for more information.
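
    The traceability chain described above can be pictured as a simple parent-child hierarchy. The sketch below is a deliberately simplified illustration; the real PMAG tables and column names differ:

        -- Sites aggregate samples, samples aggregate specimens, and raw
        -- measurements attach to specimens, so derived values stay traceable.
        CREATE TABLE site (
          site_id   NUMBER PRIMARY KEY,
          latitude  NUMBER(9,5),
          longitude NUMBER(9,5),
          elevation NUMBER
        );

        CREATE TABLE sample (
          sample_id NUMBER PRIMARY KEY,
          site_id   NUMBER NOT NULL REFERENCES site
        );

        CREATE TABLE specimen (
          specimen_id NUMBER PRIMARY KEY,
          sample_id   NUMBER NOT NULL REFERENCES sample
        );

        CREATE TABLE measurement (
          measurement_id NUMBER PRIMARY KEY,
          specimen_id    NUMBER NOT NULL REFERENCES specimen,
          method         VARCHAR2(80),
          value          NUMBER
        );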

  17. The Challenges of Creating a Real-Time Data Management System for TRU-Mixed Waste at the Advanced Mixed Waste Treatment Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paff, S. W; Doody, S.

    2003-02-25

    This paper discusses the challenges associated with creating a data management system for waste tracking at the Advanced Mixed Waste Treatment Plant (AMWTP) at the Idaho National Engineering and Environmental Laboratory (INEEL). The waste tracking system combines data from plant automation systems and decision points. The primary purpose of the system is to provide information to enable the plant operators and engineers to assess the risks associated with each container and determine the best method of treating it. It is also used to track the transuranic (TRU) waste containers as they move throughout the various processes at the plant. And finally, the goal of the system is to support paperless shipments of the waste to the Waste Isolation Pilot Plant (WIPP). This paper describes the approach, methodologies, the underlying design of the database, and the challenges of creating the Data Management System (DMS) prior to completion of design and construction of a major plant. The system was built utilizing an Oracle database platform, and Oracle Forms 6i in client-server mode. The underlying data architecture is container-centric, with separate tables and objects for each type of analysis used to characterize the waste, including real-time radiography (RTR), non-destructive assay (NDA), head-space gas sampling and analysis (HSGS), visual examination (VE) and coring. The use of separate tables facilitated the construction of automatic interfaces with the analysis instruments that enabled direct data capture. Movements are tracked using a location system describing each waste container's current location and a history table tracking the container's movement history. The movement system is designed to interface both with radio-frequency bar-code devices and the plant's integrated control system (ICS). Collections of containers or information, such as batches, were created across the various types of analyses, which enabled a single, cohesive approach to be developed for verification and validation activities. The DMS includes general system functions, including task lists, electronic signature, non-conformance reports and message systems, that cut vertically across the remaining subsystems. Oracle's security features were utilized to ensure that only authorized users were allowed to log in, and to restrict access to system functionality according to user role.
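
    The container-centric location and history idea can be sketched as follows; the table and identifier names are invented for illustration and are not the AMWTP schema:

        CREATE TABLE container_location (
          container_id VARCHAR2(20) PRIMARY KEY,
          location_id  VARCHAR2(20) NOT NULL,
          moved_at     DATE         NOT NULL
        );

        CREATE TABLE container_move_history (
          container_id VARCHAR2(20) NOT NULL,
          from_loc     VARCHAR2(20),
          to_loc       VARCHAR2(20) NOT NULL,
          moved_at     DATE         NOT NULL
        );

        -- Recording a move updates the current location and appends to the history.
        UPDATE container_location
           SET location_id = 'RTR-01', moved_at = SYSDATE
         WHERE container_id = 'DRUM-000123';

        INSERT INTO container_move_history (container_id, from_loc, to_loc, moved_at)
        VALUES ('DRUM-000123', 'STORAGE-A', 'RTR-01', SYSDATE);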

  18. Schema for the LANL infrasound analysis tool, infrapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dannemann, Fransiska Kate; Marcillo, Omar Eduardo

    2017-04-14

    The purpose of this document is to define the schema used for the operation of the infrasound analysis tool, infrapy. The tables described by this document extend the CSS3.0 or KB core schema to include information required for the operation of infrapy. This document is divided into three sections, the first being this introduction. Section two defines eight new, infrasonic data processing-specific database tables. Both internal (ORACLE) and external formats for the attributes are defined, along with a short description of each attribute. Section three of the document shows the relationships between the different tables by using entity-relationship diagrams.
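
    As a purely illustrative example of what an infrasound-processing extension to a CSS3.0/KB-core style schema can look like (this is not the actual infrapy schema; every name below is an assumption), a detection table might be defined as:

        CREATE TABLE fd_detection (
          detid   NUMBER(18)   NOT NULL,  -- detection identifier
          sta     VARCHAR2(6)  NOT NULL,  -- station or array code, CSS3.0 style
          time    NUMBER(17,5) NOT NULL,  -- epoch time of the detection
          azimuth NUMBER(7,2),            -- back-azimuth estimate in degrees
          fstat   NUMBER(10,4),           -- detector F-statistic
          lddate  DATE DEFAULT SYSDATE,   -- load date, as in CSS3.0 tables
          CONSTRAINT fd_detection_pk PRIMARY KEY (detid)
        );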

  19. Remote online monitoring and measuring system for civil engineering structures

    NASA Astrophysics Data System (ADS)

    Kujawińska, Malgorzata; Sitnik, Robert; Dymny, Grzegorz; Karaszewski, Maciej; Michoński, Kuba; Krzesłowski, Jakub; Mularczyk, Krzysztof; Bolewicki, Paweł

    2009-06-01

    In this paper a distributed intelligent system for on-line measurement, remote monitoring, and data archiving of civil engineering structures is presented. The system consists of a set of optical, full-field displacement sensors connected to a controlling server. The server conducts measurements according to a list of scheduled tasks and stores the primary data or initial results in a remote centralized database. Simultaneously the server performs checks, ordered by the operator, which may in turn result in an alert or a specific action. The structure of the whole system is analyzed along with a discussion of possible fields of application and the ways to provide adequate security during data transport. Finally, a working implementation consisting of fringe projection, geometrical moiré, digital image correlation and grating interferometry sensors and an Oracle XE database is presented. Results from the database, used for on-line monitoring of a threshold value of strain for an exemplary area of interest on the engineering structure, are presented and discussed.
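
    A threshold check of the kind mentioned above could be expressed as a simple query against the measurement store; the table, columns and the strain limit below are assumptions made for illustration:

        -- Return recent strain readings that exceed an alert limit for one area.
        SELECT sensor_id, measured_at, strain_value
        FROM   strain_measurement
        WHERE  area_of_interest = 'AOI-7'
          AND  strain_value > 0.002
          AND  measured_at > SYSDATE - 1/24   -- within the last hour
        ORDER  BY measured_at;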

  20. Development of a paperless, Y2K compliant exposure tracking database at Los Alamos National Laboratory.

    PubMed

    Conwell, J L; Creek, K L; Pozzi, A R; Whyte, H M

    2001-02-01

    The Industrial Hygiene and Safety Group at Los Alamos National Laboratory (LANL) developed a database application known as IH DataView, which manages industrial hygiene monitoring data. IH DataView replaces a LANL legacy system, IHSD, that restricted user access to a single point of data entry, needed enhancements to support new operational requirements, and was not Year 2000 (Y2K) compliant. IH DataView features a comprehensive suite of data collection and tracking capabilities. Through the use of Oracle database management and application development tools, the system is Y2K compliant and Web enabled for easy deployment and user access via the Internet. System accessibility is particularly important because LANL operations are spread over 43 square miles, and industrial hygienists (IHs) located across the laboratory will use the system. IH DataView shows promise of being useful in the future because it eliminates these problems. It has a flexible architecture and sophisticated capability to collect, track, and analyze data in easy-to-use form.

  1. Global Combat Support System-Marine Corps Proof-of-Concept for Dashboard Analytics

    DTIC Science & Technology

    2014-12-01

    The core is modern, commercial-off-the-shelf enterprise resource planning (ERP) software (Oracle 11i e-Business Suite). GCSS-MC's design is focused...factor in the decision to implement this new software. GCSS-MC is the technology centerpiece of the Logistics Modernization (LogMod) Program...GCSS-MC is based on the implementation of Oracle e-Business Suite 11i as the core software package. This is the same infrastructure that Oracle

  2. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.

    PubMed

    Wang, Lan; Kim, Yongdai; Li, Runze

    2013-10-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.

  3. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION

    PubMed Central

    Wang, Lan; Kim, Yongdai; Li, Runze

    2014-01-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis. PMID:24948843

  4. State of the art techniques for preservation and reuse of hard copy electrocardiograms.

    PubMed

    Lobodzinski, Suave M; Teppner, Ulrich; Laks, Michael

    2003-01-01

    Baseline examinations and periodic reexaminations in longitudinal population studies, together with ongoing surveillance for morbidity and mortality, provide unique opportunities for seeking ways to enhance the value of electrocardiography (ECG) as an inexpensive and noninvasive tool for prognosis and diagnosis. We used a newly developed optical ECG waveform recognition (OEWR) technique capable of extracting raw waveform data from legacy hard copy ECG recordings. Hardcopy ECG recordings were scanned and processed by the OEWR algorithm. The extracted ECG datasets were formatted into a newly proposed, vendor-neutral ECG XML data format. An Oracle database was used as a repository for ECG records in XML format. The proposed technique for XML encapsulation of OEWR-processed hard copy records resulted in an efficient method for the inclusion of paper ECG records into research databases, thus providing for their preservation, reuse and accession.

  5. [The RUTA project (Registro UTIC Triveneto ANMCO). An e-network for the coronary care units for acute myocardial infarction].

    PubMed

    Di Chiara, Antonio; Zonzin, Pietro; Pavoni, Daisy; Fioretti, Paolo Maria

    2003-06-01

    In the era of evidence-based medicine, monitoring adherence to guidelines is fundamental in order to verify diagnostic and therapeutic processes. Electronic, paperless databases allow higher data quality, lower costs and timely analysis, with overall advantages over traditional surveys. The RUTA project (acronym of Triveneto Registry of ANMCO CCUs) was designed in 1999, aiming to create an electronic network among the coronary care units of a large Italian region for a permanent survey of patients admitted for acute myocardial infarction. Information ranges from the pre-hospital phase to discharge, including all relevant clinical and management variables. The database uses the Personal Oracle DBMS with PowerBuilder as the user interface, on the Windows platform. Anonymous data are sent to a central server.

  6. The CEBAF Element Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-03-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on-the-fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
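
    An "introspective" schema of this kind typically follows an entity-attribute-value pattern, in which new element types and properties become rows rather than new columns. The sketch below illustrates only that flavour; the real CED design (and its Workspace Manager versioning) is considerably richer:

        CREATE TABLE element_type (
          type_id   NUMBER PRIMARY KEY,
          type_name VARCHAR2(64) UNIQUE NOT NULL
        );

        CREATE TABLE element (
          element_id NUMBER PRIMARY KEY,
          type_id    NUMBER NOT NULL REFERENCES element_type,
          name       VARCHAR2(64) UNIQUE NOT NULL
        );

        CREATE TABLE property_def (
          property_id NUMBER PRIMARY KEY,
          type_id     NUMBER NOT NULL REFERENCES element_type,
          prop_name   VARCHAR2(64) NOT NULL
        );

        -- Adding a new element type or property is an INSERT, not an ALTER TABLE.
        CREATE TABLE property_value (
          element_id  NUMBER NOT NULL REFERENCES element,
          property_id NUMBER NOT NULL REFERENCES property_def,
          value       VARCHAR2(4000),
          PRIMARY KEY (element_id, property_id)
        );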

  7. A cognitive network for oracle-bone characters related to animals

    NASA Astrophysics Data System (ADS)

    Dress, Andreas; Grünewald, Stefan; Zeng, Zhenbing

    This paper is dedicated to HAO Bailin on the occasion of his eightieth birthday, the great scholar and very good friend who never tired of introducing us to the wonderful and complex intricacies of Chinese culture and history. In this paper, we present an analysis of oracle-bone characters for animals from a 'cognitive' point of view. After some general remarks on oracle-bone characters presented in Section 1 and a short outline of the paper in Section 2, we collect various oracle-bone characters for animals from published resources in Section 3. In the next section, we begin analysing a group of 60 ancient animal characters from www.zdic.net, a highly acclaimed internet dictionary of Chinese characters that is strictly based on historical sources, and introduce five categories of specific features regarding their (graphical) structure that will be used in Section 5 to associate corresponding feature vectors to these characters. In Section 6, these feature vectors will be used to investigate their dissimilarity in terms of a family of parameterised distance measures. And in the last section, we apply the SplitsTree method as encoded in the NeighbourNet algorithms to construct a corresponding family of dissimilarity-based networks with the intention of elucidating how the ancient Chinese might have perceived the 'animal world' in the late Bronze Age and to demonstrate that these pictographs reflect an intuitive understanding of this world and its inherent structure that predates its classification in the oldest surviving Chinese encyclopedia from approximately the 3rd century BC, the ErYa, as well as similar classification systems in the West by one to two millennia. We also present an English dictionary of 70 oracle-bone characters for animals in Appendix 1. In Appendix 2, we list various variants of animal characters that were published in the Jia Gu Wen Bian (A Complete Collection of Oracle Bone Characters, edited by the Institute of Archaeology of the Chinese Academy of Social Sciences and published by the Zhonghua Book Company in 1965). We recall the frequencies of the 521 most frequent oracle-bone characters in Appendix 3 as reported in [7, 8]. And in Appendix 4, we list the animals registered in the last five chapters of the ErYa.

  8. The effect of prepartum antibiotics on the type of neonatal bacteraemia: insights from the MRC ORACLE trials.

    PubMed

    Gilbert, R E; Pike, K; Kenyon, S L; Tarnow-Mordi, W; Taylor, D J

    2005-06-01

    We analysed the type of bacteraemia before discharge from Neonatal Intensive Care Units in babies born to women randomised to the MRC ORACLE Trials. There was no evidence for an effect of oral antibiotics given prior to delivery on bacteraemia due to Gram-negative bacteria or enterococci, but Group B streptococcal (GBS) bacteraemia was significantly reduced in women with preterm prelabour rupture of the membranes (1.58% to 0.55%, relative risk 0.34; 95% CI: 0.17-0.70). There was no detectable effect in women in spontaneous preterm labour with intact membranes, as the risk of GBS bacteraemia in their babies was very small regardless of treatment.

  9. AirMSPI ORACLES Cloud Droplet Data V001

    Atmospheric Science Data Center

    2018-05-05

    AirMSPI_ORACLES_Cloud_Droplet_Size_and_Cloud_Optical_Depth: L2 Derived Geophysical Parameters. Order via Earthdata Search. Parameters: Cloud Optical Depth, Cloud Droplet Effective Radius, Cloud Droplet ...

  10. Seeking an oracle: using the Delphi process to develop practice guidelines for the treatment of endometriosis with Chinese herbal medicine.

    PubMed

    Flower, Andrew; Lewith, George T; Little, Paul

    2007-11-01

    For most complementary and alternative medicine interventions, the absence of a high-quality evidence base to define good practice presents a serious problem for clinicians, educators, and researchers. The Delphi process may offer a pragmatic way to establish good practice guidelines until more rigorous forms of assessment can be undertaken. To use a modified Delphi to develop good practice guidelines for a feasibility study exploring the role of Chinese herbal medicine (CHM) in the treatment of endometriosis. To compare the outcomes from Delphi with data derived from a systematic review of the Chinese language database. An expert group was convened for a three-round Delphi that initially produced key statements relating to the CHM diagnosis and treatment of endometriosis (round 1) and then anonymously rated these on a 1-7 Likert scale (rounds 2 and 3). Statements with a median score of 5 and above were regarded as demonstrating positive group consensus. The differential diagnoses within Chinese Medicine and rating of the clinical value of individual herbs were then contrasted with comparable data from a review of Chinese language reports in the Chinese Biomedical Retrieval System (1978-2002), and China Academy of Traditional Chinese Medicine (1985-2002) databases and the Chinese TCM and magazine literature (1984-2004) databases. Consensus (good practice) guidelines for the CHM treatment of endometriosis relating to common diagnostic patterns, herb selection, dosage, and patient management were produced. The Delphi guidelines demonstrated a high degree of congruence with the information from the Chinese language databases. In the absence of rigorous evidence, Delphi offers a way to synthesize expert knowledge relating to diagnosis, patient management, and herbal selection in the treatment of endometriosis. The limitations of the expert group and the inability of Delphi to capture the subtle nuances of individualized clinical decision-making limit the usefulness of this approach.

  11. Web application for detailed real-time database transaction monitoring for CMS condition data

    NASA Astrophysics Data System (ADS)

    de Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio

    2012-12-01

    In the upcoming LHC era, databases have become an essential part of the experiments collecting data from the LHC, in order to safely store, and consistently retrieve, the large amount of data produced by different sources. In the CMS experiment at CERN, all this information is stored in Oracle databases, hosted on several servers both inside and outside the CERN network. In this scenario, the task of monitoring different databases is a crucial database administration issue, since different information may be required depending on different users' tasks such as data transfer, inspection, planning and security issues. We present here a web application based on a Python web framework and Python modules for data mining purposes. To customize the GUI we record traces of user interactions that are used to build use case models. In addition, the application detects errors in database transactions (for example, identifying mistakes made by a user, application failures, unexpected network shutdowns or Structured Query Language (SQL) statement errors) and provides warning messages from the different users' perspectives. Finally, in order to fulfil the requirements of the CMS experiment community, and to keep pace with new developments in Web client tools, our application was further developed and new features were deployed.
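
    As a hedged illustration (this is not the CMS application's code), the kind of per-account activity summary such a monitoring tool might pull from Oracle's V$SESSION view looks like:

        -- Summarise connected sessions by account, client machine and status.
        SELECT username, machine, status, COUNT(*) AS sessions
        FROM   v$session
        WHERE  username IS NOT NULL
        GROUP  BY username, machine, status
        ORDER  BY sessions DESC;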

  12. Assembling proteomics data as a prerequisite for the analysis of large scale experiments

    PubMed Central

    Schmidt, Frank; Schmid, Monika; Thiede, Bernd; Pleißner, Klaus-Peter; Böhme, Martina; Jungblut, Peter R

    2009-01-01

    Background Despite the complete determination of the genome sequence of a huge number of bacteria, their proteomes remain relatively poorly defined. Besides new methods to increase the number of identified proteins, new database applications are necessary to store and present results of large-scale proteomics experiments. Results In the present study, a database concept has been developed to address these issues and to offer complete information via a web interface. In our concept, the Oracle based data repository system SQL-LIMS plays the central role in the proteomics workflow and was applied to the proteomes of Mycobacterium tuberculosis, Helicobacter pylori, Salmonella typhimurium and protein complexes such as the 20S proteasome. Technical operations of our proteomics labs were used as the standard for SQL-LIMS template creation. By means of a Java based data parser, post-processed data of different approaches, such as LC/ESI-MS, MALDI-MS and 2-D gel electrophoresis (2-DE), were stored in SQL-LIMS. A minimum set of the proteomics data were transferred into our public 2D-PAGE database using a Java based interface (Data Transfer Tool) following the requirements of the PEDRo standardization. Furthermore, the stored proteomics data were extractable out of SQL-LIMS via XML. Conclusion The Oracle based data repository system SQL-LIMS played the central role in the proteomics workflow concept. Technical operations of our proteomics labs were used as standards for SQL-LIMS templates. Using a Java based parser, post-processed data of different approaches such as LC/ESI-MS, MALDI-MS, 1-DE and 2-DE were stored in SQL-LIMS. Thus, unique data formats of different instruments were unified and stored in SQL-LIMS tables. Moreover, a unique submission identifier allowed fast access to all experimental data. This was the main advantage compared to multi-software solutions, especially if personnel fluctuations are high. Moreover, large scale and high-throughput experiments must be managed in a comprehensive repository system such as SQL-LIMS, to query results in a systematic manner. On the other hand, these database systems are expensive and require at least one full-time administrator and a specialized lab manager. Moreover, the high technical dynamics in proteomics may cause problems in adjusting to new data formats. To summarize, SQL-LIMS met the requirements of proteomics data handling, especially in skilled processes such as gel electrophoresis or mass spectrometry, and fulfilled the PSI standardization criteria. The data transfer into a public domain via DTT facilitated validation of proteomics data. Additionally, evaluation of mass spectra by post-processing using MS-Screener improved the reliability of mass analysis and prevented storage of data junk. PMID:19166578

  13. WebDB Component Builder - Lessons Learned

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macedo, C.

    2000-02-15

    Oracle WebDB is the easiest way to produce web-enabled, lightweight and enterprise-centric applications. This concept from Oracle has tantalized our taste for simple web development by using a purely web based tool that lives nowhere else but in the database. The use of online wizards, templates, and query builders, which produce PL/SQL behind the curtains, can be used straight "out of the box" by both novice and seasoned developers. This presentation will introduce lessons learned by developing and deploying applications built using the WebDB Component Builder in conjunction with custom PL/SQL code to empower a hybrid application. There are two kinds of WebDB components: those that display data to end users via reporting, and those that let end users update data in the database via entry forms. The presentation will also discuss various methods within the Component Builder to enhance the applications pushed to the desktop. The demonstrated example is an application entitled HOME (Helping Others More Effectively) that was built to manage a yearly United Way Campaign effort. Our task was to build an end-to-end application which could manage approximately 900 non-profit agencies, an average of 4,100 individual contributions, and $1.2 million. Using WebDB, the shell of the application was put together in a matter of a few weeks. However, we did encounter some hurdles that WebDB, in its stage of infancy (v2.0), could not solve for us directly. Together with custom PL/SQL, WebDB's Component Builder became a powerful tool that enabled us to produce a very flexible hybrid application.
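
    A sense of the custom PL/SQL that can sit behind a WebDB component is given by the sketch below; the procedure and table are invented for illustration and rely only on the standard PL/SQL Web Toolkit's HTP package:

        -- Render a running campaign total as an HTML fragment.
        CREATE OR REPLACE PROCEDURE campaign_total AS
          v_total NUMBER;
        BEGIN
          SELECT NVL(SUM(amount), 0) INTO v_total FROM contribution;
          htp.p('<p>Total pledged so far: $' ||
                TO_CHAR(v_total, 'FM999,999,990.00') || '</p>');
        END campaign_total;
        /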

  14. An immunohistochemical and fluorescence in situ hybridization-based comparison between the Oracle HER2 Bond Immunohistochemical System, Dako HercepTest, and Vysis PathVysion HER2 FISH using both commercially validated and modified ASCO/CAP and United Kingdom HER2 IHC scoring guidelines.

    PubMed

    O'Grady, Anthony; Allen, David; Happerfield, Lisa; Johnson, Nicola; Provenzano, Elena; Pinder, Sarah E; Tee, Lilian; Gu, Mai; Kay, Elaine W

    2010-12-01

    Immunohistochemistry (IHC) is used as the frontline assay to determine HER2 status in invasive breast cancer patients. The aim of the study was to compare the performance of the Leica Oracle HER2 Bond IHC System (Oracle) with the currently most widely accepted Dako HercepTest (HercepTest), using both commercially validated and modified ASCO/CAP and UK HER2 IHC scoring guidelines. A total of 445 breast cancer samples from 3 international clinical HER2 referral centers were stained with the 2 test systems and scored in a blinded fashion by experienced pathologists. The overall agreement between the 2 tests in a 3×3 (negative, equivocal and positive) analysis shows a concordance of 86.7% and 86.3%, respectively, when analyzed using the commercially validated and the modified ASCO/CAP and UK HER2 IHC scoring guidelines. There is good concordance between the Oracle and the HercepTest. The advantages of a complete, fully automated test such as the Oracle include standardization of key analytical factors and improved turnaround time. The implementation of the modified ASCO/CAP and UK HER2 IHC scoring guidelines has minimal effect on either assay interpretation, showing that Oracle can be used as a methodology for accurately determining HER2 IHC status in formalin-fixed, paraffin-embedded breast cancer tissue.

  15. A Completely Blind Video Integrity Oracle.

    PubMed

    Mittal, Anish; Saad, Michele A; Bovik, Alan C

    2016-01-01

    Considerable progress has been made toward developing still-picture perceptual quality analyzers that do not require any reference picture and that are not trained on human opinion scores of distorted images. However, there do not yet exist any such completely blind video quality assessment (VQA) models. Here, we attempt to bridge this gap by developing a new VQA model called the video intrinsic integrity and distortion evaluation oracle (VIIDEO). The new model does not require the use of any additional information other than the video being quality evaluated. VIIDEO embodies models of intrinsic statistical regularities that are observed in natural videos, which are used to quantify disturbances introduced due to distortions. An algorithm derived from the VIIDEO model is thereby able to predict the quality of distorted videos without any external knowledge about the pristine source, anticipated distortions, or human judgments of video quality. Even with such a paucity of information, we are able to show that the VIIDEO algorithm performs much better than the legacy full-reference quality measure MSE on the LIVE VQA database and delivers performance comparable with a leading human-judgment-trained blind VQA model. We believe that the VIIDEO algorithm is a significant step toward making real-time monitoring of completely blind video quality possible.

  16. MRC ORACLE Children Study. Long term outcomes following prescription of antibiotics to pregnant women with either spontaneous preterm labour or preterm rupture of the membranes.

    PubMed

    Kenyon, Sara; Brocklehurst, Peter; Jones, David; Marlow, Neil; Salt, Alison; Taylor, David

    2008-04-24

    The Medical Research Council (MRC) ORACLE trial evaluated the use of co-amoxiclav 375 mg and/or erythromycin 250 mg in women presenting with preterm rupture of membranes (PROM, ORACLE I) or in spontaneous preterm labour (SPL, ORACLE II) using a factorial design. The results showed that for women with a singleton baby with PROM the prescription of erythromycin is associated with improvements in short term neonatal outcomes; although co-amoxiclav is associated with prolongation of pregnancy, a significantly higher rate of neonatal necrotising enterocolitis was found in these babies. Prescription of erythromycin is now established practice for women with PROM. For women with SPL, antibiotics demonstrated no improvements in short term neonatal outcomes and are not a recommended treatment. There is evidence that both these conditions are associated with subclinical infection, so perinatal antibiotic administration may reduce the risk of later disabilities, including cerebral palsy, although the risk may be increased through exposure to inflammatory cytokines, so assessment of longer term functional and educational outcomes is appropriate. The MRC ORACLE Children's Study will follow up UK children at age 7 years born to the 4809 women with PROM and the 4266 women with SPL enrolled in the earlier ORACLE trials. We will use a parental questionnaire including validated tools to assess disability and behaviour. We will collect the frequency of specific medical conditions: cerebral palsy, epilepsy, respiratory illness including asthma, diabetes, admission to hospital in the last year and other diseases, as reported by parents. National standard test results will be collected to assess educational attainment at Key Stage 1 for children in England. This study is designed to investigate whether or not peripartum antibiotics improve health and disability for children at 7 years of age. The ORACLE Trial and Children Study is registered in the Current Controlled Trials registry, ISRCTN 52995660.

  17. MRC ORACLE Children Study. Long term outcomes following prescription of antibiotics to pregnant women with either spontaneous preterm labour or preterm rupture of the membranes

    PubMed Central

    Kenyon, Sara; Brocklehurst, Peter; Jones, David; Marlow, Neil; Salt, Alison; Taylor, David

    2008-01-01

    Background The Medical Research Council (MRC) ORACLE trial evaluated the use of co-amoxiclav 375 mg and/or erythromycin 250 mg in women presenting with preterm rupture of membranes (PROM, ORACLE I) or in spontaneous preterm labour (SPL, ORACLE II) using a factorial design. The results showed that for women with a singleton baby with PROM the prescription of erythromycin is associated with improvements in short term neonatal outcomes; although co-amoxiclav is associated with prolongation of pregnancy, a significantly higher rate of neonatal necrotising enterocolitis was found in these babies. Prescription of erythromycin is now established practice for women with PROM. For women with SPL, antibiotics demonstrated no improvements in short term neonatal outcomes and are not a recommended treatment. There is evidence that both these conditions are associated with subclinical infection, so perinatal antibiotic administration may reduce the risk of later disabilities, including cerebral palsy, although the risk may be increased through exposure to inflammatory cytokines, so assessment of longer term functional and educational outcomes is appropriate. Methods The MRC ORACLE Children's Study will follow up UK children at age 7 years born to the 4809 women with PROM and the 4266 women with SPL enrolled in the earlier ORACLE trials. We will use a parental questionnaire including validated tools to assess disability and behaviour. We will collect the frequency of specific medical conditions: cerebral palsy, epilepsy, respiratory illness including asthma, diabetes, admission to hospital in the last year and other diseases, as reported by parents. National standard test results will be collected to assess educational attainment at Key Stage 1 for children in England. Discussion This study is designed to investigate whether or not peripartum antibiotics improve health and disability for children at 7 years of age. Trial registration The ORACLE Trial and Children Study is registered in the Current Controlled Trials registry, ISRCTN 52995660. PMID:18435833

  18. ESTminer: a Web interface for mining EST contig and cluster databases.

    PubMed

    Huang, Yecheng; Pumphrey, Janie; Gingle, Alan R

    2005-03-01

    ESTminer is a Web application and database schema for interactive mining of expressed sequence tag (EST) contig and cluster datasets. The Web interface contains a query frame that allows the selection of contigs/clusters with a specific cDNA library makeup or a threshold number of members. The results are displayed as color-coded tree nodes, where the color indicates the fractional size of each cDNA library component. The nodes are expandable, revealing library statistics as well as EST or contig members, with links to sequence data, GenBank records or user-configurable links. Also, the interface allows 'queries within queries', where the result set of a query is further filtered by the subsequent query. ESTminer is implemented in Java/JSP and the package, including MySQL and Oracle schema creation scripts, is available from http://cggc.agtec.uga.edu/Data/download.asp (contact: agingle@uga.edu).
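
    The core query behind such an interface is a grouped membership count. The following is a hedged sketch under an assumed schema (contig, contig_member and est tables), not ESTminer's actual SQL:

        -- Contigs with at least 10 member ESTs from one cDNA library.
        SELECT c.contig_id, COUNT(*) AS members
        FROM   contig c
        JOIN   contig_member m ON m.contig_id = c.contig_id
        JOIN   est e           ON e.est_id    = m.est_id
        WHERE  e.library_name = 'drought_stressed_leaf'
        GROUP  BY c.contig_id
        HAVING COUNT(*) >= 10;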

  19. National launch strategy vehicle data management system

    NASA Technical Reports Server (NTRS)

    Cordes, David

    1990-01-01

    The National Launch Strategy Vehicle Data Management System (NLS/VDMS) was developed as part of the 1990 NASA Summer Faculty Fellowship Program. The system was developed under the guidance of the Engineering Systems Branch of the Information Systems Office, and is intended for use within the Program Development Branch PD34. The NLS/VDMS is an on-line database system that permits the tracking of various launch vehicle configurations within the program development office. The system is designed to permit the definition of new launch vehicles, as well as the ability to display and edit existing launch vehicles. Vehicles can be grouped in logical architectures within the system. Reports generated from this package include vehicle data sheets, architecture data sheets, and vehicle flight rate reports. The topics covered include: (1) system overview; (2) initial system development; (3) the SuperCard hypermedia authoring system; (4) the ORACLE database; and (5) system evaluation.

  20. XRootD popularity on hadoop clusters

    NASA Astrophysics Data System (ADS)

    Meoni, Marco; Boccali, Tommaso; Magini, Nicolò; Menichetti, Luca; Giordano, Domenico; CMS Collaboration

    2017-10-01

    Performance data and metadata of the computing operations at the CMS experiment are collected through a distributed monitoring infrastructure, currently relying on a traditional Oracle database system. This paper shows how to harness Big Data architectures in order to improve the throughput and the efficiency of such monitoring. A large set of operational data - user activities, job submissions, resources, file transfers, site efficiencies, software releases, network traffic, machine logs - is being injected into a readily available Hadoop cluster via several data streamers. The collected metadata is further organized for running fast arbitrary queries; this offers the ability to test several MapReduce-based frameworks and measure the system speed-up when compared to the original database infrastructure. By leveraging a quality Hadoop data store and enabling an analytics framework on top, it is possible to design a mining platform to predict dataset popularity and discover patterns and correlations.

  1. Biological data warehousing system for identifying transcriptional regulatory sites from gene expressions of microarray data.

    PubMed

    Tsou, Ann-Ping; Sun, Yi-Ming; Liu, Chia-Lin; Huang, Hsien-Da; Horng, Jorng-Tzong; Tsai, Meng-Feng; Liu, Baw-Juine

    2006-07-01

    Identification of transcriptional regulatory sites plays an important role in the investigation of gene regulation. For this purpose, we designed and implemented a data warehouse to integrate multiple heterogeneous biological data sources with data types such as text file, XML, image, MySQL database model, and Oracle database model. The utility of the biological data warehouse in predicting transcriptional regulatory sites of coregulated genes was explored using a synexpression group derived from a microarray study. Both binding sites of known transcription factors and predicted over-represented (OR) oligonucleotides were demonstrated for the gene group. The potential biological roles of both known nucleotides and one OR nucleotide were demonstrated using bioassays. Therefore, the results from the wet-lab experiments reinforce the power and utility of the data warehouse as an approach to the genome-wide search for important transcription regulatory elements that are the key to many complex biological systems.

  2. Asynchronous data change notification between database server and accelerator controls system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, W.; Morris, J.; Nemesure, S.

    2011-10-10

    Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. Asynchronous data change notification (ADCN) between database server and clients can be realized by combining the use of a database trigger mechanism, which is supported by major DBMS systems, with server processes that use client/server software architectures that are familiar in the accelerator controls community (such as EPICS, CDEV or ADO). This approach makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
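
    A minimal sketch of the trigger side of such an ADCN setup is shown below; the watched table, its key column and the queue table are invented for illustration. The reflection server would poll rows in the queue table and push the changes to its clients:

        CREATE SEQUENCE dcn_event_seq;

        CREATE TABLE dcn_event (
          event_id   NUMBER PRIMARY KEY,
          table_name VARCHAR2(30) NOT NULL,
          row_key    VARCHAR2(100),
          changed_at TIMESTAMP DEFAULT SYSTIMESTAMP
        );

        -- Row-level trigger that records every change to the watched table.
        CREATE OR REPLACE TRIGGER magnet_setting_dcn
        AFTER INSERT OR UPDATE OR DELETE ON magnet_setting
        FOR EACH ROW
        BEGIN
          INSERT INTO dcn_event (event_id, table_name, row_key)
          VALUES (dcn_event_seq.NEXTVAL, 'MAGNET_SETTING',
                  TO_CHAR(NVL(:NEW.magnet_id, :OLD.magnet_id)));
        END;
        /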

  3. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.

  4. Data base management system analysis and performance testing with respect to NASA requirements

    NASA Technical Reports Server (NTRS)

    Martin, E. A.; Sylto, R. V.; Gough, T. L.; Huston, H. A.; Morone, J. J.

    1981-01-01

    Several candidate database management systems (DBMSs) that could support the NASA End-to-End Data System's Integrated Data Base Management System (IDBMS) Project, later rescoped and renamed the Packet Management System (PMS), were evaluated. The candidate DBMSs, which had to run on the Digital Equipment Corporation VAX 11/780 computer system, were ORACLE, SEED and RIM. Oracle and RIM are both based on the relational database model, while SEED employs a CODASYL network approach. A single database application which managed stratospheric temperature profiles was studied. The primary reasons for using this application were an insufficient volume of available PMS-like data, a mandate to use actual rather than simulated data, and the abundance of available temperature profile data.
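
    For context, a relational layout for the stratospheric temperature-profile test application could be as simple as the two tables below; this is a hedged illustration, not the schema used in the evaluation:

        CREATE TABLE profile (
          profile_id NUMBER PRIMARY KEY,
          obs_time   DATE NOT NULL,
          latitude   NUMBER(8,4),
          longitude  NUMBER(9,4)
        );

        CREATE TABLE profile_level (
          profile_id   NUMBER NOT NULL REFERENCES profile,
          pressure_hpa NUMBER NOT NULL,
          temp_kelvin  NUMBER NOT NULL,
          PRIMARY KEY (profile_id, pressure_hpa)
        );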

  5. A Framework for Testing Scientific Software: A Case Study of Testing Amsterdam Discrete Dipole Approximation Software

    NASA Astrophysics Data System (ADS)

    Shao, Hongbing

    Software testing of scientific software systems often suffers from the test oracle problem, i.e., a lack of test oracles. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that can be used to simulate light scattering by scatterers of various types. Testing of ADDA suffers from the "test oracle problem". In this thesis work, I established a testing framework for scientific software systems and evaluated this framework using ADDA as a case study. To test ADDA, I first used the CMMIE code as a pseudo oracle to test ADDA in simulating light scattering by a homogeneous sphere scatterer. Comparable results were obtained between ADDA and the CMMIE code. This validated ADDA for use with homogeneous sphere scatterers. Then I used an experimental result obtained for light scattering by a homogeneous sphere to further validate the use of ADDA with sphere scatterers. ADDA produced light scattering simulations comparable to the experimentally measured result. This further validated the use of ADDA for simulating light scattering by sphere scatterers. Then I used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and homogeneity or non-homogeneity. ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework, including the use of pseudo oracles, experimental results and metamorphic testing techniques, to test scientific software systems that suffer from test oracle problems. Each of these techniques is necessary and contributes to the testing of the software under test.

  6. STATUS OF VARIOUS SNS DIAGNOSTIC SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blokland, Willem; Purcell, J David; Patton, Jeff

    2007-01-01

    The Spallation Neutron Source (SNS) accelerator systems are ramping up to deliver a 1.0 GeV, 1.4 MW proton beam to a liquid mercury target for neutron scattering research. Enhancements or additions have been made to several instrument systems to support the ramp-up in intensity, improve reliability, and/or add functionality. The Beam Current Monitors now support increased rep rates, the Harp system now includes charge density calculations for the target, and a new system has been created to collect data for the beam accounting and present the data over the web and to the operator consoles. The majority of the SNS beam instruments are PC-based and their configuration files are now managed through the Oracle relational database. A new version of the wire scanner software was developed to add features to correlate the scan with beam loss, parking in the beam, and measuring the longitudinal beam current. This software is currently being tested. This paper also includes data from selected instruments.

  7. GEOS-5 During ORACLES: Status Update

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo; Longo, Karla

    2017-01-01

    In this talk we summarize the GEOS-5 capabilities to be deployed during the ORACLES 2016 Campaign. We describe model configuration, data products and web services available. We also discuss the measurement and flight requirements for the GEOS-5 Team.

  8. Southern Ocean Echinoids database – An updated version of Antarctic, Sub-Antarctic and cold temperate echinoid database

    PubMed Central

    Fabri-Ruiz, Salomé; Saucède, Thomas; Danis, Bruno; David, Bruno

    2017-01-01

    This database includes over 7,100 georeferenced occurrence records of sea urchins (Echinodermata: Echinoidea) obtained from samples collected in the Southern Ocean (+180°W/+180°E; -35°/-78°S) during oceanographic cruises carried out over some 150 years, from 1872 to 2015. Echinoids are common organisms of Southern Ocean benthic communities. A total of 201 species is recorded; these display contrasting depth ranges and distribution patterns across austral provinces and bioregions. Echinoid species show various ecological traits, including different nutrition and reproductive strategies. Information on taxonomy, sampling sites, and sampling sources is also made available. Environmental descriptors that are relevant to echinoid ecology are provided for the study area (-180°W/+180°E; -45°/-78°S) and for the following decades: 1955–1964, 1965–1974, 1975–1984, 1985–1994 and 1995–2012. They were compiled from different sources and transformed to the same grid cell resolution of 0.1° per pixel. We also provide future projections for environmental descriptors established based on the Bio-Oracle database (Tyberghein et al. 2012). PMID:29134013

  9. Completing the physical representation of quantum algorithms provides a retrocausal explanation of the speedup

    NASA Astrophysics Data System (ADS)

    Castagnoli, Giuseppe

    2017-05-01

    The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete as it lacks the initial measurement. We extend it to the process of setting the problem. An initial measurement selects a problem setting at random, and a unitary transformation sends it into the desired setting. The extended representation must be with respect to Bob, the problem setter, and any external observer. It cannot be with respect to Alice, the problem solver. It would tell her the problem setting and thus the solution of the problem implicit in it. In the representation to Alice, the projection of the quantum state due to the initial measurement should be postponed until the end of the quantum algorithm. In either representation, there is a unitary transformation between the initial and final measurement outcomes. As a consequence, the final measurement of any ℛ-th part of the solution could select back in time a corresponding part of the random outcome of the initial measurement; the associated projection of the quantum state should be advanced by the inverse of that unitary transformation. This, in the representation to Alice, would tell her, before she begins her problem-solving action, that part of the solution. The quantum algorithm should be seen as a sum over classical histories in each of which Alice knows in advance one of the possible ℛ-th parts of the solution and performs the oracle queries still needed to find it - this for the value of ℛ that explains the algorithm's speedup. We have a relation between retrocausality ℛ and the number of oracle queries needed to solve an oracle problem quantumly. All the oracle problems examined can be solved with any value of ℛ up to an upper bound attained by the optimal quantum algorithm. This bound is always in the vicinity of 1/2. Moreover, ℛ = 1/2 always provides the order of magnitude of the number of queries needed to solve the problem in an optimal quantum way. If this were true for any oracle problem, as plausible, it would solve the quantum query complexity problem.

  10. Operational Monitoring of GOME-2 and IASI Level 1 Product Processing at EUMETSAT

    NASA Astrophysics Data System (ADS)

    Livschitz, Yakov; Munro, Rosemary; Lang, Rüdiger; Fiedler, Lars; Dyer, Richard; Eisinger, Michael

    2010-05-01

    The growing complexity of operational level 1 radiance products from Low Earth Orbiting (LEO) platforms like EUMETSAT's Metop series makes near-real-time monitoring of product quality a challenging task. The main challenge is to provide a monitoring system which is flexible and robust enough to identify and react to anomalies which may be previously unknown to the system, as well as to provide all means and parameters necessary to support efficient ad-hoc analysis of an incident. The operational monitoring system developed at EUMETSAT for GOME-2 and IASI level 1 data performs near-real-time monitoring of operational products and instrument health in a robust and flexible fashion. For effective information management, the system is based on a relational database (Oracle). An Extract, Transform, Load (ETL) process transforms products in EUMETSAT Polar System (EPS) format into relational data structures. The identification of commonalities between products and instruments allows the database structure to be designed in such a way that different data can be analyzed using the same business intelligence functionality. An interactive analysis software package implementing modern data mining techniques is also provided for a detailed look into the data. The system is effectively used for day-to-day monitoring, long-term reporting and instrument degradation analysis, as well as for ad-hoc queries in case of unexpected instrument or processing behaviour. Having data from different sources on a single instrument, and even from different instruments, platforms or numerical weather prediction, within the same database allows effective cross-comparison and searching for correlated parameters. Automatic alarms, raised by checking for deviations of certain parameters, data losses and other events, significantly reduce the time necessary to monitor the processing on a day-to-day basis.
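
    To make the ETL step concrete, here is a minimal sketch of extracting instrument product records, transforming them into a common relational layout, and loading them. The record fields and parameter names are invented for illustration, and sqlite3 stands in for the Oracle back end used at EUMETSAT.

    ```python
    # Minimal ETL sketch with hypothetical, already-parsed product records;
    # sqlite3 stands in for the operational Oracle database.
    import sqlite3

    def extract():
        """Stand-in for reading EPS-format products; yields flat dicts."""
        yield {"instrument": "GOME-2", "orbit": 12345, "parameter": "signal_mean", "value": 0.93}
        yield {"instrument": "IASI",   "orbit": 12345, "parameter": "noise_rms",   "value": 0.07}

    def transform(record):
        """Normalise records so different instruments share one table layout."""
        return (record["instrument"], record["orbit"], record["parameter"], float(record["value"]))

    def load(rows, conn):
        conn.execute(
            "CREATE TABLE IF NOT EXISTS product_monitoring ("
            "instrument TEXT, orbit INTEGER, parameter TEXT, value REAL)"
        )
        conn.executemany("INSERT INTO product_monitoring VALUES (?, ?, ?, ?)", rows)
        conn.commit()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        load((transform(r) for r in extract()), conn)
        print(conn.execute("SELECT COUNT(*) FROM product_monitoring").fetchone()[0], "rows loaded")
    ```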

  11. Operational Monitoring of GOME-2 and IASI Level 1 Product Processing at EUMETSAT

    NASA Astrophysics Data System (ADS)

    Livschitz, Y.; Munro, R.; Lang, R.; Fiedler, L.; Dyer, R.; Eisinger, M.

    2009-12-01

    The growing complexity of operational level 1 radiance products from Low Earth Orbiting (LEO) platforms like EUMETSAT's Metop series makes near-real-time monitoring of product quality a challenging task. The main challenge is to provide a monitoring system which is flexible and robust enough to identify and react to anomalies which may be previously unknown to the system, as well as to provide all means and parameters necessary to support efficient ad-hoc analysis of an incident. The operational monitoring system developed at EUMETSAT for GOME-2 and IASI level 1 data performs near-real-time monitoring of operational products and instrument health in a robust and flexible fashion. For effective information management, the system is based on a relational database (Oracle). An Extract, Transform, Load (ETL) process transforms products in EUMETSAT Polar System (EPS) format into relational data structures. The identification of commonalities between products and instruments allows the database structure to be designed in such a way that different data can be analyzed using the same business intelligence functionality. An interactive analysis software package implementing modern data mining techniques is also provided for a detailed look into the data. The system is effectively used for day-to-day monitoring, long-term reporting and instrument degradation analysis, as well as for ad-hoc queries in case of unexpected instrument or processing behaviour. Having data from different sources on a single instrument, and even from different instruments, platforms or numerical weather prediction, within the same database allows effective cross-comparison and searching for correlated parameters. Automatic alarms, raised by checking for deviations of certain parameters, data losses and other events, significantly reduce the time necessary to monitor the processing on a day-to-day basis.

  12. FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research.

    PubMed

    Mader, Malte; Simon, Ronald; Kurtz, Stefan

    2014-03-31

    A comprehensive view of all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization of such data. High quality image export enables life scientists to easily communicate their results. A comprehensive data administration component makes it possible to keep track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that has not been published before. The interactive nature of FISH Oracle 2 and the possibility to store, select and visualize large amounts of downstream processed data support life scientists in generating hypotheses. The export of high quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software are available at http://www.zbh.uni-hamburg.de/fishoracle.

  13. Demonstration of quantum advantage in machine learning

    NASA Astrophysics Data System (ADS)

    Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.

    2017-04-01

    The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.
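
    For readers unfamiliar with the learning-parity-with-noise setting, the sketch below simulates only the classical side: a noisy parity oracle and a learner that recovers the hidden bit-mask by majority voting over repeated queries, so the query count can be tallied. The bit length, noise rate and repeat count are illustrative, and no quantum counterpart is modelled here.

    ```python
    # Minimal sketch of the classical side of learning parity with noise (LPN):
    # each oracle call returns the parity of a hidden bit-mask applied to the
    # query, flipped with probability `noise`. All values are illustrative.
    import random

    def make_oracle(secret, noise):
        def oracle(query):
            parity = sum(s & q for s, q in zip(secret, query)) % 2
            return parity ^ (random.random() < noise)
        return oracle

    def classical_learner(oracle, n_bits, repeats=51):
        """Recover each secret bit by majority vote over repeated basis queries."""
        guess, queries = [], 0
        for i in range(n_bits):
            basis = [1 if j == i else 0 for j in range(n_bits)]
            votes = sum(oracle(basis) for _ in range(repeats))
            queries += repeats
            guess.append(1 if votes > repeats // 2 else 0)
        return guess, queries

    if __name__ == "__main__":
        secret = [1, 0, 1, 1, 0]
        oracle = make_oracle(secret, noise=0.1)
        guess, n_queries = classical_learner(oracle, len(secret))
        print(f"guess={guess} correct={guess == secret} queries={n_queries}")
    ```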

  14. Make Win-Win a Reality: Delighting the Customer by Implementing Oracle HR - Integration Update to Fall 1997 Paper

    NASA Technical Reports Server (NTRS)

    Mills, J.; Shope, S.

    1998-01-01

    Implementing the Oracle Human Resources Management System (HRMS) using a customer and integration approach provides the organization an enormous opportunity to create a win-win situation for customers, the HR department and the enterprise.

  15. INTERFACING SAS TO ORACLE IN THE UNIX ENVIRONMENT

    EPA Science Inventory

    SAS is an EPA standard data and statistical analysis software package while ORACLE is EPA's standard database management system software package. ORACLE has the advantage over SAS in data retrieval and storage capabilities but has limited data and statistical analysis capability...

  16. Advanced technologies for scalable ATLAS conditions database access on the grid

    NASA Astrophysics Data System (ADS)

    Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.

    2010-04-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflow, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This has been achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions DB data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions DB data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library first sends a pilot query to the database server.
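
    A minimal sketch of the pilot-query idea follows, assuming a hypothetical client-side wrapper: a cheap probe query is timed first, and the real query is submitted only if the probe suggests the server is not overloaded. The thresholds, back-off schedule and the stand-in run_query function are invented for illustration and are not the actual ATLAS utility library.

    ```python
    # Minimal sketch of pilot queries for peak-load avoidance; thresholds and
    # the fake latency model are illustrative only.
    import time
    import random

    def run_query(sql):
        """Stand-in for an actual database call; sleeps to mimic server latency."""
        time.sleep(random.uniform(0.01, 0.05))
        return []

    def query_with_pilot(sql, pilot_sql="SELECT 1 FROM DUAL",
                         max_pilot_latency=0.04, max_retries=5, backoff=0.1):
        """Send a cheap pilot query; submit the real query only if the server
        answers quickly enough, otherwise wait and retry."""
        for attempt in range(max_retries):
            start = time.monotonic()
            run_query(pilot_sql)
            latency = time.monotonic() - start
            if latency <= max_pilot_latency:
                return run_query(sql)
            time.sleep(backoff * (attempt + 1))   # linear back-off before retrying
        raise RuntimeError("server stayed overloaded; giving up")

    if __name__ == "__main__":
        rows = query_with_pilot("SELECT payload FROM conditions WHERE run = 167000")
        print("query returned", len(rows), "rows")
    ```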

  17. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720
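
    The following sketch reproduces the flavour of that experiment with the plain Lasso standing in for SCAD and the Adaptive Lasso (scikit-learn and NumPy assumed): sparse, weak-signal data are generated and 10-fold cross-validated model selection is repeated to show how much the number of selected variables fluctuates. Sizes and signal strengths are illustrative only.

    ```python
    # Minimal sketch: repeated 10-fold CV selection on weak-signal sparse data,
    # with the Lasso as a stand-in for the oracle methods discussed above.
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    n, p, n_true = 200, 500, 10
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:n_true] = 0.2                     # weak signals, as in SNP-style data
    y = X @ beta + rng.standard_normal(n)

    selected_counts = []
    for seed in range(20):
        # Shuffling the rows changes the fold assignment of each run.
        idx = rng.permutation(n)
        model = LassoCV(cv=10, random_state=seed).fit(X[idx], y[idx])
        selected_counts.append(int(np.sum(model.coef_ != 0)))

    print("selected variables per run:", selected_counts)
    print("min/max:", min(selected_counts), max(selected_counts))
    ```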

  18. Demonstration of quantum superiority in learning parity with noise with superconducting qubits

    NASA Astrophysics Data System (ADS)

    Ristè, Diego; da Silva, Marcus; Ryan, Colm; Cross, Andrew; Smolin, John; Gambetta, Jay; Chow, Jerry; Johnson, Blake

    A problem in machine learning is to identify the function programmed in an unknown device, or oracle, having only access to its output. In particular, a parity function computes the parity of a subset of a bit register. We implement an oracle executing parity functions in a five-qubit superconducting processor and compare the performance of a classical and a quantum learner. The classical learner reads the output of multiple oracle calls and uses the results to infer the hidden function. In addition to querying the oracle, the quantum learner can apply coherent rotations on the output register before the readout. We show that, given a target success probability, the quantum approach outperforms the classical one in the number of queries needed. Moreover, this gap increases with readout noise and with the size of the qubit register. This result shows that quantum advantage can already emerge in current systems with a few, noisy qubits. We acknowledge support from IARPA under Contract W911NF-10-1-0324.

  19. The PEPR GeneChip data warehouse, and implementation of a dynamic time series query tool (SGQT) with graphical interface.

    PubMed

    Chen, Josephine; Zhao, Po; Massaro, Donald; Clerch, Linda B; Almon, Richard R; DuBois, Debra C; Jusko, William J; Hoffman, Eric P

    2004-01-01

    Publicly accessible DNA databases (genome browsers) are rapidly accelerating post-genomic research (see http://www.genome.ucsc.edu/), with integrated genomic DNA, gene structure, EST/splicing and cross-species ortholog data. DNA databases have relatively low dimensionality; the genome is a linear code that anchors all associated data. In contrast, RNA expression and protein databases need to be able to handle very high dimensional data, with time, tissue, cell type and genes as interrelated variables. The high dimensionality of microarray expression profile data, and the lack of a standard experimental platform, have complicated the development of web-accessible databases and analytical tools. We have designed and implemented a public resource of expression profile data containing 1024 human, mouse and rat Affymetrix GeneChip expression profiles, generated in the same laboratory and subject to the same quality and procedural controls (Public Expression Profiling Resource; PEPR). Our Oracle-based PEPR data warehouse includes a novel time series query analysis tool (SGQT), enabling dynamic generation of graphs and spreadsheets showing the action of any transcript of interest over time. In this report, we demonstrate the utility of this tool using a 27 time point, in vivo muscle regeneration series. This data warehouse and its associated analysis tools provide access to multidimensional microarray data through web-based interfaces, both for download of all types of raw data for independent analysis and for straightforward gene-based queries. Planned implementations of PEPR will include web-based remote entry of projects adhering to quality control and standard operating procedure (QC/SOP) criteria, and automated output of alternative probe set algorithms for each project (see http://microarray.cnmcresearch.org/pgadatatable.asp).
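
    As a toy illustration of the kind of time-series query such a tool issues, the sketch below builds a small expression table and returns the signal of one transcript across time points. The schema, probe-set name and values are invented, and sqlite3 stands in for the Oracle-based PEPR warehouse.

    ```python
    # Minimal sketch of a time-series query in the spirit of SGQT, with a toy
    # schema; sqlite3 stands in for the Oracle back end.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE expression ("
        "probe_set TEXT, time_point_h INTEGER, signal REAL)"
    )
    conn.executemany(
        "INSERT INTO expression VALUES (?, ?, ?)",
        [("MyoD_probe", 0, 1.0), ("MyoD_probe", 12, 2.4),
         ("MyoD_probe", 24, 3.1), ("MyoD_probe", 48, 1.8)],
    )

    def time_series(probe_set):
        """Return (time point, mean signal) pairs for one transcript of interest."""
        return conn.execute(
            "SELECT time_point_h, AVG(signal) FROM expression "
            "WHERE probe_set = ? GROUP BY time_point_h ORDER BY time_point_h",
            (probe_set,),
        ).fetchall()

    print(time_series("MyoD_probe"))
    ```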

  20. The PEPR GeneChip data warehouse, and implementation of a dynamic time series query tool (SGQT) with graphical interface

    PubMed Central

    Chen, Josephine; Zhao, Po; Massaro, Donald; Clerch, Linda B.; Almon, Richard R.; DuBois, Debra C.; Jusko, William J.; Hoffman, Eric P.

    2004-01-01

    Publicly accessible DNA databases (genome browsers) are rapidly accelerating post-genomic research (see http://www.genome.ucsc.edu/), with integrated genomic DNA, gene structure, EST/splicing and cross-species ortholog data. DNA databases have relatively low dimensionality; the genome is a linear code that anchors all associated data. In contrast, RNA expression and protein databases need to be able to handle very high dimensional data, with time, tissue, cell type and genes as interrelated variables. The high dimensionality of microarray expression profile data, and the lack of a standard experimental platform, have complicated the development of web-accessible databases and analytical tools. We have designed and implemented a public resource of expression profile data containing 1024 human, mouse and rat Affymetrix GeneChip expression profiles, generated in the same laboratory and subject to the same quality and procedural controls (Public Expression Profiling Resource; PEPR). Our Oracle-based PEPR data warehouse includes a novel time series query analysis tool (SGQT), enabling dynamic generation of graphs and spreadsheets showing the action of any transcript of interest over time. In this report, we demonstrate the utility of this tool using a 27 time point, in vivo muscle regeneration series. This data warehouse and its associated analysis tools provide access to multidimensional microarray data through web-based interfaces, both for download of all types of raw data for independent analysis and for straightforward gene-based queries. Planned implementations of PEPR will include web-based remote entry of projects adhering to quality control and standard operating procedure (QC/SOP) criteria, and automated output of alternative probe set algorithms for each project (see http://microarray.cnmcresearch.org/pgadatatable.asp). PMID:14681485

  1. External access to ALICE controls conditions data

    NASA Astrophysics Data System (ADS)

    Jadlovský, J.; Jadlovská, A.; Sarnovský, J.; Jajčišin, Š.; Čopík, M.; Jadlovská, S.; Papcun, P.; Bielek, R.; Čerkala, J.; Kopčík, M.; Chochula, P.; Augustinus, A.

    2014-06-01

    ALICE controls data produced by the commercial SCADA system WINCCOA are stored in an ORACLE database on the private experiment network. The SCADA system allows for basic access and processing of the historical data. More advanced analysis requires tools like ROOT and therefore needs a separate access method to the archives. The present scenario expects that detector experts create simple WINCCOA scripts, which retrieve and store data in a form usable for further studies. This relatively simple procedure generates a lot of administrative overhead - users have to request the data, experts need to run the scripts, and the results have to be exported outside of the experiment network. The new mechanism profits from a database replica which is running on the CERN campus network. Access to this database is not restricted and there is no risk of generating a heavy load affecting the operation of the experiment. The tools presented in this paper allow access to this data. Users can use web-based tools to generate requests consisting of the data identifiers and the period of time of interest. The administrators maintain full control over the data - an authorization and authentication mechanism helps to assign privileges to selected users and restrict access to certain groups of data. An advanced caching mechanism allows users to profit from the presence of already processed data sets. This feature significantly reduces the time required for debugging, as the retrieval of raw data can last tens of minutes. A highly configurable client allows for information retrieval bypassing the interactive interface. This method is, for example, used by ALICE Offline to extract operational conditions after a run is completed. Last but not least, the software can be easily adapted to any underlying database structure and is therefore not limited to WINCCOA.

  2. FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research

    PubMed Central

    2014-01-01

    Background A comprehensive view of all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. Results We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization of such data. High quality image export enables life scientists to easily communicate their results. A comprehensive data administration component makes it possible to keep track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that has not been published before. Conclusions The interactive nature of FISH Oracle 2 and the possibility to store, select and visualize large amounts of downstream processed data support life scientists in generating hypotheses. The export of high quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software are available at http://www.zbh.uni-hamburg.de/fishoracle. PMID:24684958

  3. Digital conversion of INEL archeological data using ARC/INFO and Oracle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, R.D.; Brizzee, J.; White, L.

    1993-11-04

    This report documents the procedures used to convert archaeological data for the INEL to digital format, lists the equipment used, and explains the verification and validation steps taken to check data entry. It also details the production of an engineered interface between ARC/INFO and Oracle.

  4. 78 FR 23220 - Foreign-Trade Zone (FTZ) 230-Piedmont Triad Area, North Carolina; Notification of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-18

    ...-backed paperboard and to laminate plastic film (the laminating activity is not "production" activity...--Piedmont Triad Area, North Carolina; Notification of Proposed Production Activity; Oracle Flexible..., grantee of FTZ 230, submitted a notification of proposed production activity on behalf of Oracle Flexible...

  5. Test oracle automation for V&V of an autonomous spacecraft's planner

    NASA Technical Reports Server (NTRS)

    Feather, M. S.; Smith, B.

    2001-01-01

    We built automation to assist the software testing efforts associated with the Remote Agent experiment. In particular, our focus was upon introducing test oracles into the testing of the planning and scheduling system component. This summary is intended to provide an overview of the work.

  6. How to Take HRMS Process Management to the Next Level with Workflow Business Event System

    NASA Technical Reports Server (NTRS)

    Rajeshuni, Sarala; Yagubian, Aram; Kunamaneni, Krishna

    2006-01-01

    Oracle Workflow with the Business Event System offers a complete process management solution for enterprises to manage business processes cost-effectively. Using Workflow event messaging, event subscriptions, AQ Servlet and advanced queuing technologies, this presentation demonstrates the step-by-step design and implementation of system solutions that integrate two dissimilar systems and establish communication remotely. As a case study, the presentation walks through the process of propagating organization name changes originating in the HRMS module to other applications without changing application code. The solution can be applied to your particular business cases for streamlining or modifying business processes across Oracle and non-Oracle applications.

  7. Apollo: Changing the Way We Work.

    ERIC Educational Resources Information Center

    Schroeder, John R.; Bleed, Ron

    In January 1994, Arizona's Maricopa Community College District issued a request for proposals to develop new administrative software applications to solve problems related to high maintenance costs for existing systems and difficulties in updating software. The result was the Apollo Project, in which the District contracted with Oracle Corporation…

  8. Introducing ORACLE: Library Processing in a Multi-User Environment.

    ERIC Educational Resources Information Center

    Queensland Library Board, Brisbane (Australia).

    Currently being developed by the State Library of Queensland, Australia, ORACLE (On-Line Retrieval of Acquisitions, Cataloguing, and Circulation Details for Library Enquiries) is a computerized library system designed to provide rapid processing of library materials in a multi-user environment. It is based on the Australian MARC format and fully…

  9. An ORACLE Chronicle: A Decade of Classroom Research.

    ERIC Educational Resources Information Center

    Galton, Maurice

    1987-01-01

    This article describes Project ORACLE which was research carried out at the University of Leicester begun in 1975 concerning (1) a longitudinal process-product study of teaching and learning in elementary schools; and (2) a study which concentrated on collaborative group work in the same classrooms. Results and implications are discussed.…

  10. AirMSPI Level 2 V001 New Data for NASA's ORACLES Campaign

    Atmospheric Science Data Center

    2018-05-07

    The NASA Langley Atmospheric Sciences Data Center (ASDC) and Jet Propulsion ... ) flight campaign. AirMSPI flies in the nose of NASA's high-altitude ER-2 aircraft. The instrument was built by JPL and the ...

  11. 78 FR 45181 - Foreign-Trade Zone 230-Piedmont Triad Area, North Carolina, Authorization of Production Activity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-26

    ... DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [B-31-2013] Foreign-Trade Zone 230--Piedmont Triad Area, North Carolina, Authorization of Production Activity, Oracle Flexible Packaging, Inc., (Foil... (FTZ) Board on behalf of Oracle Flexible Packaging, Inc., within Site 28, in Winston-Salem, North...

  12. Development of the updated system of city underground pipelines based on Visual Studio

    NASA Astrophysics Data System (ADS)

    Zhang, Jianxiong; Zhu, Yun; Li, Xiangdong

    2009-10-01

    Our city operates an integrated pipeline network management system built on ArcGIS Engine 9.1 as the underlying development platform, with Oracle9i as the base database for storing data. In this system, ArcGIS SDE 9.1 is used as the spatial data engine, and the system is management software developed with Visual Studio visual development tools. Because the system's pipeline update function suffered from slow updates and occasional data loss, we added a new, self-developed update module to ensure that the underground pipeline data can be updated conveniently and frequently in real time and that the data remain current and complete. The module provides powerful data update functions, including data input, data output and rapid bulk updates. The newly developed module was built with Visual Studio visual development tools and uses Access as its base database for storing data. Graphics can be edited in AutoCAD, and the database is updated through a link between the graphics and the system. Practice shows that the update module is compatible with the original system, reliable, and efficient at updating the database.

  13. Java RMI Software Technology for the Payload Planning System of the International Space Station

    NASA Technical Reports Server (NTRS)

    Bryant, Barrett R.

    1999-01-01

    The Payload Planning System is used for experiment planning on the International Space Station. The planning process has a number of different aspects which need to be stored in a database that is then used to generate reports on the planning process in a variety of formats. This process is currently structured as a 3-tier client/server software architecture composed of a Java applet at the front end, a Java server in the middle, and an Oracle database in the third tier. This system presently uses CGI, the Common Gateway Interface, to communicate between the user-interface and server tiers and Active Data Objects (ADO) to communicate between the server and database tiers. This project investigated other methods and tools for performing the communications between the three tiers of the current system so that both the system performance and software development time could be improved. We found that, for the hardware and software platforms that PPS is required to run on, the best solution is to use Java Remote Method Invocation (RMI) for communication between the client and server and SQLJ (Structured Query Language for Java) for server interaction with the database. Prototype implementations showed that RMI combined with SQLJ significantly improved performance and also greatly facilitated construction of the communication software.

  14. The CMS dataset bookkeeping service

    NASA Astrophysics Data System (ADS)

    Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.

    2008-07-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command line tool, and a Discovery web page interface. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  15. Design of Instant Messaging System of Multi-language E-commerce Platform

    NASA Astrophysics Data System (ADS)

    Yang, Heng; Chen, Xinyi; Li, Jiajia; Cao, Yaru

    2017-09-01

    This paper studies the message subsystem of an instant messaging system for a multi-language e-commerce platform, with the aims of designing an instant messaging system for a multi-language environment, presenting information with national-language characteristics, and applying national languages to e-commerce. To develop an attractive and friendly interface for the front end of the message system while reducing development cost, the mature jQuery framework is adopted. The high-performance Tomcat server is used at the back end to process user requests, a MySQL database persistently stores user data, and an Oracle database serves as the message buffer for system optimization. Moreover, AJAX is used so that the client actively pulls the newest data from the server at specified intervals. In practical use, the system shows strong reliability, good extensibility, short response times, high throughput and high user concurrency.

  16. A decade of experience in the development and implementation of tissue banking informatics tools for intra and inter-institutional translational research

    PubMed Central

    Amin, Waqas; Singh, Harpreet; Pople, Andre K.; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V.; Becich, Michael J.

    2010-01-01

    Context: Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Design: Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. Result: The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide thousands of well annotated biospecimens for the researchers that are searchable through query interfaces available via the Internet. Conclusion: These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems. PMID:20922029

  17. A decade of experience in the development and implementation of tissue banking informatics tools for intra and inter-institutional translational research.

    PubMed

    Amin, Waqas; Singh, Harpreet; Pople, Andre K; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V; Becich, Michael J

    2010-08-10

    Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide thousands of well annotated biospecimens for the researchers that are searchable through query interfaces available via the Internet. These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems.

  18. E.S.T. and the Oracle.

    ERIC Educational Resources Information Center

    Richardson, Ian M.

    1990-01-01

    A possible syllabus for English for Science and Technology is suggested based upon a set of causal relations, arising from a logical description of the presuppositional rhetoric of scientific passages that underlie most semantic functions. An empirical study is reported of the semantic functions present in 52 randomly selected passages.…

  19. MATHEMATICS PANEL PROGRESS REPORT FOR PERIOD MARCH 1, 1957 TO AUGUST 31, 1958

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Householder, A.S.

    1959-03-24

    ORACLE operation and programming are summarized, and progress is indicated on various current problems. Work is reviewed on numerical analysis, programming, basic mathematics, biometrics and statistics, ORACLE operations and special codes, and training. Publications and lectures for the report period are listed. (For preceding period see ORNL-2283.) (W.D.M.)

  20. Augmenting Oracle Text with the UMLS for enhanced searching of free-text medical reports.

    PubMed

    Ding, Jing; Erdal, Selnur; Dhaval, Rakesh; Kamal, Jyoti

    2007-10-11

    The intrinsic complexity of free-text medical reports imposes great challenges for information retrieval systems. We have developed a prototype search engine for retrieving clinical reports that leverages the powerful indexing and querying capabilities of Oracle Text, and the rich biomedical domain knowledge and semantic structures that are captured in the UMLS Metathesaurus.
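
    A minimal sketch of the underlying idea follows: a query term is expanded with UMLS-style synonyms and wrapped in an Oracle Text CONTAINS query. The synonym map, table and column names are hypothetical stand-ins; only the CONTAINS/SCORE syntax is standard Oracle Text.

    ```python
    # Minimal sketch: expand a clinical query term with UMLS-style synonyms and
    # build an Oracle Text query. The synonym map and schema are illustrative.
    UMLS_SYNONYMS = {
        "myocardial infarction": ["heart attack", "MI"],
        "hypertension": ["high blood pressure"],
    }

    def expand_query(term):
        """Build an Oracle Text boolean query: the term OR any of its synonyms."""
        variants = [term] + UMLS_SYNONYMS.get(term.lower(), [])
        return " OR ".join("{%s}" % v for v in variants)   # braces escape phrases

    def build_sql(term):
        return (
            "SELECT report_id, score(1) AS relevance "
            "FROM clinical_reports "
            "WHERE CONTAINS(report_text, :q, 1) > 0 "
            "ORDER BY score(1) DESC"
        ), {"q": expand_query(term)}

    if __name__ == "__main__":
        sql, binds = build_sql("myocardial infarction")
        print(sql)
        print("bind :q =", binds["q"])
        # With an Oracle driver such as python-oracledb one would then run:
        #   cursor.execute(sql, binds)
    ```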

  1. An eCK-Secure Authenticated Key Exchange Protocol without Random Oracles

    NASA Astrophysics Data System (ADS)

    Moriyama, Daisuke; Okamoto, Tatsuaki

    This paper presents a (PKI-based) two-pass authenticated key exchange (AKE) protocol that is secure in the extended Canetti-Krawczyk (eCK) security model. The security of the proposed protocol is proven without random oracles (under three assumptions), and relies on no implementation techniques such as the trick by LaMacchia, Lauter and Mityagin (the so-called NAXOS trick). Since an AKE protocol that is eCK-secure under a NAXOS-like implementation trick will no longer be eCK-secure if some realistic information leakage occurs through side-channel attacks, it has been an important open problem how to realize an eCK-secure AKE protocol without using the NAXOS trick (and without random oracles).

  2. Fuel conditioning facility zone-to-zone transfer administrative controls.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, C. L.

    2000-06-21

    The administrative controls associated with transferring containers from one criticality hazard control zone to another in the Argonne National Laboratory (ANL) Fuel Conditioning Facility (FCF) are described. FCF, located at the ANL-West site near Idaho Falls, Idaho, is used to remotely process spent sodium bonded metallic fuel for disposition. The process involves nearly forty widely varying material forms and types, over fifty specific use container types, and over thirty distinct zones where work activities occur. During 1999, over five thousand transfers from one zone to another were conducted. Limits are placed on mass, material form and type, and container types for each zone. All material and containers are tracked using the Mass Tracking System (MTG). The MTG uses an Oracle database and numerous applications to manage the database. The database stores information specific to the process, including material composition and mass, container identification number and mass, transfer history, and the operators involved in each transfer. The process is controlled using written procedures which specify the zone, containers, and material involved in a task. Transferring a container from one zone to another is called a zone-to-zone transfer (ZZT). ZZTs consist of four distinct phases: select, request, identify, and completion.
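
    The sketch below illustrates, in the simplest possible terms, the sort of administrative check such a system enforces before approving a ZZT: the destination zone's running mass total must stay within its limit. The zone names, limits and masses are invented for illustration and are not actual FCF criticality limits.

    ```python
    # Minimal sketch of a zone-to-zone transfer (ZZT) mass-limit check.
    # Limits and inventories are illustrative, not real FCF values.
    ZONE_MASS_LIMIT_G = {"zone_A": 4000.0, "zone_B": 2500.0}
    ZONE_INVENTORY_G = {"zone_A": 3100.0, "zone_B": 900.0}

    def approve_zzt(container_mass_g, source, destination):
        """Return True and update inventories only if the destination stays
        within its administrative mass limit."""
        projected = ZONE_INVENTORY_G[destination] + container_mass_g
        if projected > ZONE_MASS_LIMIT_G[destination]:
            return False
        ZONE_INVENTORY_G[source] -= container_mass_g
        ZONE_INVENTORY_G[destination] = projected
        return True

    if __name__ == "__main__":
        print(approve_zzt(1200.0, "zone_A", "zone_B"))   # True: 2100 g <= 2500 g limit
        print(approve_zzt(1500.0, "zone_A", "zone_B"))   # False: 3600 g > 2500 g limit
    ```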

  3. Children's early reading vocabulary: description and word frequency lists.

    PubMed

    Stuart, Morag; Dixon, Maureen; Masterson, Jackie; Gray, Bob

    2003-12-01

    When constructing stimuli for experimental investigations of cognitive processes in early reading development, researchers have to rely on adult or American children's word frequency counts, as no such counts exist for English children. The present paper introduces a database of children's early reading vocabulary, for use by researchers and teachers. Texts from 685 books from reading schemes and story books read by 5-7 year-old children were used in the construction of the database. All words from the 685 books were typed or scanned into an Oracle database. The resulting up-to-date word frequency list of early print exposure in the UK is available in two forms from a website address given in this paper. This allows access to one list of the words ordered alphabetically and one list of the words ordered by frequency. We also briefly address some fundamental issues underlying early reading vocabulary (e.g., that it is heavily skewed towards low frequencies). Other characteristics of the vocabulary are then discussed. We hope the word frequency lists will be of use to researchers seeking to control word frequency, and to teachers interested in the vocabulary to which young children are exposed in their reading material.
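
    A minimal sketch of how such word-frequency lists can be produced from plain-text book transcriptions follows; the folder name and the tokenisation rule are assumptions made for illustration, not the procedure used by the authors.

    ```python
    # Minimal sketch of building the two published list formats (alphabetical
    # and frequency-ordered) from a folder of plain-text book files.
    import re
    from collections import Counter
    from pathlib import Path

    def word_frequencies(book_dir="book_texts"):
        counts = Counter()
        for path in Path(book_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8").lower()
            counts.update(re.findall(r"[a-z]+(?:'[a-z]+)?", text))  # keep "don't" etc.
        return counts

    if __name__ == "__main__":
        counts = word_frequencies()
        alphabetical = sorted(counts.items())     # list 1: ordered alphabetically
        by_frequency = counts.most_common()       # list 2: ordered by frequency
        for word, freq in by_frequency[:10]:
            print(f"{word}\t{freq}")
    ```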

  4. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric and even a standard to compare amongst candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics' quality, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates' ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates' quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely-used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates' behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with an eCNR of 0.12 resulted in statistically better segmentation than MSD with an eCNR of 0.10, with a mean DSC of about 0.85 and first and third quartiles of (0.83, 0.89), compared to a mean DSC of 0.84 and first and third quartiles of (0.81, 0.89) for MSD. Conclusion: The designed eCNR is capable of characterizing surrogate metrics' quality in prognosticating the oracle relevance value. It has been demonstrated to be correlated with the performance of relevant atlas selection and ultimate label fusion.

  5. Ozone Research with Advanced Cooperative Lidar Experiment (ORACLE) Implementation Study

    NASA Technical Reports Server (NTRS)

    Stadler, John H.; Browell, Edward V.; Ismail, Syed; Dudelzak, Alexander E.; Ball, Donald J.

    1998-01-01

    New technological advances have made possible new active remote sensing capabilities from space. Utilizing these technologies, the Ozone Research with Advanced Cooperative Lidar Experiment (ORACLE) will provide high spatial resolution measurements of ozone, clouds and aerosols in the stratosphere and lower troposphere. Simultaneous measurements of ozone, clouds and aerosols will assist in the understanding of global change, atmospheric chemistry and meteorology.

  6. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL

    PubMed Central

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-01-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can also be proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities. PMID:24086091
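
    For reference, a standard way to write the estimator being studied (in notation chosen here, not quoted from the paper) is the l1-penalized negative log partial likelihood:

    ```latex
    % Sketch of the penalized Cox estimator in standard notation (not the paper's own).
    \hat{\beta}(\lambda) \;=\; \arg\min_{\beta \in \mathbb{R}^p}
      \left\{ -\tfrac{1}{n}\,\ell_n(\beta) \;+\; \lambda \lVert \beta \rVert_1 \right\},
    \qquad
    \ell_n(\beta) \;=\; \sum_{i:\,\delta_i = 1}
      \left[ \beta^{\top} Z_i(t_i)
      - \log \sum_{j \in R(t_i)} \exp\!\bigl\{\beta^{\top} Z_j(t_i)\bigr\} \right],
    ```

    where δ_i is the event indicator, Z_j(t) the (possibly time-dependent) covariates, and R(t_i) the risk set at time t_i.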

  7. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL.

    PubMed

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-06-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can also be proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities.

  8. PROVIDE: A Pedagogical Reference Oracle for Virtual IntegrateD E-ducation

    ERIC Educational Resources Information Center

    Narasimhan, V. Lakshmi; Zhao, Shuxin; Liang, Hailong; Zhang, Shuangyi

    2006-01-01

    This paper presents an interactive educational environment for use over both "in situ" and distance-based modalities of teaching. Several technological issues relating to the design and development of the distributed virtual learning environment have also been raised. The PROVIDE framework proposed in this paper is a seamless distributed…

  9. Pedagogy, Performance and Politics: Issues in the Study of Primary Education.

    ERIC Educational Resources Information Center

    Campbell, R. J.; Kyriakides, L.

    2000-01-01

    Presents an essay review of Inside the Primary Classroom, 20 Years On, which reports on a 1976 study of teacher-student interaction in English elementary classrooms (the ORACLE project) and discusses the next 20 years of elementary education in England, examining curriculum reform, educational policy, educational research, and related politics.…

  10. The Delphic oracle: a multidisciplinary defense of the gaseous vent theory.

    PubMed

    Spiller, Henry A; Hale, John R; De Boer, Jelle Z

    2002-01-01

    Ancient historical references consistently describe an intoxicating gas, produced by a cavern in the ground, as the source of the power at the oracle of Delphi. These ancient writings are supported by a series of associated geological findings. Chemical analysis of the spring waters and travertine deposits at the site show these gases to be the light hydrocarbon gases methane, ethane, and ethylene. The effects of inhaling ethylene, a major anesthetic gas in the mid-20th century, are similar to those described in the ancient writings. We believe the probable cause of the trancelike state of the Priestess (the Pythia) at the oracle of Delphi during her mantic sessions was produced by inhaling ethylene gas or a mixture of ethylene and ethane from a naturally occurring vent of geological origin.

  11. Design and Implementation of the CEBAF Element Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-10-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access to not only present, but also future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on-the-fly without changing table structure. When used in conjunction with the Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with the exact same tools as they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from the original C++ into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous. Notice: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
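
    A toy version of such an introspective (entity-attribute-value style) layout is sketched below: element types, property definitions and property values are all ordinary rows, so defining a new element type needs only INSERTs, never an ALTER TABLE. The table names, the example quadrupole element and its properties are invented, and sqlite3 stands in for the Oracle back end; this is not the actual CED schema.

    ```python
    # Minimal sketch of an "introspective" entity-attribute-value schema;
    # illustrative only, not the real CED design.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE element_type (type_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE property_def (prop_id INTEGER PRIMARY KEY, type_id INTEGER,
                               name TEXT, unit TEXT);
    CREATE TABLE element      (elem_id INTEGER PRIMARY KEY, type_id INTEGER,
                               name TEXT UNIQUE);
    CREATE TABLE property_val (elem_id INTEGER, prop_id INTEGER, value TEXT);
    """)

    # Define a new element type and its properties purely with INSERTs.
    conn.execute("INSERT INTO element_type VALUES (1, 'Quadrupole')")
    conn.execute("INSERT INTO property_def VALUES (1, 1, 'Length', 'm')")
    conn.execute("INSERT INTO property_def VALUES (2, 1, 'FieldGradient', 'T/m')")
    conn.execute("INSERT INTO element VALUES (1, 1, 'MQA1L02')")   # hypothetical element name
    conn.executemany("INSERT INTO property_val VALUES (?, ?, ?)",
                     [(1, 1, "0.5"), (1, 2, "3.2")])

    rows = conn.execute("""
    SELECT e.name, p.name, v.value, p.unit
    FROM element e JOIN property_val v ON v.elem_id = e.elem_id
                   JOIN property_def p ON p.prop_id = v.prop_id
    """).fetchall()
    for row in rows:
        print(row)
    ```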

  12. Visualization of historical data for the ATLAS detector controls - DDV

    NASA Astrophysics Data System (ADS)

    Maciejewski, J.; Schlenker, S.

    2017-10-01

    The ATLAS experiment is one of four detectors located on the Large Hadron Collider (LHC) based at CERN. Its detector control system (DCS) stores the slow control data acquired within the back-end of distributed WinCC OA applications in an Oracle relational database, which enables the data to be retrieved for future analysis, debugging and detector development. The ATLAS DCS Data Viewer (DDV) is a client-server application providing access to the historical data outside of the experiment network. The server builds optimized SQL queries, retrieves the data from the database and serves it to the clients via HTTP connections. The server also implements protection methods to prevent malicious use of the database. The client is an AJAX-type web application based on the Vaadin framework (built around the Google Web Toolkit, GWT) which gives users the possibility to access the data with ease. The DCS metadata can be selected using a column-tree navigation or a search engine supporting regular expressions. The data are visualized by a selection of output modules such as JavaScript value-over-time plots or a lazy-loading table widget. Additional plugins give users the possibility to retrieve the data in ROOT format or as an ASCII file. Control system alarms can also be visualized in a dedicated table if necessary. Python mock-up scripts can be generated by the client, allowing the user to query the pythonic DDV server directly, so that users can embed the scripts into more complex analysis programs. Users are also able to store searches and output configurations as XML on the server to share with others via URL or to embed in HTML.
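
    A sketch of the kind of Python mock-up script described above might look like the following; the host name, URL path and query parameters are hypothetical placeholders rather than the real DDV interface, and only the standard-library HTTP client is assumed.

    ```python
    # Minimal sketch of a DDV-style mock-up script: fetch one DCS data point's
    # history over a time window from a hypothetical HTTP endpoint.
    import json
    import urllib.parse
    import urllib.request

    def fetch_history(element, start, end,
                      base_url="https://ddv.example.cern.ch/data"):   # hypothetical URL
        params = urllib.parse.urlencode({
            "element": element,          # DCS metadata identifier (hypothetical parameter name)
            "from": start,               # ISO-8601 timestamps
            "to": end,
            "format": "json",
        })
        with urllib.request.urlopen(f"{base_url}?{params}", timeout=30) as resp:
            return json.load(resp)       # e.g. a list of (timestamp, value) pairs

    if __name__ == "__main__":
        points = fetch_history("ATLTRT:temperature_sensor_42",
                               "2017-06-01T00:00:00", "2017-06-02T00:00:00")
        print(len(points), "samples retrieved")
    ```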

  13. Delphi and Cosmovision: Apollo's Absence At the Land of the Hyperboreans and the Time for Consulting the Oracle

    NASA Astrophysics Data System (ADS)

    Liritzis, Ioannis; Castro, Belen

    2013-07-01

    Keeping an exact calendar was important to schedule Delphic festivals. The proper day for a prophecy involved a meticulous calculation, which was carried out by learned priests and ancient philosophers. The month of Bysios on average is February, but in reality it could be any 30-day interval between January and March. Bysios starts with a New Moon, but the beginning of the month is not easily pinpointed and thus Bysios and the 7th day for giving an oracle cannot be identified according to the Gregorian calendar. The celestial motions of Lyra and Cygnus with regards to sunrise and sunset are related to the Delphi temple's orientation and the high altitude of steep cliffs of the Faidriades in front of it. Light from the rising Sun shines at the back of the temple where the statue of the god is located, while the appearance and disappearance of Lyra and Cygnus, two of Apollo's favorite constellations in the Delphic sky, mark the period of absence of the god to the Hyperboreans. This coincides with the 3-month interval from the end of December to the middle of March. During the later part of this period, on the 7th day of Bysios, the oracle was given. At any rate, the Delphic calendar was a lunar-solar-stellar one, and was properly adjusted to coincide with and preserve the seasonal movements of those constellations.

  14. A strategy for quantum algorithm design assisted by machine learning

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung

    2014-07-01

    We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable to designing quantum oracle-based algorithms. As a case study, we chose an oracle decision problem known as the Deutsch-Jozsa problem. We showed by using Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine-learning-based method.
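
    For readers unfamiliar with the case study, the sketch below simulates the Deutsch-Jozsa algorithm with a phase oracle in plain NumPy: a single oracle query suffices to decide whether a promised function is constant or balanced. This is a textbook illustration of the target algorithm, not the learning procedure proposed in the paper.

        # Minimal Deutsch-Jozsa simulation on n qubits using a phase oracle.
        import numpy as np
        from functools import reduce

        def hadamard_n(n):
            h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
            return reduce(np.kron, [h] * n)

        def deutsch_jozsa(f, n):
            """f maps integers in [0, 2**n) to {0, 1}, promised constant or balanced."""
            dim = 2 ** n
            state = np.zeros(dim); state[0] = 1.0        # |0...0>
            H = hadamard_n(n)
            state = H @ state                            # uniform superposition
            state = np.array([(-1) ** f(x) for x in range(dim)]) * state   # one oracle query
            state = H @ state
            return "constant" if abs(state[0]) ** 2 > 0.5 else "balanced"

        if __name__ == "__main__":
            print(deutsch_jozsa(lambda x: 1, 3))                       # -> constant
            print(deutsch_jozsa(lambda x: bin(x).count("1") % 2, 3))   # parity -> balanced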

  15. Autism Post-Mortem Neuroinformatic Resource: The Autism Tissue Program (ATP) Informatics Portal

    ERIC Educational Resources Information Center

    Brimacombe, Michael B.; Pickett, Richard; Pickett, Jane

    2007-01-01

    The Autism Tissue Program (ATP) was established to oversee and manage brain donations related to neurological research in autism. The ATP Informatics Portal (www.atpportal.org) is an integrated data access system based on Oracle technology, developed to provide access for researchers to information on this rare tissue resource. It also permits…

  16. Quantum computation with classical light: Implementation of the Deutsch-Jozsa algorithm

    NASA Astrophysics Data System (ADS)

    Perez-Garcia, Benjamin; McLaren, Melanie; Goyal, Sandeep K.; Hernandez-Aranda, Raul I.; Forbes, Andrew; Konrad, Thomas

    2016-05-01

    We propose an optical implementation of the Deutsch-Jozsa Algorithm using classical light in a binary decision-tree scheme. Our approach uses a ring cavity and linear optical devices in order to efficiently query the oracle functional values. In addition, we take advantage of the intrinsic Fourier transforming properties of a lens to read out whether the function given by the oracle is balanced or constant.

  17. Introduction to TETHYS—an interdisciplinary GIS database for studying continental collisions

    NASA Astrophysics Data System (ADS)

    Khan, S. D.; Flower, M. F. J.; Sultan, M. I.; Sandvol, E.

    2006-05-01

    The TETHYS GIS database is being developed as a way to integrate relevant geologic, geophysical, geochemical, geochronologic, and remote sensing data bearing on Tethyan continental plate collisions. The project is predicated on a need for actualistic model 'templates' for interpreting the Earth's geologic record. Because of their time-transgressive character, Tethyan collisions offer 'actualistic' models for features such as continental 'escape', collision-induced upper mantle flow magmatism, and marginal basin opening, associated with modern convergent plate margins. Large integrated geochemical and geophysical databases allow for such models to be tested against the geologic record, leading to a better understanding of continental accretion throughout Earth history. The TETHYS database combines digital topographic and geologic information, remote sensing images, sample-based geochemical, geochronologic, and isotopic data (for pre- and post-collision igneous activity), and data for seismic tomography, shear-wave splitting, space geodesy, and information for plate tectonic reconstructions. Here, we report progress on developing such a database and the tools for manipulating and visualizing integrated 2-, 3-, and 4-d data sets with examples of research applications in progress. Based on an Oracle database system, linked with ArcIMS via ArcSDE, the TETHYS project is an evolving resource for researchers, educators, and others interested in studying the role of plate collisions in the process of continental accretion, and will be accessible as a node of the national Geosciences Cyberinfrastructure Network—GEON via the World-Wide Web and ultra-high speed internet2. Interim partial access to the data and metadata is available at: http://geoinfo.geosc.uh.edu/Tethys/ and http://www.esrs.wmich.edu/tethys.htm. We demonstrate the utility of the TETHYS database in building a framework for lithospheric interactions in continental collision and accretion.

  18. Global Tsunami Database: Adding Geologic Deposits, Proxies, and Tools

    NASA Astrophysics Data System (ADS)

    Brocko, V. R.; Varner, J.

    2007-12-01

    A result of collaboration between NOAA's National Geophysical Data Center (NGDC) and the Cooperative Institute for Research in the Environmental Sciences (CIRES), the Global Tsunami Database includes instrumental records, human observations, and now, information inferred from the geologic record. Deep Ocean Assessment and Reporting of Tsunamis (DART) data, historical reports, and information gleaned from published tsunami deposit research build a multi-faceted view of tsunami hazards and their history around the world. Tsunami history provides clues to what might happen in the future, including frequency of occurrence and maximum wave heights. However, instrumental and written records commonly span too little time to reveal the full range of a region's tsunami hazard. The sedimentary deposits of tsunamis, identified with the aid of modern analogs, increasingly complement instrumental and human observations. By adding the component of tsunamis inferred from the geologic record, the Global Tsunami Database extends the record of tsunamis backward in time. Deposit locations, their estimated age and descriptions of the deposits themselves fill in the tsunami record. Tsunamis inferred from proxies, such as evidence for coseismic subsidence, are included to estimate recurrence intervals, but are flagged to highlight the absence of a physical deposit. Authors may submit their own descriptions and upload digital versions of publications. Users may sort by any populated field, including event, location, region, age of deposit, author, publication type (extract information from peer reviewed publications only, if you wish), grain size, composition, presence/absence of plant material. Users may find tsunami deposit references for a given location, event or author; search for particular properties of tsunami deposits; and even identify potential collaborators. Users may also download public-domain documents. Data and information may be viewed using tools designed to extract and display data from the Oracle database (selection forms, Web Map Services, and Web Feature Services). In addition, the historic tsunami archive (along with related earthquakes and volcanic eruptions) is available in KML (Keyhole Markup Language) format for use with Google Earth and similar geo-viewers.
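
    As an illustration of the kind of sort-and-filter queries described above, the sketch below uses an in-memory SQLite table as a stand-in for the Oracle database; the field names and sample rows are invented for the example and do not reflect the actual NGDC schema or data.

        # Hedged sketch: filter tsunami-deposit records by region and publication type.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE deposits
                        (event TEXT, region TEXT, deposit_age TEXT, author TEXT,
                         publication_type TEXT, grain_size TEXT, plant_material INTEGER)""")
        conn.executemany("INSERT INTO deposits VALUES (?, ?, ?, ?, ?, ?, ?)", [
            ("example event 1", "Region A", "~300 yr BP", "Author X", "peer-reviewed", "fine sand", 1),
            ("example event 2", "Region B", "modern", "Author Y", "peer-reviewed", "medium sand", 0),
        ])

        # e.g. peer-reviewed deposit records for one region, ordered by age
        rows = conn.execute("""SELECT event, deposit_age, author, grain_size
                                 FROM deposits
                                WHERE region = ? AND publication_type = 'peer-reviewed'
                                ORDER BY deposit_age""", ("Region A",)).fetchall()
        print(rows)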

  19. ORACLS- OPTIMAL REGULATOR ALGORITHMS FOR THE CONTROL OF LINEAR SYSTEMS (CDC VERSION)

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.

    1994-01-01

    This control theory design package, called Optimal Regulator Algorithms for the Control of Linear Systems (ORACLS), was developed to aid in the design of controllers and optimal filters for systems which can be modeled by linear, time-invariant differential and difference equations. Optimal linear quadratic regulator theory, currently referred to as the Linear-Quadratic-Gaussian (LQG) problem, has become the most widely accepted method of determining optimal control policy. Within this theory, the infinite duration time-invariant problems, which lead to constant gain feedback control laws and constant Kalman-Bucy filter gains for reconstruction of the system state, exhibit high tractability and potential ease of implementation. A variety of new and efficient methods in the field of numerical linear algebra have been combined into the ORACLS program, which provides for the solution to time-invariant continuous or discrete LQG problems. The ORACLS package is particularly attractive to the control system designer because it provides a rigorous tool for dealing with multi-input and multi-output dynamic systems in both continuous and discrete form. The ORACLS programming system is a collection of subroutines which can be used to formulate, manipulate, and solve various LQG design problems. The ORACLS program is constructed in a manner which permits the user to maintain considerable flexibility at each operational state. This flexibility is accomplished by providing primary operations, analysis of linear time-invariant systems, and control synthesis based on LQG methodology. The input-output routines handle the reading and writing of numerical matrices, printing heading information, and accumulating output information. The basic vector-matrix operations include addition, subtraction, multiplication, equation, norm construction, tracing, transposition, scaling, juxtaposition, and construction of null and identity matrices. The analysis routines provide for the following computations: the eigenvalues and eigenvectors of real matrices; the relative stability of a given matrix; matrix factorization; the solution of linear constant coefficient vector-matrix algebraic equations; the controllability properties of a linear time-invariant system; the steady-state covariance matrix of an open-loop stable system forced by white noise; and the transient response of continuous linear time-invariant systems. The control law design routines of ORACLS implement some of the more common techniques of time-invariant LQG methodology. For the finite-duration optimal linear regulator problem with noise-free measurements, continuous dynamics, and integral performance index, a routine is provided which implements the negative exponential method for finding both the transient and steady-state solutions to the matrix Riccati equation. For the discrete version of this problem, the method of backwards differencing is applied to find the solutions to the discrete Riccati equation. A routine is also included to solve the steady-state Riccati equation by the Newton algorithms described by Klein, for continuous problems, and by Hewer, for discrete problems. Another routine calculates the prefilter gain to eliminate control state cross-product terms in the quadratic performance index and the weighting matrices for the sampled data optimal linear regulator problem. 
For cases with measurement noise, duality theory and optimal regulator algorithms are used to calculate solutions to the continuous and discrete Kalman-Bucy filter problems. Finally, routines are included to implement the continuous and discrete forms of the explicit (model-in-the-system) and implicit (model-in-the-performance-index) model following theory. These routines generate linear control laws which cause the output of a dynamic time-invariant system to track the output of a prescribed model. In order to apply ORACLS, the user must write an executive (driver) program which inputs the problem coefficients, formulates and selects the routines to be used to solve the problem, and specifies the desired output. There are three versions of ORACLS source code available for implementation: CDC, IBM, and DEC. The CDC version has been implemented on a CDC 6000 series computer with a central memory of approximately 13K (octal) of 60 bit words. The CDC version is written in FORTRAN IV, was developed in 1978, and last updated in 1989. The IBM version has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The IBM version is written in FORTRAN IV and was generated in 1981. The DEC version has been implemented on a VAX series computer operating under VMS. The VAX version is written in FORTRAN 77 and was generated in 1986.
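
    As a point of reference for what the steady-state regulator routines compute, the short Python sketch below solves the continuous algebraic Riccati equation for a toy double-integrator plant with SciPy and forms the optimal feedback gain; this is a modern stand-in for illustration, not ORACLS code.

        # Hedged sketch: steady-state LQR gain for x_dot = A x + B u with quadratic cost
        # integral of (x'Qx + u'Ru). SciPy stands in for the ORACLS Riccati routines.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0],
                      [0.0, 0.0]])        # double-integrator plant
        B = np.array([[0.0],
                      [1.0]])
        Q = np.eye(2)                     # state weighting
        R = np.array([[1.0]])             # control weighting

        P = solve_continuous_are(A, B, Q, R)   # steady-state Riccati solution
        K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain, u = -K x

        print("K =", K)
        print("closed-loop poles:", np.linalg.eigvals(A - B @ K))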

  20. ORACLS- OPTIMAL REGULATOR ALGORITHMS FOR THE CONTROL OF LINEAR SYSTEMS (DEC VAX VERSION)

    NASA Technical Reports Server (NTRS)

    Frisch, H.

    1994-01-01

    This control theory design package, called Optimal Regulator Algorithms for the Control of Linear Systems (ORACLS), was developed to aid in the design of controllers and optimal filters for systems which can be modeled by linear, time-invariant differential and difference equations. Optimal linear quadratic regulator theory, currently referred to as the Linear-Quadratic-Gaussian (LQG) problem, has become the most widely accepted method of determining optimal control policy. Within this theory, the infinite duration time-invariant problems, which lead to constant gain feedback control laws and constant Kalman-Bucy filter gains for reconstruction of the system state, exhibit high tractability and potential ease of implementation. A variety of new and efficient methods in the field of numerical linear algebra have been combined into the ORACLS program, which provides for the solution to time-invariant continuous or discrete LQG problems. The ORACLS package is particularly attractive to the control system designer because it provides a rigorous tool for dealing with multi-input and multi-output dynamic systems in both continuous and discrete form. The ORACLS programming system is a collection of subroutines which can be used to formulate, manipulate, and solve various LQG design problems. The ORACLS program is constructed in a manner which permits the user to maintain considerable flexibility at each operational state. This flexibility is accomplished by providing primary operations, analysis of linear time-invariant systems, and control synthesis based on LQG methodology. The input-output routines handle the reading and writing of numerical matrices, printing heading information, and accumulating output information. The basic vector-matrix operations include addition, subtraction, multiplication, equation, norm construction, tracing, transposition, scaling, juxtaposition, and construction of null and identity matrices. The analysis routines provide for the following computations: the eigenvalues and eigenvectors of real matrices; the relative stability of a given matrix; matrix factorization; the solution of linear constant coefficient vector-matrix algebraic equations; the controllability properties of a linear time-invariant system; the steady-state covariance matrix of an open-loop stable system forced by white noise; and the transient response of continuous linear time-invariant systems. The control law design routines of ORACLS implement some of the more common techniques of time-invariant LQG methodology. For the finite-duration optimal linear regulator problem with noise-free measurements, continuous dynamics, and integral performance index, a routine is provided which implements the negative exponential method for finding both the transient and steady-state solutions to the matrix Riccati equation. For the discrete version of this problem, the method of backwards differencing is applied to find the solutions to the discrete Riccati equation. A routine is also included to solve the steady-state Riccati equation by the Newton algorithms described by Klein, for continuous problems, and by Hewer, for discrete problems. Another routine calculates the prefilter gain to eliminate control state cross-product terms in the quadratic performance index and the weighting matrices for the sampled data optimal linear regulator problem. 
For cases with measurement noise, duality theory and optimal regulator algorithms are used to calculate solutions to the continuous and discrete Kalman-Bucy filter problems. Finally, routines are included to implement the continuous and discrete forms of the explicit (model-in-the-system) and implicit (model-in-the-performance-index) model following theory. These routines generate linear control laws which cause the output of a dynamic time-invariant system to track the output of a prescribed model. In order to apply ORACLS, the user must write an executive (driver) program which inputs the problem coefficients, formulates and selects the routines to be used to solve the problem, and specifies the desired output. There are three versions of ORACLS source code available for implementation: CDC, IBM, and DEC. The CDC version has been implemented on a CDC 6000 series computer with a central memory of approximately 13K (octal) of 60 bit words. The CDC version is written in FORTRAN IV, was developed in 1978, and last updated in 1986. The IBM version has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The IBM version is written in FORTRAN IV and was generated in 1981. The DEC version has been implemented on a VAX series computer operating under VMS. The VAX version is written in FORTRAN 77 and was generated in 1986.

  1. The CMS Data Management System

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Guo, Y.; Kuznetsov, V.; Magini, N.; Wildish, T.

    2014-06-01

    The data management elements in CMS are scalable, modular, and designed to work together. The main components are PhEDEx, the data transfer and location system; the Dataset Bookkeeping Service (DBS), a metadata catalog; and the Data Aggregation Service (DAS), designed to aggregate views and provide them to users and services. Tens of thousands of samples have been cataloged and petabytes of data have been moved since the run began. The modular system has allowed the optimal use of appropriate underlying technologies. In this contribution we will discuss the use of both Oracle and NoSQL databases to implement the data management elements as well as the individual architectures chosen. We will discuss how the data management system functioned during the first run, and what improvements are planned in preparation for 2015.

  2. The CMS dataset bookkeeping service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afaq, Anzar (Fermilab); Dolgert, Andrew

    2007-10-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command line tool, and a Discovery web page interface. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  3. Managing Attribute-Value Clinical Trials Data Using the ACT/DB Client-Server Database System

    PubMed Central

    Nadkarni, Prakash M.; Brandt, Cynthia; Frawley, Sandra; Sayward, Frederick G.; Einbinder, Robin; Zelterman, Daniel; Schacter, Lee; Miller, Perry L.

    1998-01-01

    ACT/DB is a client-server database application for storing clinical trials and outcomes data, which is currently undergoing initial pilot use. It stores most of its data in entity-attribute-value form. Such data are segregated according to data type to allow indexing by value when possible, and binary large object data are managed in the same way as other data. ACT/DB lets an investigator design a study rapidly by defining the parameters (or attributes) that are to be gathered, as well as their logical grouping for purposes of display and data entry. ACT/DB generates customizable data entry. The data can be viewed through several standard reports as well as exported as text to external analysis programs. ACT/DB is designed to encourage reuse of parameters across multiple studies and has facilities for dictionary search and maintenance. It uses a Microsoft Access client running on Windows 95 machines, which communicates with an Oracle server running on a UNIX platform. ACT/DB is being used to manage the data for seven studies in its initial deployment. PMID:9524347
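
    The entity-attribute-value layout that ACT/DB relies on can be sketched in a few lines; the example below uses an in-memory SQLite database as a stand-in for the Oracle server, and the table and column names are illustrative rather than the actual ACT/DB schema.

        # Hedged sketch of an entity-attribute-value (EAV) layout: adding a new study
        # parameter needs only a new 'attribute' row, not a schema change.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE attribute (id INTEGER PRIMARY KEY, name TEXT, datatype TEXT);
            CREATE TABLE fact      (patient_id INTEGER, attribute_id INTEGER, value TEXT);
        """)
        conn.executemany("INSERT INTO attribute VALUES (?, ?, ?)",
                         [(1, "systolic_bp", "int"), (2, "adverse_event", "text")])
        conn.executemany("INSERT INTO fact VALUES (?, ?, ?)",
                         [(101, 1, "128"), (101, 2, "headache"), (102, 1, "141")])

        # Pivot-style read-back: one row per (patient, attribute, value) triple.
        rows = conn.execute("""
            SELECT f.patient_id, a.name, f.value
              FROM fact f JOIN attribute a ON a.id = f.attribute_id
             ORDER BY f.patient_id""").fetchall()
        for r in rows:
            print(r)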

  4. Quantum Approaches to Logic Circuit Synthesis and Testing

    DTIC Science & Technology

    2006-06-01

    Fragments from the report's list of tables: simulating Grover's algorithm with n qubits using Octave (Oct), MATLAB (MAT), Blitz++ (B++) and QuIDDPro (QP) with Oracle Design 1, and with Oracle Design 2 (Table 4); the number of Grover iterations (Table 5). The report also aims to accurately characterize the effects of gate and systematic error in a quantum circuit that generates remotely entangled EPR pairs.

  5. The RISC-V Instruction Set Manual Volume 2: Privileged Architecture Version 1.7

    DTIC Science & Technology

    2015-05-09

    Fragments of the acknowledgments: ...DIG07-10227). Additional support came from Par Lab affiliates Nokia, NVIDIA, Oracle, and Samsung. Project Isis: DoE Award DE-SC0003624. ASPIRE: ...STARnet center funded by the Semiconductor Research Corporation; additional support from ASPIRE industrial sponsor Intel and ASPIRE affiliates Google, Huawei, Nokia, NVIDIA, Oracle, and Samsung. The content of this paper does not necessarily reflect the position or the policy of the US

  6. Quantum random oracle model for quantum digital signature

    NASA Astrophysics Data System (ADS)

    Shang, Tao; Lei, Qi; Liu, Jianwei

    2016-10-01

    The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.

  7. Automatic Summarization as a Combinatorial Optimization Problem

    NASA Astrophysics Data System (ADS)

    Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki

    We derived the oracle summary with the highest ROUGE score that can be achieved by integrating sentence extraction with sentence compression from the reference abstract. The analysis of the oracle revealed that summarization systems have to assign an appropriate compression rate to each sentence in the document. In accordance with this observation, this paper proposes a summarization method as a combinatorial optimization: selecting the set of sentences that maximizes the sum of the sentence scores from a pool consisting of the sentences at various compression rates, subject to a length constraint. The score of a sentence is defined by its compression rate, content words and positional information. The parameters for the compression rates and positional information are optimized by minimizing the loss between the scores of the oracles and those of the candidates. The results obtained from the TSC-2 corpus showed that our method outperformed the previous systems with statistical significance.
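
    The selection step described above is essentially a grouped knapsack: pick at most one compressed version of each sentence so that the summed score is maximal under a length budget. A minimal Python sketch of that optimization, with invented toy scores and lengths, is shown below; it is a simplification of the paper's formulation.

        # Hedged sketch: choose at most one candidate per source sentence, maximizing the
        # total score subject to a length budget (grouped knapsack by dynamic programming).
        def select_summary(groups, budget):
            """groups: list of lists of (score, length, text) candidates per sentence."""
            best = {0: (0.0, [])}                    # used length -> (score, chosen texts)
            for group in groups:
                new_best = dict(best)                # skipping the sentence is allowed
                for used, (score, chosen) in best.items():
                    for s, l, text in group:
                        u = used + l
                        if u <= budget and score + s > new_best.get(u, (-1.0, None))[0]:
                            new_best[u] = (score + s, chosen + [text])
                best = new_best
            return max(best.values())

        if __name__ == "__main__":
            groups = [
                [(3.0, 12, "Full sentence A."), (2.2, 7, "Compressed A.")],
                [(1.5, 9, "Full sentence B."), (1.1, 5, "Compressed B.")],
            ]
            score, summary = select_summary(groups, budget=15)
            print(score, summary)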

  8. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Tingting; Ruan, Dan

    2016-02-01

    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.

  9. Origin, extent and health impacts of air pollution in Sub-Saharan Africa

    NASA Astrophysics Data System (ADS)

    Bauer, S.; Im, U.; Mezuman, K.

    2017-12-01

    Southern Africa produces about a third of the Earth's biomass burning aerosol particles, yet the fate of these particles, their origin, chemical composition and their influence on regional and global climate are poorly understood. These research questions motivated the NASA field campaign ORACLES (ObseRvations of Aerosols above CLouds and their intEractionS). ORACLES is a five-year investigation with three Intensive Observation Periods (IOP) designed to study key processes that determine the climate impacts of African biomass burning aerosols. The first IOP was carried out in 2016. The main focus of the field campaign is aerosol-cloud interactions; however, in our first study in this area we investigate the aerosol plume itself: its origin, extent and its resulting health impacts. Here we discuss results using the global mesoscale model NASA GEOS-5 in conjunction with the NASA GISS-E2 climate model to investigate climate and health impacts that are directly related to the anthropogenic fire activities in Sub-Saharan Africa. The focus will be on the Southern Hemisphere winter-season biomass burning events, their contribution to Sub-Saharan air pollution relative to other air-pollution sources, and the resulting premature mortality.

  10. High Capacity Single Table Performance Design Using Partitioning in Oracle or PostgreSQL

    DTIC Science & Technology

    2012-03-01

    Fragments of the report's front matter and text: table of contents entries (Key Performance Indicators (KPIs); Conclusion; List of Symbols, Abbreviations, and Acronyms; Distribution List); list of figures (Figure 1: Oracle...; Figure 7: Time to seek and return one record); Section 4, Additional Key Performance Indicators (KPIs): "In addition to pure response time, there are other..."; and acronym definitions (ASM: Automatic Storage Management; CPU: central processing unit; I/O: input/output; KPIs: key performance indicators; OS: operating system).

  11. Communication Efficient Gaussian Elimination with Partial Pivoting using a Shape Morphing Data Layout

    DTIC Science & Technology

    2013-02-21

    Support comes from ParLab affiliates National Instruments, Nokia, NVIDIA, Oracle and Samsung, as well as MathWorks. Research is also supported by DOE grants DE-SC0004938 and DE-SC0005136...

  12. Joint Force Quarterly. Number 20, Autumn/Winter 1998-99

    DTIC Science & Technology

    1999-03-01

    harm Sparta?” The oracle answered, “Yes, luxury.” To the same question about the Armed Forces, the oracle might answer, “Yes, bureaucracy.” Ever since...Forces today as a volunteer, they must be attracted to the opportunities service can provide. Wearing the uniform has never been about money or...vided full consumer price index cost-of-living adjustments like their predecessors. This variance in retirement programs diminishes the value of ca

  13. Central Colorado Assessment Project (CCAP)-Geochemical data for rock, sediment, soil, and concentrate sample media

    USGS Publications Warehouse

    Granitto, Matthew; DeWitt, Ed H.; Klein, Terry L.

    2010-01-01

    This database was initiated, designed, and populated to collect and integrate geochemical data from central Colorado in order to facilitate geologic mapping, petrologic studies, mineral resource assessment, definition of geochemical baseline values and statistics, environmental impact assessment, and medical geology. The Microsoft Access database serves as a geochemical data warehouse in support of the Central Colorado Assessment Project (CCAP) and contains data tables describing historical and new quantitative and qualitative geochemical analyses determined by 70 analytical laboratory and field methods for 47,478 rock, sediment, soil, and heavy-mineral concentrate samples. Most samples were collected by U.S. Geological Survey (USGS) personnel and analyzed either in the analytical laboratories of the USGS or by contract with commercial analytical laboratories. These data represent analyses of samples collected as part of various USGS programs and projects. In addition, geochemical data from 7,470 sediment and soil samples collected and analyzed under the Atomic Energy Commission National Uranium Resource Evaluation (NURE) Hydrogeochemical and Stream Sediment Reconnaissance (HSSR) program (henceforth called NURE) have been included in this database. In addition to data from 2,377 samples collected and analyzed under CCAP, this dataset includes archived geochemical data originally entered into the in-house Rock Analysis Storage System (RASS) database (used by the USGS from the mid-1960s through the late 1980s) and the in-house PLUTO database (used by the USGS from the mid-1970s through the mid-1990s). All of these data are maintained in the Oracle-based National Geochemical Database (NGDB). Retrievals from the NGDB and from the NURE database were used to generate most of this dataset. In addition, USGS data that have been excluded previously from the NGDB because the data predate earliest USGS geochemical databases, or were once excluded for programmatic reasons, have been included in the CCAP Geochemical Database and are planned to be added to the NGDB.

  14. ATLAS EventIndex general dataflow and monitoring infrastructure

    NASA Astrophysics Data System (ADS)

    Fernández Casaní, Á.; Barberis, D.; Favareto, A.; García Montoro, C.; González de la Hoz, S.; Hřivnáč, J.; Prokoshin, F.; Salt, J.; Sánchez, J.; Többicke, R.; Yuan, R.; ATLAS Collaboration

    2017-10-01

    The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing them in a central Hadoop infrastructure at CERN. A subset of this information is copied to an Oracle relational database for fast dataset discovery, event picking, cross-checks with other ATLAS systems and checks for event duplication. The system design and its optimization serve event picking from requests of a few events up to scales of tens of thousands of events, and in addition, data consistency checks are performed for large production campaigns. Detecting duplicate events within the scope of physics collections has recently arisen as an important use case. This paper describes the general architecture of the project and the data flow and operation issues, which are addressed by recent developments to improve the throughput of the overall system. In this direction, the data collection system is reducing the usage of the messaging infrastructure to overcome the performance shortcomings detected during production peaks; an object storage approach is instead used to convey the event index information, with messages to signal their location and status. Recent changes in the Producer/Consumer architecture are also presented in detail, as well as the monitoring infrastructure.

  15. California Fault Parameters for the National Seismic Hazard Maps and Working Group on California Earthquake Probabilities 2007

    USGS Publications Warehouse

    Wills, Chris J.; Weldon, Ray J.; Bryant, W.A.

    2008-01-01

    This report describes development of fault parameters for the 2007 update of the National Seismic Hazard Maps and the Working Group on California Earthquake Probabilities (WGCEP, 2007). These reference parameters are contained within a database intended to be a source of values for use by scientists interested in producing either seismic hazard or deformation models to better understand the current seismic hazards in California. These parameters include descriptions of the geometry and rates of movements of faults throughout the state. These values are intended to provide a starting point for development of more sophisticated deformation models which include known rates of movement on faults as well as geodetic measurements of crustal movement and the rates of movements of the tectonic plates. The values will be used in developing the next generation of the time-independent National Seismic Hazard Maps, and the time-dependent seismic hazard calculations being developed for the WGCEP. Due to the multiple uses of this information, development of these parameters has been coordinated between USGS, CGS and SCEC. SCEC provided the database development and editing tools, in consultation with USGS, Golden. This database has been implemented in Oracle and supports electronic access (e.g., for on-the-fly access). A GUI-based application has also been developed to aid in populating the database. Both the continually updated 'living' version of this database, as well as any locked-down official releases (e.g., used in a published model for calculating earthquake probabilities or seismic shaking hazards) are part of the USGS Quaternary Fault and Fold Database http://earthquake.usgs.gov/regional/qfaults/ . CGS has been primarily responsible for updating and editing of the fault parameters, with extensive input from USGS and SCEC scientists.

  16. Open Scenario Study, Phase II Report: Assessment and Development of Approaches for Satisfying Unclassified Scenario Needs

    DTIC Science & Technology

    2010-01-01

    interface, another providing the application logic (a program used to manipulate the data), and a server running Microsoft SQL Server or Oracle RDBMS... Oracle) • MySQL (Open Source) • Other. What application server software will be needed? • Application Server • CGI PHP/Perl (Open Source)...are used throughout DoD and serve a variety of functions. While DoD has a codified and institutionalized process for the development of a common set

  17. Control Systems

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Boeing Commercial Airplane Company's Flight Control Department engineers relied on a Langley-developed software package known as ORACLS to develop an advanced control synthesis package for both continuous and discrete control systems. The package was used by Boeing for computerized analysis of new system designs. Resulting applications include a multiple input/output control system for the terrain-following navigation equipment of the Air Force's B-1 bomber, and another for controlling in-flight changes of wing camber on an experimental airplane. ORACLS is one of 1,300 computer programs available from COSMIC.

  18. Equivalence of Szegedy's and coined quantum walks

    NASA Astrophysics Data System (ADS)

    Wong, Thomas G.

    2017-09-01

    Szegedy's quantum walk is a quantization of a classical random walk or Markov chain, where the walk occurs on the edges of the bipartite double cover of the original graph. To search, one can simply quantize a Markov chain with absorbing vertices. Recently, Santos proposed two alternative search algorithms that instead utilize the sign-flip oracle in Grover's algorithm rather than absorbing vertices. In this paper, we show that these two algorithms are exactly equivalent to two algorithms involving coined quantum walks, which are walks on the vertices of the original graph with an internal degree of freedom. The first scheme is equivalent to a coined quantum walk with one walk step per query of Grover's oracle, and the second is equivalent to a coined quantum walk with two walk steps per query of Grover's oracle. These equivalences lie outside the previously known equivalence of Szegedy's quantum walk with absorbing vertices and the coined quantum walk with the negative identity operator as the coin for marked vertices, whose precise relationships we also investigate.

  19. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    PubMed

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.

  20. A general framework to learn surrogate relevance criterion for atlas based image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Tingting; Ruan, Dan

    2016-09-01

    Multi-atlas based image segmentation sees great opportunities in the big data era but also faces unprecedented challenges in identifying positive contributors from extensive heterogeneous data. To assess data relevance, image similarity criteria based on various image features widely serve as surrogates for the inaccessible geometric agreement criteria. This paper proposes a general framework to learn image based surrogate relevance criteria to better mimic the behaviors of segmentation based oracle geometric relevance. The validity of its general rationale is verified in the specific context of fusion set selection for image segmentation. More specifically, we first present a unified formulation for surrogate relevance criteria and model the neighborhood relationship among atlases based on the oracle relevance knowledge. Surrogates are then trained to be small for geometrically relevant neighbors and large for irrelevant remotes to the given targets. The proposed surrogate learning framework is verified in corpus callosum segmentation. The learned surrogates demonstrate superiority in inferring the underlying oracle value and selecting relevant fusion set, compared to benchmark surrogates.

  1. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property

    PubMed Central

    Storlie, Curtis B.; Bondell, Howard D.; Reich, Brian J.; Zhang, Hao Helen

    2010-01-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting. PMID:21603586

  2. Test-state approach to the quantum search problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sehrawat, Arun; Nguyen, Le Huy (Graduate School for Integrative Sciences and Engineering, National University of Singapore)

    2011-05-15

    The search for 'a quantum needle in a quantum haystack' is a metaphor for the problem of finding out which one of a permissible set of unitary mappings - the oracles - is implemented by a given black box. Grover's algorithm solves this problem with quadratic speedup as compared with the analogous search for 'a classical needle in a classical haystack'. Since the outcome of Grover's algorithm is probabilistic - it gives the correct answer with high probability, not with certainty - the answer requires verification. For this purpose we introduce specific test states, one for each oracle. These test states can also be used to realize 'a classical search for the quantum needle' which is deterministic - it always gives a definite answer after a finite number of steps - and 3.41 times as fast as the purely classical search. Since the test-state search and Grover's algorithm look for the same quantum needle, the average number of oracle queries of the test-state search is the classical benchmark for Grover's algorithm.
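
    The sketch below simulates standard Grover iterations for a single marked item in plain NumPy, showing the high-but-not-certain success probability that motivates the verification step discussed above; it illustrates Grover's algorithm only, not the test-state construction itself.

        # Hedged sketch: Grover search over N = 2**n items with one marked item.
        import numpy as np

        def grover(n, marked):
            N = 2 ** n
            state = np.full(N, 1 / np.sqrt(N))               # uniform superposition
            iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
            for _ in range(iterations):
                state[marked] *= -1                          # oracle: sign flip on the needle
                state = 2 * state.mean() - state             # inversion about the mean
            return iterations, abs(state[marked]) ** 2

        if __name__ == "__main__":
            queries, p = grover(n=8, marked=37)
            print(f"{queries} oracle queries, success probability {p:.4f}")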

  3. Surrogate oracles, generalized dependency and simpler models

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1990-01-01

    Software reliability models require the sequence of interfailure times from the debugging process as input. It was previously illustrated that using data from replicated debugging could greatly improve reliability predictions. However, inexpensive replication of the debugging process requires the existence of a cheap, fast error detector. Laboratory experiments can be designed around a gold version which is used as an oracle or around an n-version error detector. Unfortunately, software developers cannot be expected to have an oracle or to bear the expense of n-versions. A generic technique is being investigated for approximating replicated data by using the partially debugged software as a difference detector. It is believed that the failure rate of each fault has significant dependence on the presence or absence of other faults. Thus, in order to discuss a failure rate for a known fault, the presence or absence of each of the other known faults needs to be specified. Also of interest are simpler models which use shorter input sequences without sacrificing accuracy; in fact, a possible gain in performance is conjectured. To investigate these propositions, NASA computers running LIC (RTI) versions are used to generate data. This data will be used to label the debugging graph associated with each version. These labeled graphs will be used to test the utility of a surrogate oracle, to analyze the dependent nature of fault failure rates and to explore the feasibility of reliability models which use the data of only the most recent failures.

  4. Has publication of the results of the ORACLE Children Study changed practice in the UK?

    PubMed

    Kenyon, S; Pike, K; Jones, D; Brocklehurst, P; Marlow, N; Salt, A; Taylor, D

    2010-10-01

    Objective: To investigate whether publication of the results of the ORACLE Children's Study, a 7-year follow-up of the ORACLE trial, changed practice with regard to the routine prescription of antibiotics to women with preterm rupture of membranes or spontaneous preterm labour (intact membranes). Design: A comparative questionnaire survey of clinical practice in November 2007 (before publication) and March 2009 (after publication). Population: Lead obstetricians for labour wards of all maternity units in the UK. Methods: Self-administered questionnaires requested information about the routine prescription of antibiotics to women with either preterm rupture of membranes or spontaneous preterm labour (intact membranes). Main outcome measure: Change in practice for prescription of antibiotics. Results: The response rate was 166/214 (78%) in 2007 and 158/209 (76%) in 2009. In total, 120 maternity units responded on both occasions. For women with preterm rupture of membranes, 162/214 (98%) in 2007 and 151/158 (96%) in 2009 maternity units reported that they prescribed antibiotics, with the majority using erythromycin (98%). For women with spontaneous preterm labour (intact membranes), 35/166 (21%) in 2007 and 25/158 (16%) in 2009 maternity units reported that they routinely prescribed antibiotics. The findings from units who responded on both occasions are similar. Conclusions: There has been little change in the reported prescription of antibiotics to women with either preterm rupture of membranes or spontaneous preterm labour following publication of the ORACLE Children's Study. This suggests that current practice may require updated guidance.

  5. Outcomes at 7 years for babies who developed neonatal necrotising enterocolitis: the ORACLE Children Study.

    PubMed

    Pike, Katie; Brocklehurst, Peter; Jones, David; Kenyon, Sarah; Salt, Alison; Taylor, David; Marlow, Neil

    2012-09-01

    Within the ORACLE Children Study Cohort, the authors have evaluated the long-term consequences of a diagnosis of confirmed or suspected neonatal necrotising enterocolitis (NEC) at the age of 7 years. Outcomes were assessed using a parental questionnaire, including the Health Utilities Index (HUI-3) to assess functional impairment, and specific medical and behavioural outcomes. Educational outcomes for children in England were explored using national standardised tests. Multiple logistic regression was used to explore independent associates of NEC within the cohort. The authors obtained data for 119 (77%) of 157 children following proven or suspected NEC and compared their outcomes with those of the remaining 6496 children. NEC was associated with an increase in risk of neonatal death (OR 14.6 (95% CI 10.4 to 20.6)). At 7 years, NEC conferred an increased risk of all grades of impairment. Adjusting for confounders, risks persisted for any HUI-3-defined functional impairment (adjusted OR 1.55 (1.05, 2.29)), particularly mild impairment (adjusted OR 1.61 (1.03, 2.53)), both in all NEC children and in those with proven NEC, which appeared to be independent. No behavioural or educational associations were confirmed. Following NEC, children were more likely to suffer bowel problems than non-NEC children (adjusted OR 3.96 (2.06, 7.61)). The ORACLE Children Study provided the opportunity for the largest evaluation of school-age outcome following neonatal NEC and demonstrates significant long-term consequences for both gut function (presence of stoma, admission for bowel problems and continuing medical care for gut-related problems) and motor, sensory and cognitive outcomes as measured using HUI-3.

  6. A first evaluation of a pedagogical network for medical students at the University Hospital of Rennes.

    PubMed

    Fresnel, A; Jarno, P; Burgun, A; Delamarre, D; Denier, P; Cleret, M; Courtin, C; Seka, L P; Pouliquen, B; Cléran, L; Riou, C; Leduff, F; Lesaux, H; Duvauferrier, R; Le Beux, P

    1998-01-01

    A pedagogical network has been developed at the University Hospital of Rennes since 1996. The challenge is to give medical information and informatics tools to all medical students in the clinical wards of the University Hospital. At first, nine wards were connected to the medical school server, which is linked to the Internet. The client software consists of electronic mail and the Netscape WWW browser on Macintosh computers. The server software is set up on a Unix SUN machine providing a local homepage with selected pedagogical resources. These documents are stored in an ORACLE DBMS database and queries can be made by specialty, author or disease. The students can access a set of interactive teaching programs or electronic textbooks and can explore the Internet through the library information system and search engines. The teachers can send URLs and indexing information for pedagogical documents and can produce clinical cases; the database updating will be done by the users. This experience of using Web tools generated enthusiasm when we first introduced it to students. The evaluation shows that if the students can use this training early on, they will adapt the resources of the Internet to their own needs.

  7. ORACLS: A system for linear-quadratic-Gaussian control law design

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.

    1978-01-01

    A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.

  8. Designing the Search Service for Enterprise Portal based on Oracle Universal Content Management

    NASA Astrophysics Data System (ADS)

    Bauer, K. S.; Kuznetsov, D. Y.; Pominov, A. D.

    2017-01-01

    An Enterprise Portal is an important part of an organization's informative and innovative space. The portal provides collaboration between employees and the organization. This article gives valuable background on Enterprise Portal technologies. The paper presents the integration of Oracle WebCenter Portal and the UCM Server in detail. The focus is on tools for the Enterprise Portal and on the Search Service in particular. The paper also presents several UML diagrams describing the use cases for the Search Service and the main components of this application.

  9. Neural Network (NN) retrievals of Stratocumulus cloud properties using multi-angle polarimetric observations during ORACLES

    NASA Astrophysics Data System (ADS)

    Segal-Rosenhaimer, M.; Knobelspiesse, K. D.; Redemann, J.; Cairns, B.; Alexandrov, M. D.

    2016-12-01

    The ORACLES (ObseRvations of Aerosols above CLouds and their intEractionS) campaign is taking place in the South-East Atlantic during the austral spring for three consecutive years from 2016 to 2018. The study area encompasses one of the Earth's three semi-permanent subtropical Stratocumulus (Sc) cloud decks, and experiences very large aerosol optical depths, mainly from biomass burning originating in Africa. Over time, cloud optical depth (COD), lifetime and cloud microphysics (number concentration, effective radii Reff and precipitation) are expected to be influenced by indirect aerosol effects. These changes play a key role in the energetic balance of the region, and are part of the core investigation objectives of the ORACLES campaign, which acquires measurements of clean and polluted scenes of above-cloud aerosols (ACA). Simultaneous retrievals of aerosol and cloud optical properties are being developed (e.g. MODIS, OMI), but remain challenging, especially for passive, single-viewing-angle instruments. By comparison, multiangle polarimetric instruments like RSP (Research Scanning Polarimeter) show promise for detection and quantification of ACA; however, there are no operational retrieval algorithms available yet. Here we describe a new algorithm to retrieve cloud and aerosol optical properties from observations by RSP flown on the ER-2 and P-3 during the 2016 ORACLES campaign. The algorithm is based on training a NN, and is intended to retrieve aerosol and cloud properties simultaneously. However, the first step was to establish the retrieval scheme for low-level Sc cloud optical properties. The NN training was based on simulated RSP total and polarized radiances for a range of COD, Reff, and effective variances, spanning 7 wavelength bands and 152 viewing zenith angles. Random and correlated noise were added to the simulations to achieve a more realistic representation of the signals. Before introducing the input variables to the network, the signals are projected onto a principal component plane that retains the maximal signal information but minimizes the noise contribution. We will discuss parameter choices for the network and present preliminary results of cloud retrievals from ORACLES, compared with the standard RSP low-level cloud retrieval method that has been validated against in situ observations.
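
    A toy version of that retrieval chain (principal-component projection followed by a small neural network regressing COD and Reff) can be sketched with scikit-learn as below. The synthetic forward model and all parameter values are invented stand-ins; they are not the RSP radiative-transfer simulations used for the actual training.

        # Hedged toy sketch: PCA projection of noisy multi-angle "radiances", then an
        # MLP regressing cloud optical depth (COD) and effective radius (Reff).
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_samples, n_channels = 5000, 7 * 152                # 7 bands x 152 viewing angles

        cod = rng.uniform(2, 40, n_samples)                  # synthetic "truth"
        reff = rng.uniform(5, 25, n_samples)
        angles = np.linspace(-60, 60, n_channels)
        # Invented forward model standing in for simulated polarized radiances
        signal = (np.log(cod)[:, None] * np.cos(np.radians(angles))
                  + 0.05 * reff[:, None] * np.sin(np.radians(3 * angles)))
        signal += rng.normal(0, 0.02, signal.shape)          # measurement noise

        X = PCA(n_components=20).fit_transform(signal)       # keep signal, shed noise
        y = np.column_stack([cod, reff])
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
        net.fit(X_tr, y_tr)
        print("R^2 on held-out samples:", net.score(X_te, y_te))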

  10. Design and Development of a Web-Based Self-Monitoring System to Support Wellness Coaching.

    PubMed

    Zarei, Reza; Kuo, Alex

    2017-01-01

    We analyzed, designed and deployed a web-based self-monitoring system to support wellness coaching. A wellness coach can plan clients' exercise and diet through the system and is able to monitor the changes in body dimensions and body composition that the client reports. The system can also visualize the client's data in the form of graphs for both the client and the coach. Both parties can also communicate through the messaging feature embedded in the application. A reminder feature is also incorporated and sends reminder messages to clients when their reporting is due. The web-based self-monitoring application uses Oracle 11g XE as the backend database and Application Express 4.2 as the user interface development tool. The system allows users to access, update and modify data through a web browser anytime, anywhere, and on any device.

  11. Processing of the WLCG monitoring data using NoSQL

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.

  12. PathMAPA: a tool for displaying gene expression and performing statistical tests on metabolic pathways at multiple levels for Arabidopsis.

    PubMed

    Pan, Deyun; Sun, Ning; Cheung, Kei-Hoi; Guan, Zhong; Ma, Ligeng; Holford, Matthew; Deng, Xingwang; Zhao, Hongyu

    2003-11-07

    To date, many genomic and pathway-related tools and databases have been developed to analyze microarray data. In published web-based applications to date, however, complex pathways have been displayed with static image files that may not be up-to-date or are time-consuming to rebuild. In addition, gene expression analyses focus on individual probes and genes with little or no consideration of pathways. These approaches reveal little information about pathways that are key to a full understanding of the building blocks of biological systems. Therefore, there is a need to provide useful tools that can generate pathways without manually building images and allow gene expression data to be integrated and analyzed at pathway levels for such experimental organisms as Arabidopsis. We have developed PathMAPA, a web-based application written in Java that can be easily accessed over the Internet. An Oracle database is used to store, query, and manipulate the large amounts of data that are involved. PathMAPA allows its users to (i) upload and populate microarray data into a database; (ii) integrate gene expression with enzymes of the pathways; (iii) generate pathway diagrams without building image files manually; (iv) visualize gene expressions for each pathway at enzyme, locus, and probe levels; and (v) perform statistical tests at pathway, enzyme and gene levels. PathMAPA can be used to examine Arabidopsis thaliana gene expression patterns associated with metabolic pathways. PathMAPA provides two unique features for the gene expression analysis of Arabidopsis thaliana: (i) automatic generation of pathways associated with gene expression and (ii) statistical tests at pathway level. The first feature allows for the periodical updating of genomic data for pathways, while the second feature can provide insight into how treatments affect relevant pathways for the selected experiment(s).
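
    The pathway-level analysis described above can be pictured with a small sketch: probe-level expression values are grouped by pathway and the two conditions are compared per pathway. The pathway map and the use of a t-test here are illustrative stand-ins, not the statistics implemented by PathMAPA.

```python
# Hedged sketch of a pathway-level expression test in the spirit of PathMAPA:
# average probe-level values per pathway and compare two conditions.
# The probe-to-pathway map and the t-test are illustrative stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# probe -> pathway assignment (hypothetical)
pathway_of_probe = {"At1g01010": "glycolysis", "At1g01020": "glycolysis",
                    "At2g05050": "TCA cycle", "At3g07070": "TCA cycle"}

# expression values: one array of replicates per probe and condition
control = {p: rng.normal(5.0, 0.5, 4) for p in pathway_of_probe}
treated = {p: rng.normal(5.6, 0.5, 4) for p in pathway_of_probe}

for pathway in sorted(set(pathway_of_probe.values())):
    probes = [p for p, pw in pathway_of_probe.items() if pw == pathway]
    a = np.concatenate([control[p] for p in probes])
    b = np.concatenate([treated[p] for p in probes])
    t, pval = stats.ttest_ind(a, b)
    print(f"{pathway}: t={t:.2f}, p={pval:.3g}")
```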

  13. PathMAPA: a tool for displaying gene expression and performing statistical tests on metabolic pathways at multiple levels for Arabidopsis

    PubMed Central

    Pan, Deyun; Sun, Ning; Cheung, Kei-Hoi; Guan, Zhong; Ma, Ligeng; Holford, Matthew; Deng, Xingwang; Zhao, Hongyu

    2003-01-01

    Background To date, many genomic and pathway-related tools and databases have been developed to analyze microarray data. In published web-based applications to date, however, complex pathways have been displayed with static image files that may not be up-to-date or are time-consuming to rebuild. In addition, gene expression analyses focus on individual probes and genes with little or no consideration of pathways. These approaches reveal little information about pathways that are key to a full understanding of the building blocks of biological systems. Therefore, there is a need to provide useful tools that can generate pathways without manually building images and allow gene expression data to be integrated and analyzed at pathway levels for such experimental organisms as Arabidopsis. Results We have developed PathMAPA, a web-based application written in Java that can be easily accessed over the Internet. An Oracle database is used to store, query, and manipulate the large amounts of data that are involved. PathMAPA allows its users to (i) upload and populate microarray data into a database; (ii) integrate gene expression with enzymes of the pathways; (iii) generate pathway diagrams without building image files manually; (iv) visualize gene expressions for each pathway at enzyme, locus, and probe levels; and (v) perform statistical tests at pathway, enzyme and gene levels. PathMAPA can be used to examine Arabidopsis thaliana gene expression patterns associated with metabolic pathways. Conclusion PathMAPA provides two unique features for the gene expression analysis of Arabidopsis thaliana: (i) automatic generation of pathways associated with gene expression and (ii) statistical tests at pathway level. The first feature allows for the periodical updating of genomic data for pathways, while the second feature can provide insight into how treatments affect relevant pathways for the selected experiment(s). PMID:14604444

  14. Federal Emergency Management Information System (FEMIS) system administration guide, version 1.4.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arp, J.A.; Burnett, R.A.; Carter, R.J.

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment. The UNIX server provides Oracle relational database management system (RDBMS) services, ARC/INFO GIS (optional) capabilities, and basic file management services. PNNL-developed utilities that reside on the server include the Notification Service, the Command Service that executes the evacuation model, and AutoRecovery. To operate FEMIS, the Application Software must have access to a site-specific FEMIS emergency management database. Data that pertains to an individual EOC's jurisdiction is stored on the EOC's local server. Information that needs to be accessible to all EOCs is automatically distributed by the FEMIS database to the other EOCs at the site.

  15. The Database for Aggregate Analysis of ClinicalTrials.gov (AACT) and Subsequent Regrouping by Clinical Specialty

    PubMed Central

    Tasneem, Asba; Aberle, Laura; Ananth, Hari; Chakraborty, Swati; Chiswell, Karen; McCourt, Brian J.; Pietrobon, Ricardo

    2012-01-01

    Background The ClinicalTrials.gov registry provides information regarding characteristics of past, current, and planned clinical studies to patients, clinicians, and researchers; in addition, registry data are available for bulk download. However, issues related to data structure, nomenclature, and changes in data collection over time present challenges to the aggregate analysis and interpretation of these data in general and to the analysis of trials according to clinical specialty in particular. Improving usability of these data could enhance the utility of ClinicalTrials.gov as a research resource. Methods/Principal Results The purpose of our project was twofold. First, we sought to extend the usability of ClinicalTrials.gov for research purposes by developing a database for aggregate analysis of ClinicalTrials.gov (AACT) that contains data from the 96,346 clinical trials registered as of September 27, 2010. Second, we developed and validated a methodology for annotating studies by clinical specialty, using a custom taxonomy employing Medical Subject Heading (MeSH) terms applied by an NLM algorithm, as well as MeSH terms and other disease condition terms provided by study sponsors. Clinical specialists reviewed and annotated MeSH and non-MeSH disease condition terms, and an algorithm was created to classify studies into clinical specialties based on both MeSH and non-MeSH annotations. False positives and false negatives were evaluated by comparing algorithmic classification with manual classification for three specialties. Conclusions/Significance The resulting AACT database features study design attributes parsed into discrete fields, integrated metadata, and an integrated MeSH thesaurus, and is available for download as Oracle extracts (.dmp file and text format). This publicly-accessible dataset will facilitate analysis of studies and permit detailed characterization and analysis of the U.S. clinical trials enterprise as a whole. In addition, the methodology we present for creating specialty datasets may facilitate other efforts to analyze studies by specialty groups. PMID:22438982

  16. The database for aggregate analysis of ClinicalTrials.gov (AACT) and subsequent regrouping by clinical specialty.

    PubMed

    Tasneem, Asba; Aberle, Laura; Ananth, Hari; Chakraborty, Swati; Chiswell, Karen; McCourt, Brian J; Pietrobon, Ricardo

    2012-01-01

    The ClinicalTrials.gov registry provides information regarding characteristics of past, current, and planned clinical studies to patients, clinicians, and researchers; in addition, registry data are available for bulk download. However, issues related to data structure, nomenclature, and changes in data collection over time present challenges to the aggregate analysis and interpretation of these data in general and to the analysis of trials according to clinical specialty in particular. Improving usability of these data could enhance the utility of ClinicalTrials.gov as a research resource. The purpose of our project was twofold. First, we sought to extend the usability of ClinicalTrials.gov for research purposes by developing a database for aggregate analysis of ClinicalTrials.gov (AACT) that contains data from the 96,346 clinical trials registered as of September 27, 2010. Second, we developed and validated a methodology for annotating studies by clinical specialty, using a custom taxonomy employing Medical Subject Heading (MeSH) terms applied by an NLM algorithm, as well as MeSH terms and other disease condition terms provided by study sponsors. Clinical specialists reviewed and annotated MeSH and non-MeSH disease condition terms, and an algorithm was created to classify studies into clinical specialties based on both MeSH and non-MeSH annotations. False positives and false negatives were evaluated by comparing algorithmic classification with manual classification for three specialties. The resulting AACT database features study design attributes parsed into discrete fields, integrated metadata, and an integrated MeSH thesaurus, and is available for download as Oracle extracts (.dmp file and text format). This publicly-accessible dataset will facilitate analysis of studies and permit detailed characterization and analysis of the U.S. clinical trials enterprise as a whole. In addition, the methodology we present for creating specialty datasets may facilitate other efforts to analyze studies by specialty groups.

  17. Archiving and Distributing Seismic Data at the Southern California Earthquake Data Center (SCEDC)

    NASA Astrophysics Data System (ADS)

    Appel, V. L.

    2002-12-01

    The Southern California Earthquake Data Center (SCEDC) archives and provides public access to earthquake parametric and waveform data gathered by the Southern California Seismic Network and since January 1, 2001, the TriNet seismic network, southern California's earthquake monitoring network. The parametric data in the archive includes earthquake locations, magnitudes, moment-tensor solutions and phase picks. The SCEDC waveform archive prior to TriNet consists primarily of short-period, 100-samples-per-second waveforms from the SCSN. The addition of the TriNet array added continuous recordings of 155 broadband stations (20 samples per second or less), and triggered seismograms from 200 accelerometers and 200 short-period instruments. Since the Data Center and TriNet use the same Oracle database system, new earthquake data are available to the seismological community in near real-time. Primary access to the database and waveforms is through the Seismogram Transfer Program (STP) interface. The interface enables users to search the database for earthquake information, phase picks, and continuous and triggered waveform data. Output is available in SAC, miniSEED, and other formats. Both the raw counts format (V0) and the gain-corrected format (V1) of COSMOS (Consortium of Organizations for Strong-Motion Observation Systems) are now supported by STP. EQQuest is an interface to prepackaged waveform data sets for select earthquakes in Southern California stored at the SCEDC. Waveform data for large-magnitude events have been prepared and new data sets will be available for download in near real-time following major events. The parametric data from 1981 to present has been loaded into the Oracle 9.2.0.1 database system and the waveforms for that time period have been converted to mSEED format and are accessible through the STP interface. The DISC optical-disk system (the "jukebox") that currently serves as the mass-storage for the SCEDC is in the process of being replaced with a series of inexpensive high-capacity (1.6 Tbyte) magnetic-disk RAIDs. These systems are built with PC-technology components, using 16 120-Gbyte IDE disks, hot-swappable disk trays, two RAID controllers, dual redundant power supplies and a Linux operating system. The system is configured over a private gigabit network that connects to the two Data Center servers and spans between the Seismological Lab and the USGS. To ensure data integrity, each RAID disk system constantly checks itself against its twin and verifies file integrity using 128-bit MD5 file checksums that are stored separate from the system. The final level of data protection is a Sony AIT-3 tape backup of the files. The primary advantage of the magnetic-disk approach is faster data access because magnetic disk drives have almost no latency. This means that the SCEDC can provide better "on-demand" interactive delivery of the seismograms in the archive.
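
    The archive-integrity scheme mentioned above (separately stored MD5 checksums verified against the files on disk) can be sketched as follows. The file names and manifest format are hypothetical, not the SCEDC implementation.

```python
# Hedged sketch of checksum-based archive verification: compute MD5 digests of
# waveform files and compare them against a manifest stored separately.
import hashlib
import tempfile
from pathlib import Path

def md5sum(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify(archive_dir, manifest_file):
    """Yield (relative_path, ok) for every 'hexdigest  relative/path' line in the manifest."""
    for line in Path(manifest_file).read_text().splitlines():
        expected, rel = line.split(maxsplit=1)
        path = Path(archive_dir) / rel
        yield rel, path.is_file() and md5sum(path) == expected

# Self-contained demonstration with a throwaway "archive"
with tempfile.TemporaryDirectory() as archive:
    wf = Path(archive) / "event_0001.mseed"
    wf.write_bytes(b"synthetic waveform bytes")
    manifest = Path(archive) / "manifest.md5"
    manifest.write_text(f"{md5sum(wf)}  event_0001.mseed\n")
    print(list(verify(archive, manifest)))
```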

  18. Metamorphic Testing for Cybersecurity.

    PubMed

    Chen, Tsong Yueh; Kuo, Fei-Ching; Ma, Wenjuan; Susilo, Willy; Towey, Dave; Voas, Jeffrey; Zhou, Zhi Quan

    2016-06-01

    Testing is a major approach for the detection of software defects, including vulnerabilities in security features. This article introduces metamorphic testing (MT), a relatively new testing method, and discusses how the new perspective of MT can help to conduct negative testing as well as to alleviate the oracle problem in the testing of security-related functionality and behavior. As demonstrated by the effectiveness of MT in detecting previously unknown bugs in real-world critical applications such as compilers and code obfuscators, we conclude that software testing of security-related features should be conducted from diverse perspectives in order to achieve greater cybersecurity.
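
    A minimal illustration of the metamorphic-testing idea follows: when no oracle supplies the exact expected output, a metamorphic relation between related executions can still be checked. The sine relation used here is the textbook MT example, not one taken from this article.

```python
# Hedged illustration of metamorphic testing: when no oracle gives the exact
# expected value of sin(x), we can still check the metamorphic relation
# sin(x) == sin(pi - x) across many random follow-up inputs.
import math
import random

def mr_sine_holds(f, trials=10_000, tol=1e-12):
    """Check the metamorphic relation f(x) == f(pi - x) on random inputs."""
    for _ in range(trials):
        x = random.uniform(-100.0, 100.0)
        if abs(f(x) - f(math.pi - x)) > tol:
            return False, x
    return True, None

ok, counterexample = mr_sine_holds(math.sin)
print("relation holds" if ok else f"violation at x={counterexample}")
```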

  19. Metamorphic Testing for Cybersecurity

    PubMed Central

    Chen, Tsong Yueh; Kuo, Fei-Ching; Ma, Wenjuan; Susilo, Willy; Towey, Dave; Voas, Jeffrey

    2016-01-01

    Testing is a major approach for the detection of software defects, including vulnerabilities in security features. This article introduces metamorphic testing (MT), a relatively new testing method, and discusses how the new perspective of MT can help to conduct negative testing as well as to alleviate the oracle problem in the testing of security-related functionality and behavior. As demonstrated by the effectiveness of MT in detecting previously unknown bugs in real-world critical applications such as compilers and code obfuscators, we conclude that software testing of security-related features should be conducted from diverse perspectives in order to achieve greater cybersecurity. PMID:27559196

  20. IAC-1.5 - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Vos, R. G.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and a database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a database, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automating data transfer among analysis programs. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation modules are supplied for building and viewing models. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user-specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. 3) System dynamics - A DISCOS interface allows full use of this simulation program for either nonlinear time domain analysis or linear frequency domain analysis. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. 5) Graphics - The graphics packages PLOT and MOSAIC are included in IAC. PLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc., while MOSAIC generates color raster displays of either tabular or array type data. Either DI3000 or PLOT-10 graphics software is required for full graphics capability. IAC is available by license for a period of 10 years to approved licensees. The licensed program product includes one complete set of supporting documentation.
Additional copies of the documentation may be purchased separately. IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The basic central memory requirement is approximately 750KB. IAC includes the executive system, graphics modules, a database, general utilities, and the interfaces to all analysis and controls programs described above. Source code is provided for the control programs ORACLS, SAMSAN, NBOD2, and DISCOS. The following programs are also available from COSMIC a

  1. Irreconcilable difference between quantum walks and adiabatic quantum computing

    NASA Astrophysics Data System (ADS)

    Wong, Thomas G.; Meyer, David A.

    2016-06-01

    Continuous-time quantum walks and adiabatic quantum evolution are two general techniques for quantum computing, both of which are described by Hamiltonians that govern their evolutions by Schrödinger's equation. In the former, the Hamiltonian is fixed, while in the latter, the Hamiltonian varies with time. As a result, their formulations of Grover's algorithm evolve differently through Hilbert space. We show that this difference is fundamental; they cannot be made to evolve along each other's path without introducing structure more powerful than the standard oracle for unstructured search. For an adiabatic quantum evolution to evolve like the quantum walk search algorithm, it must interpolate between three fixed Hamiltonians, one of which is complex and introduces structure that is stronger than the oracle for unstructured search. Conversely, for a quantum walk to evolve along the path of the adiabatic search algorithm, it must be a chiral quantum walk on a weighted, directed star graph with structure that is also stronger than the oracle for unstructured search. Thus, the two techniques, although similar in being described by Hamiltonians that govern their evolution, compute by fundamentally irreconcilable means.

  2. The Oracle of DEM

    NASA Astrophysics Data System (ADS)

    Gayley, Kenneth

    2013-06-01

    The predictions of the famous Greek oracle of Delphi were just ambiguous enough to seem to convey information, yet the user was only seeing their own thoughts. Are there ways in which X-ray spectral analysis is like that oracle? It is shown, using heuristic, generic response functions to mimic actual spectral inversion, that the widely known ill conditioning, which makes formal inversion impossible in the presence of random noise, also makes a wide variety of different source distributions (DEMs) produce quite similar X-ray continua and resonance-line fluxes. Indeed, the sole robustly inferable attribute for a thermal, optically thin resonance-line spectrum with normal abundances in CIE is its average temperature. The shape of the DEM distribution, on the other hand, is not well constrained, and may actually depend more on the analysis method, no matter how sophisticated, than on the source plasma. The case is made that X-ray spectra can tell us average temperature, metallicity, and absorbing column, but the main thing they cannot tell us is the main thing they are most often used to infer: the differential emission measure distribution.

  3. Efficient classical simulation of the Deutsch-Jozsa and Simon's algorithms

    NASA Astrophysics Data System (ADS)

    Johansson, Niklas; Larsson, Jan-Åke

    2017-09-01

    A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources are enough. Two examples of quantum speed-up are the Deutsch-Jozsa and Simon's problems, both efficiently solvable on a quantum Turing machine, and both believed to lack efficient classical solutions. Here we present a framework that can simulate both quantum algorithms efficiently, solving the Deutsch-Jozsa problem with probability 1 using only one oracle query, and Simon's problem using linearly many oracle queries, just as expected of an ideal quantum computer. The presented simulation framework is in turn efficiently simulatable on a classical probabilistic Turing machine. This shows that the Deutsch-Jozsa and Simon's problems do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up when compared with their corresponding classical simulation. Finally, this gives insight into what properties are needed in the two algorithms and calls for further study of oracle separation between quantum and classical computation.

  4. Tactical Operations Analysis Support Facility.

    DTIC Science & Technology

    1981-05-01

    Report excerpt (equipment list and contents): Punch/Reader; 2 DMC-11AR DDCMP Micro Processor; 2 DMC-11DA Network Link Line Unit; 2 DL-11E Async Serial Line Interface; 4 Intel IN-1670 448K Words MOS Memory ... 5.3 Virtual Processors - VAX-11/750 ... 5.4 A Relational Data Management System - ORACLE ... The Central Processing Unit (CPU) is a 16-bit processor for high-speed, real-time applications, and for large multi-user, multi-task, time-shared ...

  5. PropBase Query Layer: a single portal to UK subsurface physical property databases

    NASA Astrophysics Data System (ADS)

    Kingdon, Andrew; Nayembil, Martin L.; Richardson, Anne E.; Smith, A. Graham

    2013-04-01

    Until recently, the delivery of geological information for industry and the public was achieved by geological mapping. Now pervasively available computers mean that 3D geological models can deliver realistic representations of the geometric location of geological units, represented as shells or volumes. The next phase of this process is to populate these with physical properties data that describe subsurface heterogeneity and its associated uncertainty. Achieving this requires capture and serving of physical, hydrological and other property information from diverse sources to populate these models. The British Geological Survey (BGS) holds large volumes of subsurface property data, derived both from their own research data collection and also other, often commercially derived data sources. These data can be voxelated to incorporate them into the models to demonstrate property variation within the subsurface geometry. All property data held by BGS has for many years been stored in relational databases to ensure their long-term continuity. However, these have, by necessity, complex structures; each database contains positional reference data and model information, and also metadata such as sample identification information and attributes that define the source and processing. Whilst this is critical to assessing these analyses, it also hugely complicates the understanding of variability of the property under assessment and requires multiple queries to study related datasets, making it difficult to extract physical properties from these databases. Therefore, the PropBase Query Layer has been created to allow simplified aggregation and extraction of all related data and to present complex data in simple, mostly denormalized tables which combine information from multiple databases into a single system. The structure from each relational database is denormalized in a generalised structure, so that each dataset can be viewed together in a common format using a simple interface. Data are re-engineered to facilitate easy loading. The query layer structure comprises tables, procedures, functions, triggers, views and materialised views. The structure contains a main table PRB_DATA which contains all of the data with the following attribution: a unique identifier; the data source; the unique identifier from the parent database, for traceability; the 3D location; the property type; the property value; the units; necessary qualifiers; and precision information and an audit trail. Data sources, property type and units are constrained by dictionaries, a key component of the structure which defines what properties and inheritance hierarchies are to be coded and also guides the process as to what and how these are extracted from the structure. Data types served by the Query Layer include site-investigation-derived geotechnical data, hydrogeology datasets, regional geochemistry, geophysical logs as well as lithological and borehole metadata. The size and complexity of the data sets with multiple parent structures requires a technically robust approach to keep the layer synchronised. This is achieved through Oracle procedures written in PL/SQL containing the logic required to carry out the data manipulation (inserts, updates, deletes) to keep the layer synchronised with the underlying databases, either as regularly scheduled jobs (weekly, monthly, etc.) or invoked on demand.
The PropBase Query Layer's implementation has enabled rapid data discovery, visualisation and interpretation of geological data with greater ease, simplifying the parametrisation of 3D model volumes and facilitating the study of intra-unit heterogeneity.
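
    A small sketch of the pattern described above (a denormalized query-layer table re-derived on demand from a parent database) is given below. SQLite and Python stand in for the Oracle PL/SQL procedures; apart from PRB_DATA, all table and column names are hypothetical.

```python
# Hedged sketch of a denormalized query layer kept in sync with a source table.
# SQLite/Python stand in for the Oracle PL/SQL synchronisation procedures.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
-- a (simplified) parent database table
CREATE TABLE geotech_sample (
    sample_id INTEGER PRIMARY KEY,
    easting REAL, northing REAL, depth_m REAL,
    porosity REAL
);

-- the denormalized query-layer table
CREATE TABLE prb_data (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    data_source TEXT, parent_id TEXT,
    x REAL, y REAL, z REAL,
    property_type TEXT, property_value REAL, units TEXT
);
""")

db.executemany("INSERT INTO geotech_sample VALUES (?,?,?,?,?)",
               [(1, 451200.0, 338500.0, 12.5, 0.18),
                (2, 451210.0, 338520.0, 30.0, 0.11)])

def refresh_prb_data(conn):
    """Re-derive the query layer from the parent table (on-demand sync)."""
    conn.execute("DELETE FROM prb_data WHERE data_source = 'GEOTECH'")
    conn.execute("""
        INSERT INTO prb_data (data_source, parent_id, x, y, z,
                              property_type, property_value, units)
        SELECT 'GEOTECH', sample_id, easting, northing, -depth_m,
               'POROSITY', porosity, 'fraction'
        FROM geotech_sample
    """)
    conn.commit()

refresh_prb_data(db)
print(db.execute("SELECT property_type, property_value, units FROM prb_data").fetchall())
```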

  6. Mission Operations Planning and Scheduling System (MOPSS)

    NASA Technical Reports Server (NTRS)

    Wood, Terri; Hempel, Paul

    2011-01-01

    MOPSS is a generic framework that can be configured on the fly to support a wide range of planning and scheduling applications. It is currently used to support seven missions at Goddard Space Flight Center (GSFC) in roles that include science planning, mission planning, and real-time control. Prior to MOPSS, each spacecraft project built its own planning and scheduling capability to plan satellite activities and communications and to create the commands to be uplinked to the spacecraft. This approach required creating a data repository for storing planning and scheduling information, building user interfaces to display data, generating needed scheduling algorithms, and implementing customized external interfaces. Complex scheduling problems that involved reacting to multiple variable situations were analyzed manually. Operators then used the results to add commands to the schedule. Each architecture was unique to specific satellite requirements. MOPSS is an expert system that automates mission operations and frees the flight operations team to concentrate on critical activities. It is easily reconfigured by the flight operations team as the mission evolves. The heart of the system is a custom object-oriented data layer mapped onto an Oracle relational database. The combination of these two technologies allows a user or system engineer to capture any type of scheduling or planning data in the system's generic data storage via a GUI.
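
    The generic, reconfigurable data layer described above can be pictured with a small sketch: scheduling objects of arbitrary type are stored as rows in generic tables, so new activity types need no schema change. SQLite stands in for Oracle, and all names are hypothetical rather than MOPSS's actual design.

```python
# Hedged sketch of a generic object-to-relational data layer for scheduling data.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE activity (id INTEGER PRIMARY KEY, type TEXT, start_utc TEXT, end_utc TEXT);
CREATE TABLE activity_attr (activity_id INTEGER, name TEXT, value TEXT);
""")

class Activity:
    """Minimal object wrapper over the generic tables."""
    def __init__(self, type_, start_utc, end_utc, **attrs):
        self.type, self.start_utc, self.end_utc, self.attrs = type_, start_utc, end_utc, attrs

    def save(self, conn):
        cur = conn.execute("INSERT INTO activity (type, start_utc, end_utc) VALUES (?,?,?)",
                           (self.type, self.start_utc, self.end_utc))
        conn.executemany("INSERT INTO activity_attr VALUES (?,?,?)",
                         [(cur.lastrowid, k, str(v)) for k, v in self.attrs.items()])
        conn.commit()

Activity("GROUND_CONTACT", "2011-03-01T12:00", "2011-03-01T12:10",
         station="Wallops", band="S").save(db)
print(db.execute("SELECT a.type, t.name, t.value FROM activity a "
                 "JOIN activity_attr t ON t.activity_id = a.id").fetchall())
```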

  7. Solution to the satisfiability problem using a complete Grover search with trapped ions

    NASA Astrophysics Data System (ADS)

    Yang, Wan-Li; Wei, Hua; Zhou, Fei; Chang, Weng-Long; Feng, Mang

    2009-07-01

    The main idea in the original Grover search (1997 Phys. Rev. Lett. 79 325) is to single out a target state containing the solution to a search problem by amplifying the amplitude of the state, following the Oracle's job, i.e., a black box giving us information about the target state. We design quantum circuits to accomplish a complete Grover search involving both the Oracle's job and the amplification of the target state, which are employed to solve satisfiability (SAT) problems. We explore how to carry out the quantum circuits with currently available ion-trap quantum computing technology.
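
    The amplitude amplification the abstract refers to can be illustrated with a classical state-vector simulation of a complete Grover search (oracle phase flip on the marked item followed by the diffusion step). This is only an illustration of the algorithm, not the trapped-ion circuit construction itself.

```python
# Hedged sketch: classical state-vector simulation of Grover search.
import numpy as np

def grover_search(n_qubits, marked, iterations=None):
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))           # uniform superposition
    if iterations is None:
        iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                      # oracle: flip the sign of the target state
        state = 2 * state.mean() - state         # diffusion: inversion about the mean
    return np.abs(state) ** 2                    # measurement probabilities

probs = grover_search(n_qubits=4, marked=11)
print("P(measure marked state) =", round(probs[11], 3))
```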

  8. ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.

    PubMed

    Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y

    2008-08-12

    New systems biology studies require researchers to understand how interplay among myriads of biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform driven by database management systems to perform bi-directional data processing-to-visualizations with declarative querying capabilities is needed. We developed ProteoLens as a JAVA-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Languages (DDL) and Data Manipulation Languages (DML) may be specified. The robust query languages embedded directly within the visualization software help users to bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network represented data in standard Graph Modeling Language (GML) formats, and this enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens enables the de-coupling of complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms, descriptions, etc.; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges according to associated data values. We demonstrated the advantages of these new capabilities through three biological network visualization case studies: human disease association network, drug-target interaction network and protein-peptide mapping network. The architectural design of ProteoLens makes it suitable for bioinformatics expert data analysts who are experienced with relational database management to perform large-scale integrated network visual explorations. ProteoLens is a promising visual analytic platform that will facilitate knowledge discoveries in future network and systems biology studies.
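
    The "data association rule" idea described above can be sketched briefly: a SQL query keyed by node ID supplies attributes (for example, expression levels) that are then attached to graph nodes for annotation. SQLite and networkx stand in here for Oracle/PostgreSQL and the Java visualization layer, and the schema is hypothetical.

```python
# Hedged sketch of a node-ID-to-attribute association rule applied to a network.
import sqlite3
import networkx as nx

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE expression (gene_id TEXT PRIMARY KEY, log2_fold REAL);
INSERT INTO expression VALUES ('TP53', 1.8), ('MDM2', -0.6), ('CDKN1A', 2.3);
""")

# the interaction network to annotate
G = nx.Graph()
G.add_edges_from([("TP53", "MDM2"), ("TP53", "CDKN1A")])

# association rule: node id -> attribute value, taken from a DML query
rule = dict(db.execute("SELECT gene_id, log2_fold FROM expression"))
nx.set_node_attributes(G, rule, name="log2_fold")

for node, data in G.nodes(data=True):
    print(node, data)
```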

  9. A Python object-oriented framework for the CMS alignment and calibration data

    NASA Astrophysics Data System (ADS)

    Dawes, Joshua H.; CMS Collaboration

    2017-10-01

    The Alignment, Calibrations and Databases group at the CMS Experiment delivers Alignment and Calibration Conditions Data to a large set of workflows which process recorded event data and produce simulated events. The current infrastructure for releasing and consuming Conditions Data was designed in the two years of the first LHC long shutdown to respond to use cases from the preceding data-taking period. During the second run of the LHC, new use cases were defined. For the consumption of Conditions Metadata, no common interface existed for the detector experts to use in Python-based custom scripts, resulting in many different querying and transaction management patterns. A new framework has been built to address such use cases: a simple object-oriented tool that detector experts can use to read and write Conditions Metadata when using Oracle and SQLite databases, that provides a homogeneous method of querying across all services. The tool provides mechanisms for segmenting large sets of conditions while releasing them to the production database, allows for uniform error reporting to the client-side from the server-side and optimizes the data transfer to the server. The architecture of the new service has been developed exploiting many of the features made available by the metadata consumption framework to implement the required improvements. This paper presents the details of the design and implementation of the new metadata consumption and data upload framework, as well as analyses of the new upload service’s performance as the server-side state varies.
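
    A small sketch of the uniform-querying idea follows: one object-oriented wrapper offers the same query method whether the backend is Oracle or SQLite. Only the SQLite branch is exercised here, and the schema (a tag-to-IOV mapping) and class names are hypothetical, not the CMS framework's actual API.

```python
# Hedged sketch of a uniform, object-oriented query interface over conditions metadata.
import sqlite3

class ConditionsDB:
    def __init__(self, backend="sqlite", **kwargs):
        if backend == "sqlite":
            self._conn = sqlite3.connect(kwargs.get("path", ":memory:"))
        elif backend == "oracle":
            # e.g. cx_Oracle.connect(...) in a real deployment (not exercised in this sketch)
            raise NotImplementedError("Oracle backend not configured in this sketch")
        else:
            raise ValueError(backend)

    def iovs(self, tag):
        """Return (since, payload_hash) rows for a tag, newest first."""
        cur = self._conn.execute(
            "SELECT since, payload_hash FROM iov WHERE tag = ? ORDER BY since DESC", (tag,))
        return cur.fetchall()

db = ConditionsDB(backend="sqlite")
# seed demonstration data directly through the private handle
db._conn.executescript("""
CREATE TABLE iov (tag TEXT, since INTEGER, payload_hash TEXT);
INSERT INTO iov VALUES ('PixelAlignment_v1', 1, 'a1b2'), ('PixelAlignment_v1', 2500, 'c3d4');
""")
print(db.iovs("PixelAlignment_v1"))
```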

  10. Storage and distribution of pathology digital images using integrated web-based viewing systems.

    PubMed

    Marchevsky, Alberto M; Dulbandzhyan, Ronda; Seely, Kevin; Carey, Steve; Duncan, Raymond G

    2002-05-01

    Health care providers have expressed increasing interest in incorporating digital images of gross pathology specimens and photomicrographs in routine pathology reports. Our objective is to describe the multiple technical and logistical challenges involved in the integration of the various components needed for the development of a system for integrated Web-based viewing, storage, and distribution of digital images in a large health system. An Oracle version 8.1.6 database was developed to store, index, and deploy pathology digital photographs via our Intranet. The database allows for retrieval of images by patient demographics or by SNOMED code information. The setting is the Intranet of a large health system, accessible from multiple computers located within the medical center and at distant private physician offices. The images can be viewed using any of the workstations of the health system that have authorized access to our Intranet, using a standard browser or a browser configured with an external viewer or inexpensive plug-in software, such as Prizm 2.0. The images can be printed on paper or transferred to film using a digital film recorder. Digital images can also be displayed at pathology conferences by using wireless local area network (LAN) and secure remote technologies. The standardization of technologies and the adoption of a Web interface for all our computer systems allows us to distribute digital images from a pathology database to a potentially large group of users distributed in multiple locations throughout a large medical center.

  11. Managing Complex Interoperability Solutions using Model-Driven Architecture

    DTIC Science & Technology

    2011-06-01

    such as Oracle or MySQL. Each data model for a specific RDBMS is a distinct PSM. Or the system may want to exchange information with other C2...reduced number of transformations, e.g., from an RDBMS physical schema to the corresponding SQL script needed to instantiate the tables in a relational...tance of models. In engineering, a model serves several purposes: 1. It presents an abstract view of a complex system or of a complex information

  12. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
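
    The idea can be illustrated by injecting a fault at a program location and estimating, over random inputs, how often the fault propagates to the output. The program and the injected fault below are illustrative, in the spirit of the abstract rather than the paper's exact procedure; a location with a very low estimate is exactly the kind of "insensitive" location that random black box testing is unlikely to expose.

```python
# Hedged sketch of fault-based sensitivity estimation over random inputs.
import random

def program(x, y):
    t = x * y
    if t > 10:           # location B
        return t - 10
    return t

def program_with_fault_at_B(x, y):
    t = x * y
    if t > 11:           # injected fault at location B (off-by-one in the guard)
        return t - 10
    return t

def sensitivity(original, mutant, trials=100_000):
    """Fraction of random inputs on which the fault changes the program's output."""
    changed = 0
    for _ in range(trials):
        x, y = random.randint(-20, 20), random.randint(-20, 20)
        if original(x, y) != mutant(x, y):
            changed += 1
    return changed / trials

print("estimated sensitivity of location B:", sensitivity(program, program_with_fault_at_B))
```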

  13. Earth Science Project Office (ESPO) Field Experiences During ORACLES, ATom, KORUS and POSIDON

    NASA Technical Reports Server (NTRS)

    Salazar, Vidal; Zavaleta, Jhony

    2017-01-01

    Very often, scientific field campaigns entail years of planning and incur substantial cost, especially if they involve the operation of large research aircraft in remote locations. Deploying and operating these aircraft even for short periods of time poses challenges that, if not addressed properly, can have significant negative consequences and potentially jeopardize the success of a scientific campaign. Challenges vary from country to country and range from safety, health, and security risks to differences in cultural and social norms. Our presentation will focus on sharing experiences from the ESPO-conducted 2016 field campaigns ORACLES, ATom, KORUS and POSIDON. We will focus on the best practices, lessons learned, international relations, and coordination aspects of the country-specific experiences. This presentation will be part of the ICARE Conference (2nd International Conference on Airborne Research for the Environment, ICARE 2017), which will focus on "Developing the infrastructure to meet future scientific challenges". This unique conference and gathering of facility support experts will not only allow for dissemination and sharing of knowledge but also promote collaboration and networking among groups that support scientific research using airborne platforms around the globe.

  14. Interfacing External Quantum Devices to a Universal Quantum Computer

    PubMed Central

    Lagana, Antonio A.; Lohe, Max A.; von Smekal, Lorenz

    2011-01-01

    We present a scheme to use external quantum devices using the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well known oracle based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and the Grover algorithms using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer. PMID:22216276

  15. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    PubMed

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.

  16. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso

    PubMed Central

    Kong, Shengchun; Nan, Bin

    2013-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses. PMID:24516328

  17. Identity-Based Verifiably Encrypted Signatures without Random Oracles

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Wu, Qianhong; Qin, Bo

    Fair exchange protocol plays an important role in electronic commerce in the case of exchanging digital contracts. Verifiably encrypted signatures provide an optimistic solution to these scenarios with an off-line trusted third party. In this paper, we propose an identity-based verifiably encrypted signature scheme. The scheme is non-interactive to generate verifiably encrypted signatures and the resulting encrypted signature consists of only four group elements. Based on the computational Diffie-Hellman assumption, our scheme is proven secure without using random oracles. To the best of our knowledge, this is the first identity-based verifiably encrypted signature scheme provably secure in the standard model.

  18. Quantitative Evaluation of 3 DBMS: ORACLE, SEED AND INGRES

    NASA Technical Reports Server (NTRS)

    Sylto, R.

    1984-01-01

    Characteristics required for NASA scientific data base management applications are listed as well as performance testing objectives. Results obtained for the ORACLE, SEED, and INGRES packages are presented in charts. It is concluded that vendor packages can manage 130 megabytes of data at acceptable load and query rates. Performance tests varying data base designs and various data base management system parameters are valuable to applications for choosing packages and critical to designing effective data bases. An application's productivity increases with the use of a data base management system because of enhanced capabilities such as a screen formatter, a report writer, and a data dictionary.

  19. Automatic Generation of Test Oracles - From Pilot Studies to Application

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.; Smith, Ben

    1998-01-01

    There is a trend towards the increased use of automation in V&V. Automation can yield savings in time and effort. For critical systems, where thorough V&V is required, these savings can be substantial. We describe a progression from pilot studies to development and use of V&V automation. We used pilot studies to ascertain opportunities for, and suitability of, automating various analyses whose results would contribute to V&V. These studies culminated in the development of an automatic generator of automated test oracles. This was then applied and extended in the course of testing an AI planning system that is a key component of an autonomous spacecraft.

  20. Interfacing external quantum devices to a universal quantum computer.

    PubMed

    Lagana, Antonio A; Lohe, Max A; von Smekal, Lorenz

    2011-01-01

    We present a scheme to use external quantum devices using the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well known oracle based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and the Grover algorithms using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer. © 2011 Lagana et al.

  1. Sparse PCA with Oracle Property.

    PubMed

    Gu, Quanquan; Wang, Zhaoran; Liu, Han

    In this paper, we study the estimation of the k-dimensional sparse principal subspace of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains an s/n statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets.
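
    As a rough illustration of the support-recovery goal, the sketch below recovers the support of a sparse leading eigenvector on synthetic data using a simple diagonal-thresholding heuristic. This is an illustrative stand-in under a spiked-covariance setup, not the semidefinite-relaxation estimators analyzed in the paper.

```python
# Hedged sketch: support recovery for a sparse leading eigenvector on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
d, n, s = 200, 300, 5                         # dimension, sample size, sparsity

v = np.zeros(d)
v[:s] = 1 / np.sqrt(s)                        # sparse leading eigenvector (true support = first s coords)
cov = np.eye(d) + 4.0 * np.outer(v, v)        # covariance with one sparse spike
X = rng.multivariate_normal(np.zeros(d), cov, size=n)

# Step 1: keep the s coordinates with the largest sample variances
support_hat = np.argsort(X.var(axis=0))[-s:]

# Step 2: ordinary PCA restricted to the selected coordinates
sub_cov = np.cov(X[:, support_hat], rowvar=False)
eigvals, eigvecs = np.linalg.eigh(sub_cov)
v_hat = np.zeros(d)
v_hat[support_hat] = eigvecs[:, -1]

print("recovered support:", sorted(int(i) for i in support_hat))
print("|<v, v_hat>| =", round(float(abs(v @ v_hat)), 3))
```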

  2. Sparse PCA with Oracle Property

    PubMed Central

    Gu, Quanquan; Wang, Zhaoran; Liu, Han

    2014-01-01

    In this paper, we study the estimation of the k-dimensional sparse principal subspace of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains an s/n statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets. PMID:25684971

  3. Oceanography Information System of Spanish Institute of Oceanography (IEO)

    NASA Astrophysics Data System (ADS)

    Tello, Olvido; Gómez, María; González, Sonsoles

    2016-04-01

    Since 1914, the Spanish Institute of Oceanography (IEO) has performed multidisciplinary studies of the marine environment. In some cases these are systematic studies, and in others they are specific studies for special requirements (the El Hierro submarine volcanic episode, the Prestige spill, and others). Different methodologies and data acquisition techniques are used depending on the aims of the studies. The acquired data are stored and presented in different formats. The information is organized into different databases according to the subject and the variables represented (geology, fisheries, aquaculture, pollution, habitats, etc.). Regarding physical and chemical oceanography data, the DATA CENTER of IEO (CEDO) was created in 1964 in order to organize the data about physical and chemical variables, to standardize this information and to serve the international data network SeaDataNet (www.seadatanet.org). This database integrates temperature, salinity, nutrient, and tidal data. CEDO allows users to consult and download the data (http://indamar.ieo.es). On the other hand, for data about marine species, the SIRENO database was developed in 1999. All data about species collected in oceanographic surveys carried out by IEO researchers, and data from observers on fishing vessels, are incorporated into the SIRENO database. This database stores catch data, biomass, abundance, etc. The system is based on an Oracle architecture. Due to the large amount of information collected over the 100 years of IEO history, there is a clear need to organize, standardize, integrate and relate the different databases and information, and to provide interoperability and access to the information. Consequently, in 2000 the first initiative emerged to organize the IEO spatial information in an Oceanography Information System, based on a Geographical Information System (GIS). The GIS was consolidated as the IEO institutional GIS, and the Spatial Data Infrastructure of IEO (IDEO) was created following the INSPIRE trend. All data included in the GIS have their corresponding metadata according to ISO19115 and INSPIRE. IDEO is based on Web services, quality of service, open standards, ISO (OGC) and INSPIRE standards, and both provide access to the geographical marine information of IEO. The GIS allows the information to be organized, visualized, consulted and analyzed. The data from different IEO databases are integrated into a GIS corporate Geodatabase (Esri format). This tool is essential in decision making on aspects like: - Protection of the marine environment - Sustainable management of resources - Natural hazards - Marine spatial planning. Examples of the use of GIS as a spatial analysis tool are: - Mud volcanoes explored in the LIFE-INDEMARES project. - Cartographic series about the Spanish continental shelf, developed from data integrated in the IEO marine GIS, acquired from oceanographic surveys in the ESPACE project. - Cartography developed from the information gathered in the Initial Assessment of the Marine Strategy Framework Directive. - Studies of natural hazards related to submarine canyons in the southeast Spanish marine region. Currently the IEO is participating in many European initiatives, especially in several lots of EMODNET. The IEO is also working in consonance with INSPIRE, Blue Growth, Horizon 2020, etc., to contribute to the knowledge of the marine environment, whose protection and spatial planning are extremely relevant issues. In order to facilitate the access to the Spatial Data Infrastructure of IEO, the IEO Geoportal was developed in 2012.
It mainly involves a metadata catalog, access to the data viewers and Web Services of IDEO. http://www.geo-ideo.ieo.es/geoportalideo/catalog/main/home.page

  4. The ATLAS PanDA Monitoring System and its Evolution

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.

    2011-12-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.
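
    The JSON/AJAX pattern the migration targets (data preparation separated from presentation, with the browser front end fetching summaries as JSON) can be sketched as follows. Python's standard-library HTTP server stands in for the Django application, and the job data are synthetic, not PanDA's actual code.

```python
# Hedged sketch of serving a prepared job summary as JSON to an AJAX front end.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

JOBS = [  # stand-in for rows queried from the Oracle-backed job database
    {"id": 1, "site": "CERN", "status": "finished"},
    {"id": 2, "site": "BNL", "status": "running"},
    {"id": 3, "site": "CERN", "status": "failed"},
]

def job_summary():
    """Data preparation: aggregate jobs by status."""
    summary = {}
    for job in JOBS:
        summary[job["status"]] = summary.get(job["status"], 0) + 1
    return summary

class MonitorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Presentation: serialize the prepared data as JSON for the browser
        body = json.dumps(job_summary()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), MonitorHandler).serve_forever()
```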

  5. UCMP and the Internet help hospital libraries share resources.

    PubMed

    Dempsey, R; Weinstein, L

    1999-07-01

    The Medical Library Center of New York (MLCNY), a medical library consortium founded in 1959, has specialized in supporting resource sharing and fostering technological advances. In 1961, MLCNY developed and continues to maintain the Union Catalog of Medical Periodicals (UCMP), a resource tool including detailed data about the collections of more than 720 medical library participants. UCMP was one of the first library tools to capitalize on the benefits of computer technology and, from the beginning, invited hospital libraries to play a substantial role in its development. UCMP, beginning with products in print and later in microfiche, helped to create a new resource sharing environment. Today, UCMP continues to capitalize on new technology by providing access via the Internet and an Oracle-based search system providing subscribers with the benefits of: a database that contains serial holdings information on an issue specific level, a database that can be updated in real time, a system that provides multi-type searching and allows users to define how the results will be sorted, and an ordering function that can more precisely target libraries that have a specific issue of a medical journal. Current development of a Web-based system will ensure that UCMP continues to provide cost effective and efficient resource sharing in future years.

  6. UCMP and the Internet help hospital libraries share resources.

    PubMed Central

    Dempsey, R; Weinstein, L

    1999-01-01

    The Medical Library Center of New York (MLCNY), a medical library consortium founded in 1959, has specialized in supporting resource sharing and fostering technological advances. In 1961, MLCNY developed and continues to maintain the Union Catalog of Medical Periodicals (UCMP), a resource tool including detailed data about the collections of more than 720 medical library participants. UCMP was one of the first library tools to capitalize on the benefits of computer technology and, from the beginning, invited hospital libraries to play a substantial role in its development. UCMP, beginning with products in print and later in microfiche, helped to create a new resource sharing environment. Today, UCMP continues to capitalize on new technology by providing access via the Internet and an Oracle-based search system providing subscribers with the benefits of: a database that contains serial holdings information on an issue specific level, a database that can be updated in real time, a system that provides multi-type searching and allows users to define how the results will be sorted, and an ordering function that can more precisely target libraries that have a specific issue of a medical journal. Current development of a Web-based system will ensure that UCMP continues to provide cost effective and efficient resource sharing in future years. PMID:10427426

  7. Results of data base management system parameterized performance testing related to GSFC scientific applications

    NASA Technical Reports Server (NTRS)

    Carchedi, C. H.; Gough, T. L.; Huston, H. A.

    1983-01-01

    The results of a variety of tests designed to demonstrate and evaluate the performance of several commercially available data base management system (DBMS) products compatible with the Digital Equipment Corporation VAX 11/780 computer system are summarized. The tests were performed on the INGRES, ORACLE, and SEED DBMS products employing applications that were similar to scientific applications under development by NASA. The objectives of this testing included determining the strengths and weaknesses of the candidate systems, the performance trade-offs of various design alternatives, and the impact of some installation and environmental (computer-related) influences.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for the specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics and, based on such a model, to provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate's ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed based on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation, and verified with several commonly-used surrogates, including MSD, NCC and (N)MI. The derived behaviors of different surrogates in atlas selection and their corresponding performance in ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC by 0.10 with the first and third quartiles of (0.83, 0.89), compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric on atlas selection. Analytical insights lead to valid guiding principles on fusion set size design.
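
    The stated surrogate model (oracle relevance as a monotone, here linear, function of an image similarity plus normally distributed noise) can be simulated in a few lines. The sketch below is illustrative only; the numbers of atlases, the noise level, and the fusion set size are invented, not taken from the study.

    ```python
    # Illustrative simulation of the stated model (assumptions, not the paper's code):
    # oracle relevance = linear map of the surrogate + Gaussian perturbation.
    # We then ask how much of the oracle top-k the surrogate recovers.
    import numpy as np

    rng = np.random.default_rng(0)
    n_atlases, k, sigma = 30, 7, 0.3            # hypothetical values

    surrogate = rng.uniform(0.0, 1.0, n_atlases)                   # e.g. NCC-like similarity
    oracle = 2.0 * surrogate + rng.normal(0.0, sigma, n_atlases)   # monotone map + noise

    top_by_surrogate = set(np.argsort(surrogate)[-k:])
    top_by_oracle = set(np.argsort(oracle)[-k:])
    overlap = len(top_by_surrogate & top_by_oracle) / k
    print(f"fraction of oracle top-{k} recovered by the surrogate: {overlap:.2f}")
    ```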

  9. Optimizing the Weapons Officer in the Mobility Air Forces

    DTIC Science & Technology

    2015-06-19

    terms of professional development, what should the progression of a Weapons Officer in the MAF look like after graduation from the USAF Weapons School...appropriately. Many KC-135 commanders have openly admitted that they do not know what to do with their WOs at the unit level on a daily basis, and many...UCLA professor, and it relates to the Delphic oracle from ancient Greece whose prophetess would reveal the divine purpose of the gods in order to shape

  10. Quality Management Tools: Facilitating Clinical Research Data Integrity by Utilizing Specialized Reports with Electronic Case Report Forms

    PubMed Central

    Trocky, NM; Fontinha, M

    2005-01-01

    Data collected throughout the course of a clinical research trial must be reviewed continually for accuracy and completeness. The Oracle Clinical® (OC) data management application utilized to capture clinical data facilitates data integrity through pre-programmed validations, edit and range checks, and discrepancy management modules. These functions alone were not enough. Coupled with the use of specially created reports in Oracle Discoverer® and Integrated Review™, both ad-hoc query and reporting tools, research staff have enhanced their ability to clean, analyze and report more accurate data captured within and among electronic Case Report Forms (eCRFs) by individual study or across multiple studies. PMID:16779428

  11. Subspace projection method for unstructured searches with noisy quantum oracles using a signal-based quantum emulation device

    NASA Astrophysics Data System (ADS)

    La Cour, Brian R.; Ostrove, Corey I.

    2017-01-01

    This paper describes a novel approach to solving unstructured search problems using a classical, signal-based emulation of a quantum computer. The classical nature of the representation allows one to perform subspace projections in addition to the usual unitary gate operations. Although bandwidth requirements will limit the scale of problems that can be solved by this method, it can nevertheless provide a significant computational advantage for problems of limited size. In particular, we find that, for the same number of noisy oracle calls, the proposed subspace projection method provides a higher probability of success for finding a solution than does a single application of Grover's algorithm on the same device.
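
    For reference, the baseline against which the abstract compares is the standard (noise-free) success probability of Grover's algorithm, sin²((2k+1)θ) with θ = arcsin(√(M/N)) after k iterations on N items with M marked. The short computation below reproduces that textbook formula only; it does not reproduce the paper's noisy-oracle or subspace-projection analysis.

    ```python
    # Textbook Grover success probability (baseline reference, not the paper's method).
    import math

    def grover_success(N, M, k):
        """Probability of measuring a marked item after k Grover iterations."""
        theta = math.asin(math.sqrt(M / N))
        return math.sin((2 * k + 1) * theta) ** 2

    N, M = 64, 1
    for k in range(7):
        print(k, round(grover_success(N, M, k), 3))
    ```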

  12. Developing an App by Exploiting Web-Based Mobile Technology to Inspect Controlled Substances in Patient Care Units

    PubMed Central

    2017-01-01

    In this study we selected iOS as the App operating system, Objective-C as the programming language, and Oracle as the database to develop an App for inspecting controlled substances in patient care units. Using a web-enabled smartphone, pharmacist inspection can be performed on site and the inspection result can be recorded directly into the HIS through the Internet, so human error in data translation can be minimized and work efficiency and data processing can be improved. This system is not only fast and convenient compared with conventional paperwork, but also provides data security and accuracy. In addition, several features increase inspection quality: (1) accuracy of drug appearance, (2) a foolproof mechanism to avoid input errors or omissions, (3) automatic data conversion without human judgment, (4) online alarms for expiry dates, and (5) instant inspection results showing items not met. This study has successfully turned paper-based medication inspection into inspection using a web-based mobile device. PMID:28286761

  13. Developing an App by Exploiting Web-Based Mobile Technology to Inspect Controlled Substances in Patient Care Units.

    PubMed

    Lu, Ying-Hao; Lee, Li-Yao; Chen, Ying-Lan; Cheng, Hsing-I; Tsai, Wen-Tsung; Kuo, Chen-Chun; Chen, Chung-Yu; Huang, Yaw-Bin

    2017-01-01

    In this study we selected iOS as the App operating system, Objective-C as the programming language, and Oracle as the database to develop an App for inspecting controlled substances in patient care units. Using a web-enabled smartphone, pharmacist inspection can be performed on site and the inspection result can be recorded directly into the HIS through the Internet, so human error in data translation can be minimized and work efficiency and data processing can be improved. This system is not only fast and convenient compared with conventional paperwork, but also provides data security and accuracy. In addition, several features increase inspection quality: (1) accuracy of drug appearance, (2) a foolproof mechanism to avoid input errors or omissions, (3) automatic data conversion without human judgment, (4) online alarms for expiry dates, and (5) instant inspection results showing items not met. This study has successfully turned paper-based medication inspection into inspection using a web-based mobile device.

  14. Utilizing ORACLE tools within Unix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, R.

    1995-07-01

    Large databases, by their very nature, often serve as repositories of data which may be needed by other systems. The transmission of this data to other systems has in the past involved several layers of human intervention. The Integrated Cargo Data Base (ICDB) developed by Martin Marietta Energy Systems for the Military Traffic Management Command as part of the Worldwide Port System provides data integration and worldwide tracking of cargo that passes through common-user ocean cargo ports. One of the key functions of ICDB is data distribution of a variety of data files to a number of other systems. Development of automated data distribution procedures had to deal with the following constraints: (1) variable generation time for data files, (2) use of only current data for data files, (3) use of a minimum number of select statements, (4) creation of unique data files for multiple recipients, (5) automatic transmission of data files to recipients, and (6) avoidance of extensive and long-term data storage.

  15. Monitoring of computing resource utilization of the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Rousseau, David; Dimitrov, Gancho; Vukotic, Ilija; Aidel, Osman; Schaffer, Rd; Albrand, Solveig

    2012-12-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  16. Construction of web-based nutrition education contents and searching engine for usage of healthy menu of children

    PubMed Central

    Lee, Tae-Kyong; Chung, Hea-Jung; Park, Hye-Kyung; Lee, Eun-Ju; Nam, Hye-Seon; Jung, Soon-Im; Cho, Jee-Ye; Lee, Jin-Hee; Kim, Gon; Kim, Min-Chan

    2008-01-01

    Dietary habits developed in childhood last a lifetime. In this sense, nutrition education and early exposure to healthy menus in childhood are important. Children these days have easy access to the internet, so a web-based program is an effective tool for nutrition education of children. The site provides nutrition education material for children using characters that are personified nutrients. A total of 151 menus are stored on the site together with video scripts of the cooking process. The menus are classified by criteria based on age, menu type and the ethnic origin of the menu. The site provides a search function with three kinds of search conditions: keywords, menu type, and a "between" expression on nutrient values such as calories and other nutrients. The site was developed on the Windows 2003 Server operating system with the ZEUS 5 web server, JSP as the development language, and Oracle 10g as the database management system. PMID:20126375
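
    The three search conditions described (keyword, menu type, and a "between" range on a nutrient) can be sketched as a simple filter over an in-memory list. This is an editorial illustration with hypothetical field names and sample menus, not the site's actual code or schema.

    ```python
    # Sketch of the described search conditions; field names and data are hypothetical.
    menus = [
        {"name": "vegetable bibimbap", "type": "rice", "kcal": 520},
        {"name": "fruit salad", "type": "dessert", "kcal": 180},
        {"name": "grilled mackerel", "type": "side dish", "kcal": 310},
    ]

    def search(menus, keyword=None, menu_type=None, kcal_between=None):
        results = menus
        if keyword:
            results = [m for m in results if keyword in m["name"]]
        if menu_type:
            results = [m for m in results if m["type"] == menu_type]
        if kcal_between:
            lo, hi = kcal_between
            results = [m for m in results if lo <= m["kcal"] <= hi]  # BETWEEN lo AND hi
        return results

    print(search(menus, kcal_between=(150, 400)))
    ```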

  17. Alaska Geochemical Database (AGDB)-Geochemical data for rock, sediment, soil, mineral, and concentrate sample media

    USGS Publications Warehouse

    Granitto, Matthew; Bailey, Elizabeth A.; Schmidt, Jeanine M.; Shew, Nora B.; Gamble, Bruce M.; Labay, Keith A.

    2011-01-01

    The Alaska Geochemical Database (AGDB) was created and designed to compile and integrate geochemical data from Alaska in order to facilitate geologic mapping, petrologic studies, mineral resource assessments, definition of geochemical baseline values and statistics, environmental impact assessments, and studies in medical geology. This Microsoft Access database serves as a data archive in support of present and future Alaskan geologic and geochemical projects, and contains data tables describing historical and new quantitative and qualitative geochemical analyses. The analytical results were determined by 85 laboratory and field analytical methods on 264,095 rock, sediment, soil, mineral and heavy-mineral concentrate samples. Most samples were collected by U.S. Geological Survey (USGS) personnel and analyzed in USGS laboratories or, under contracts, in commercial analytical laboratories. These data represent analyses of samples collected as part of various USGS programs and projects from 1962 to 2009. In addition, mineralogical data from 18,138 nonmagnetic heavy mineral concentrate samples are included in this database. The AGDB includes historical geochemical data originally archived in the USGS Rock Analysis Storage System (RASS) database, used from the mid-1960s through the late 1980s and the USGS PLUTO database used from the mid-1970s through the mid-1990s. All of these data are currently maintained in the Oracle-based National Geochemical Database (NGDB). Retrievals from the NGDB were used to generate most of the AGDB data set. These data were checked for accuracy regarding sample location, sample media type, and analytical methods used. This arduous process of reviewing, verifying and, where necessary, editing all USGS geochemical data resulted in a significantly improved Alaska geochemical dataset. USGS data that were not previously in the NGDB because the data predate the earliest USGS geochemical databases, or were once excluded for programmatic reasons, are included here in the AGDB and will be added to the NGDB. The AGDB data provided here are the most accurate and complete to date, and should be useful for a wide variety of geochemical studies. The AGDB data provided in the linked database may be updated or changed periodically. The data on the DVD and in the data downloads provided with this report are current as of date of publication.

  18. SU-F-T-101: Insight into Dosimetry Workload and Planning Timelines: A 6 Year Review at One Institution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardan, R; Popple, R; Smith, H

    Purpose: To elucidate realistic clinical treatment planning workload and timelines to improve understanding for patients, payers, and other institutions involved in radiotherapy processes. Methods: A web-based tool was developed using Oracle Express (Oracle Corp, Redwood City, CA) which allowed communication between the physicians and staff about the current state of the patient plan. For 6 years, all patient courses were logged and time-stamped in 22 discrete steps which detailed start and stop times for simulation, contouring, and treatment planning tasks. These data were combined with the treatment planning database (TPDB) using the Eclipse Scripting API (Varian Medical Systems, Palo Alto, CA) to cross-identify plans between the two systems. These timing data were analyzed across dosimetry staff and treatment modality. Results: In 6 years, 110,477 patient statuses were time-logged for 9683 courses of treatment using our internal software. The courses contained 8305 unique patients who were binned into one of 11 diagnosis site categories. 8253 courses could be reconciled against the TPDB using timestamp data from patient statuses. The average planning volume per dosimetrist was 375.8 ± 142.4 plans per year, with an average number of planning revisions per dosimetrist of 71.0 ± 27.1 plans per year. The median treatment planning times by modality ranged from 48.3 hours for IMRT plans with 5 fields or fewer to 119.6 hours for IMRT with 8 or more fields. Median times for two-arc VMAT, three-arc VMAT, and 3D plans were 89.1, 113.8, and 50.9 hours, respectively. Conclusion: Using our web-based tool, we have demonstrated the ability to quantify treatment planning timelines and workloads which could help in setting appropriate expectations for patients, payers, and hospital administration. COI: Author received monies from Varian Medical Systems for research and teaching honorarium.

  19. Routine educational outcome measures in health studies: Key Stage 1 in the ORACLE Children Study follow-up of randomised trial cohorts.

    PubMed

    Jones, David R; Pike, Katie; Kenyon, Sara; Pike, Laura; Henderson, Brian; Brocklehurst, Peter; Marlow, Neil; Salt, Alison; Taylor, David J

    2011-01-01

    Statutory educational attainment measures are rarely used as health study outcomes, but Key Stage 1 (KS1) data formed secondary outcomes in the long-term follow-up to age 7 years of the ORACLE II trial of antibiotic use in preterm babies. This paper describes the approach, compares different approaches to analysis of the KS1 data and compares use of summary KS1 (level) data with use of individual question scores. Participants were 3394 children born to women in the ORACLE Children Study and resident in England at age 7 years. Educational achievement, measured by national end-of-KS1 data, was analysed using Poisson regression modelling, with the KS1 data anchored against external standards. KS1 summary level data were obtained for 3239 (95%) eligible children; raw individual question scores were obtained for 1899 (54%). Use of individual question scores where available did not change the conclusion of no evidence of treatment effects based on summary KS1 outcome data. When accessible for medical research purposes, routinely collected educational outcome data may have advantages of low cost and standardised definition. Here, summary scores lead to similar conclusions to raw (individual question) scores and so are attractive and cost-effective alternatives.
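
    To illustrate the modelling approach named above, the sketch below fits a Poisson regression of a count outcome on a binary treatment indicator with statsmodels. The data are synthetic and the variable names hypothetical; this is not the ORACLE Children Study data or code.

    ```python
    # Illustration only (synthetic data): Poisson regression of a count outcome
    # (e.g. a KS1-style score) on a treatment indicator.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    treated = rng.integers(0, 2, n)
    score = rng.poisson(lam=np.exp(1.0 + 0.05 * treated))  # weak treatment effect

    X = sm.add_constant(treated.astype(float))
    model = sm.GLM(score, X, family=sm.families.Poisson()).fit()
    print(model.summary())
    print("rate ratio for treatment:", np.exp(model.params[1]))
    ```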

  20. Active faulting at Delphi, Greece: Seismotectonic remarks and a hypothesis for the geologic environment of a myth

    NASA Astrophysics Data System (ADS)

    Piccardi, Luigi

    2000-07-01

    Historical data are fundamental to the understanding of the seismic history of an area. At the same time, knowledge of the active tectonic processes allows us to understand how earthquakes have been perceived by past cultures. Delphi is one of the principal archaeological sites of Greece, the main oracle of Apollo. It was by far the most venerated oracle of the ancient Greek world. According to tradition, the mantic properties of the oracle were obtained from an open chasm in the earth. Delphi is directly above one of the main antithetic active faults of the Gulf of Corinth Rift, which bounds Mount Parnassus to the south. The geometry of the fault and slip-parallel lineations on the main fault plane indicate normal movement, with a minor right-lateral slip component. Combining tectonic data, archaeological evidence, historical sources, and a reexamination of myths, it appears that the Helice earthquake of 373 B.C. ruptured not only the master fault of the Gulf of Corinth Rift at Helice, but also the antithetic fault at Delphi, similarly to the Corinth earthquake of 1981. Moreover, the presence of an active fault directly below the temples of the oldest sanctuary suggests that the mythological oracular chasm might well have been an ancient tectonic surface rupture.

  1. Childhood outcomes after prescription of antibiotics to pregnant women with preterm rupture of the membranes: 7-year follow-up of the ORACLE I trial.

    PubMed

    Kenyon, S; Pike, K; Jones, D R; Brocklehurst, P; Marlow, N; Salt, A; Taylor, D J

    2008-10-11

    The ORACLE I trial compared the use of erythromycin and/or amoxicillin-clavulanate (co-amoxiclav) with that of placebo for women with preterm rupture of the membranes without overt signs of clinical infection, by use of a factorial randomised design. The aim of the present study--the ORACLE Children Study I--was to determine the long-term effects on children of these interventions. We assessed children at age 7 years born to the 4148 women who had completed the ORACLE I trial and who were eligible for follow-up with a structured parental questionnaire to assess the child's health status. Functional impairment was defined as the presence of any level of functional impairment (severe, moderate, or mild) derived from the mark III Multi-Attribute Health Status classification system. Educational outcomes were assessed with national curriculum test results for children resident in England. Outcome was determined for 3298 (75%) eligible children. There was no difference in the proportion of children with any functional impairment after prescription of erythromycin, with or without co-amoxiclav, compared with those born to mothers who received no erythromycin (594 [38.3%] of 1551 children vs 655 [40.4%] of 1620; odds ratio 0.91, 95% CI 0.79-1.05) or after prescription of co-amoxiclav, with or without erythromycin, compared with those born to mothers who received no co-amoxiclav (645 [40.6%] of 1587 vs 604 [38.1%] of 1584; 1.11, 0.96-1.28). Neither antibiotic had a significant effect on the overall level of behavioural difficulties experienced, on specific medical conditions, or on the proportions of children achieving each level in reading, writing, or mathematics at key stage one. The prescription of antibiotics for women with preterm rupture of the membranes seems to have little effect on the health of children at 7 years of age. UK Medical Research Council.
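
    As a reader's check, the reported odds ratio and confidence interval for the erythromycin comparison can be reproduced from the counts quoted in the abstract (594 of 1551 vs 655 of 1620) using the standard 2x2 formula with a Wald interval; this is editorial arithmetic, not part of the original study.

    ```python
    # Reader's check of the reported erythromycin comparison (594/1551 vs 655/1620):
    # odds ratio with a standard Wald 95% confidence interval.
    import math

    a, n1 = 594, 1551   # impaired / total, erythromycin
    c, n2 = 655, 1620   # impaired / total, no erythromycin
    b, d = n1 - a, n2 - c

    odds_ratio = (a / b) / (c / d)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # ~0.91 (0.79-1.05), as reported
    ```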

  2. Some Simple Formulas for Posterior Convergence Rates

    PubMed Central

    2014-01-01

    We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a “penalized divergence” of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model and a “norm complexity” of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit and involve no essential assumptions and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model. PMID:27379278

  3. Just tooling around: Experiences with arctools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuttle, M.A.

    1994-06-01

    The three US Department of Energy (DOE) installations on the Oak Ridge Reservation (Oak Ridge National Laboratory, Y-12 and K-25) were established during World War II as part of the Manhattan Project to build "the bomb." In later years, the work at these facilities involved nuclear energy research, defense-related activities, and uranium enrichment, resulting in the generation of radioactive material and other toxic by-products. Work is now in progress to identify and clean up the environmental contamination from these and other wastes. Martin Marietta Energy Systems, Inc., which manages the Oak Ridge sites as well as DOE installations at Portsmouth, Ohio, and Paducah, Kentucky, has been charged with creating and maintaining a comprehensive environmental information system in order to comply with the Federal Facility Agreement (FFA) for the Oak Ridge Reservation and the State of Tennessee Oversight Agreement between the US Department of Energy and the State of Tennessee. As a result, the Oak Ridge Environmental Information System (OREIS) was conceived and is currently being implemented. The tools chosen for the OREIS system are Oracle for the relational database, SAS for data analysis and graphical representation, and Arc/INFO and ArcView for the spatial analysis and display component. Within the broad scope of ESRI's Arc/Info software, ArcTools was chosen as the graphic user interface for inclusion of Arc/Info into OREIS. The purpose of this paper is to examine the advantages and disadvantages of incorporating ArcTools for the presentation of Arc/INFO in the OREIS system. The immediate and mid-term development goals of the OREIS system as they relate to ArcTools will be presented. A general discussion of our experiences with the ArcTools product is also included.

  4. DAS: A Data Management System for Instrument Tests and Operations

    NASA Astrophysics Data System (ADS)

    Frailis, M.; Sartor, S.; Zacchei, A.; Lodi, M.; Cirami, R.; Pasian, F.; Trifoglio, M.; Bulgarelli, A.; Gianotti, F.; Franceschi, E.; Nicastro, L.; Conforti, V.; Zoli, A.; Smart, R.; Morbidelli, R.; Dadina, M.

    2014-05-01

    The Data Access System (DAS) is a data management software system providing a reusable solution for the storage of data acquired both from telescopes and auxiliary data sources during the instrument development phases and operations. It is part of the Customizable Instrument WorkStation system (CIWS-FW), a framework for the storage, processing and quick look at the data acquired from scientific instruments. The DAS provides a data access layer mainly targeted to software applications: quick-look displays, pre-processing pipelines and scientific workflows. It is logically organized in three main components: an intuitive and compact Data Definition Language (DAS DDL) in XML format, aimed at user-defined data types; an Application Programming Interface (DAS API), automatically adding classes and methods supporting the DDL data types, and providing an object-oriented query language; a data management component, which maps the metadata of the DDL data types in a relational Data Base Management System (DBMS), and stores the data in a shared (network) file system. With the DAS DDL, developers define the data model for a particular project, specifying for each data type the metadata attributes, the data format and layout (if applicable), and named references to related or aggregated data types. Together with the DDL user-defined data types, the DAS API acts as the only interface to store, query and retrieve the metadata and data in the DAS system, providing both an abstract interface and a data model specific one in C, C++ and Python. The mapping of metadata in the back-end database is automatic and supports several relational DBMSs, including MySQL, Oracle and PostgreSQL.

  5. Near Real-Time Automatic Data Quality Controls for the AERONET Version 3 Database: An Introduction to the New Level 1.5V Aerosol Optical Depth Data Product

    NASA Astrophysics Data System (ADS)

    Giles, D. M.; Holben, B. N.; Smirnov, A.; Eck, T. F.; Slutsker, I.; Sorokin, M. G.; Espenak, F.; Schafer, J.; Sinyuk, A.

    2015-12-01

    The Aerosol Robotic Network (AERONET) has provided a database of aerosol optical depth (AOD) measured by surface-based Sun/sky radiometers for over 20 years. AERONET provides unscreened (Level 1.0) and automatically cloud cleared (Level 1.5) AOD in near real-time (NRT), while manually inspected quality assured (Level 2.0) AOD are available after instrument field deployment (Smirnov et al., 2000). The growing need for NRT quality controlled aerosol data has become increasingly important. Applications of AERONET NRT data include the satellite evaluation (e.g., MODIS, VIIRS, MISR, OMI), data synergism (e.g., MPLNET), verification of aerosol forecast models and reanalysis (e.g., GOCART, ICAP, NAAPS, MERRA), input to meteorological models (e.g., NCEP, ECMWF), and field campaign support (e.g., KORUS-AQ, ORACLES). In response to user needs for quality controlled NRT data sets, the new Version 3 (V3) Level 1.5V product was developed with similar quality controls as those applied by hand to the Version 2 (V2) Level 2.0 data set. The AERONET cloud screened (Level 1.5) NRT AOD database can be significantly impacted by data anomalies. The most significant data anomalies include AOD diurnal dependence due to contamination or obstruction of the sensor head windows, anomalous AOD spectral dependence due to problems with filter degradation, instrument gains, or non-linear changes in calibration, and abnormal changes in temperature sensitive wavelengths (e.g., 1020nm) in response to anomalous sensor head temperatures. Other less common AOD anomalies result from loose filters, uncorrected clock shifts, connection and electronic issues, and various solar eclipse episodes. Automatic quality control algorithms are applied to the new V3 Level 1.5 database to remove NRT AOD anomalies and produce the new AERONET V3 Level 1.5V AOD product. Results of the quality control algorithms are presented and the V3 Level 1.5V AOD database is compared to the V2 Level 2.0 AOD database.

  6. [The method and application to construct experience recommendation platform of acupuncture ancient books based on data mining technology].

    PubMed

    Chen, Chuyun; Hong, Jiaming; Zhou, Weilin; Lin, Guohua; Wang, Zhengfei; Zhang, Qufei; Lu, Cuina; Lu, Lihong

    2017-07-12

    To construct a knowledge platform of acupuncture ancient books based on data mining technology, and to provide a retrieval service for users. The Oracle 10g database was used and Java was selected as the development language; based on the standard library and ancient books database established by manual entry, a variety of data mining technologies, including word segmentation, part-of-speech tagging, dependency analysis, rule extraction, similarity calculation, ambiguity analysis and supervised classification, were applied to achieve automatic text extraction from the ancient books; finally, through association mining and decision analysis, comprehensive and intelligent analysis of diseases and symptoms, meridians, acupoints, and rules of acupuncture and moxibustion in the acupuncture ancient books was realized, and the retrieval service was provided to users through a browser/server (B/S) architecture. The platform realized full-text retrieval, word frequency analysis and association analysis; when diseases or acupoints were searched, the frequencies of meridians, acupoints (or diseases) and techniques were presented from high to low, and meanwhile the support and confidence between disease and acupoints (including specific acupoints), between acupoints within a prescription, and between disease or acupoints and technique were presented. The experience platform of acupuncture ancient books based on data mining technology could be used as a reference for the selection of disease, meridian and acupoint in clinical treatment and in the education of acupuncture and moxibustion.
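
    The support and confidence measures mentioned above are standard association-rule statistics and can be illustrated over toy "prescriptions" (each a set of acupoints). The data and acupoint names below are invented for illustration only.

    ```python
    # Minimal illustration of support and confidence over toy prescriptions (invented data).
    prescriptions = [
        {"ST36", "CV12", "PC6"},
        {"ST36", "PC6"},
        {"ST36", "CV12"},
        {"LI4", "ST36"},
    ]

    def support(itemset):
        """Fraction of prescriptions containing every item in `itemset`."""
        return sum(itemset <= rx for rx in prescriptions) / len(prescriptions)

    def confidence(antecedent, consequent):
        """P(consequent | antecedent) estimated from the prescriptions."""
        return support(antecedent | consequent) / support(antecedent)

    print("support({ST36, PC6}) =", support({"ST36", "PC6"}))
    print("confidence(ST36 -> PC6) =", confidence({"ST36"}, {"PC6"}))
    ```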

  7. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    NASA Astrophysics Data System (ADS)

    Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.

    2017-03-01

    We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009) including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680 beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width 340 and circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient advanced quantum-computation techniques are developed, they nevertheless provide a valid baseline for research targeting a reduction of the algorithmic-level resource requirements, implying that a reduction by many orders of magnitude is necessary for the algorithm to become practical.

  8. Production of biofuels and biochemicals: in need of an ORACLE.

    PubMed

    Miskovic, Ljubisa; Hatzimanikatis, Vassily

    2010-08-01

    The engineering of cells for the production of fuels and chemicals involves simultaneous optimization of multiple objectives, such as specific productivity, extended substrate range and improved tolerance - all under a great degree of uncertainty. The achievement of these objectives under physiological and process constraints will be impossible without the use of mathematical modeling. However, the limited information and the uncertainty in the available information require new methods for modeling and simulation that will characterize the uncertainty and will quantify, in a statistical sense, the expectations of success of alternative metabolic engineering strategies. We discuss these considerations toward developing a framework for the Optimization and Risk Analysis of Complex Living Entities (ORACLE) - a computational method that integrates available information into a mathematical structure to calculate control coefficients. Copyright 2010 Elsevier Ltd. All rights reserved.

  9. False Discovery Control in Large-Scale Spatial Multiple Testing

    PubMed Central

    Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin

    2014-01-01

    Summary: This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses, and derive oracle procedures which optimally control the false discovery rate, false discovery exceedance and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods for analyzing the time trends in tropospheric ozone in the eastern US. PMID:25642138
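
    For context, the conventional baseline for false discovery rate control is the Benjamini-Hochberg step-up procedure, sketched below; this is the classical method, not the oracle or spatial procedures developed in the article, and the p-values are invented.

    ```python
    # Classical Benjamini-Hochberg FDR procedure (baseline reference, not the paper's method).
    import numpy as np

    def benjamini_hochberg(pvalues, alpha=0.05):
        p = np.asarray(pvalues)
        m = len(p)
        order = np.argsort(p)
        thresholds = alpha * (np.arange(1, m + 1) / m)
        passed = np.nonzero(p[order] <= thresholds)[0]
        k = passed.max() + 1 if passed.size else 0   # largest i with p_(i) <= i*alpha/m
        rejected = np.zeros(m, dtype=bool)
        rejected[order[:k]] = True                   # reject the k smallest p-values
        return rejected

    pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
    print(benjamini_hochberg(pvals, alpha=0.05))
    ```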

  10. Building a Digital Library for Multibeam Data, Images and Documents

    NASA Astrophysics Data System (ADS)

    Miller, S. P.; Staudigel, H.; Koppers, A.; Johnson, C.; Cande, S.; Sandwell, D.; Peckman, U.; Becker, J. J.; Helly, J.; Zaslavsky, I.; Schottlaender, B. E.; Starr, S.; Montoya, G.

    2001-12-01

    The Scripps Institution of Oceanography, the UCSD Libraries and the San Diego Supercomputing Center have joined forces to establish a digital library for accessing a wide range of multibeam and marine geophysical data, to a community that ranges from the MGG researcher to K-12 outreach clients. This digital library collection will include 233 multibeam cruises with grids, plots, photographs, station data, technical reports, planning documents and publications, drawn from the holdings of the Geological Data Center and the SIO Archives. Inquiries will be made through an Ocean Exploration Console, reminiscent of a cockpit display where a multitude of data may be displayed individually or in two or three-dimensional projections. These displays will provide access to cruise data as well as global databases such as Global Topography, crustal age, and sediment thickness, thus meeting the day-to-day needs of researchers as well as educators, students, and the public. The prototype contains a few selected expeditions, and a review of the initial approach will be solicited from the user community during the poster session. The search process can be focused by a variety of constraints: geospatial (lat-lon box), temporal (e.g., since 1996), keyword (e.g., cruise, place name, PI, etc.), or expert-level (e.g., K-6 or researcher). The Storage Resource Broker (SRB) software from the SDSC manages the evolving collection as a series of distributed but related archives in various media, from shipboard data through processing and final archiving. The latest version of MB-System provides for the systematic creation of standard metadata, and for the harvesting of metadata from multibeam files. Automated scripts will be used to load the metadata catalog to enable queries with an Oracle database management system. These new efforts to bridge the gap between libraries and data archives are supported by the NSF Information Technology and National Science Digital Library (NSDL) programs, augmented by UC funds, and closely coordinated with Digital Library for Earth System Education (DLESE) activities.

  11. Introducing the Forensic Research/Reference on Genetics knowledge base, FROG-kb.

    PubMed

    Rajeevan, Haseena; Soundararajan, Usha; Pakstis, Andrew J; Kidd, Kenneth K

    2012-09-01

    Online tools and databases based on multi-allelic short tandem repeat polymorphisms (STRPs) are actively used in forensic teaching, research, and investigations. The Fst value of each CODIS marker tends to be low across the populations of the world and most populations typically have all the common STRP alleles present diminishing the ability of these systems to discriminate ethnicity. Recently, considerable research is being conducted on single nucleotide polymorphisms (SNPs) to be considered for human identification and description. However, online tools and databases that can be used for forensic research and investigation are limited. The back end DBMS (Database Management System) for FROG-kb is Oracle version 10. The front end is implemented with specific code using technologies such as Java, Java Servlet, JSP, JQuery, and GoogleCharts. We present an open access web application, FROG-kb (Forensic Research/Reference on Genetics-knowledge base, http://frog.med.yale.edu), that is useful for teaching and research relevant to forensics and can serve as a tool facilitating forensic practice. The underlying data for FROG-kb are provided by the already extensively used and referenced ALlele FREquency Database, ALFRED (http://alfred.med.yale.edu). In addition to displaying data in an organized manner, computational tools that use the underlying allele frequencies with user-provided data are implemented in FROG-kb. These tools are organized by the different published SNP/marker panels available. This web tool currently has implemented general functions possible for two types of SNP panels, individual identification and ancestry inference, and a prediction function specific to a phenotype informative panel for eye color. The current online version of FROG-kb already provides new and useful functionality. We expect FROG-kb to grow and expand in capabilities and welcome input from the forensic community in identifying datasets and functionalities that will be most helpful and useful. Thus, the structure and functionality of FROG-kb will be revised in an ongoing process of improvement. This paper describes the state as of early June 2012.

  12. Computer Aided Creativity.

    ERIC Educational Resources Information Center

    Proctor, Tony

    1988-01-01

    Explores the conceptual components of a computer program designed to enhance creative thinking and reviews software that aims to stimulate creative thinking. Discusses BRAIN and ORACLE, programs intended to aid in creative problem solving. (JOW)

  13. Alchemy of the Oracle: The Delphi Technique.

    ERIC Educational Resources Information Center

    Wilhelm, William J.

    2001-01-01

    Discusses the origins and foundations of the Delphi technique. Outlines procedures for using it in research to obtain the insights of experts. Addresses limitations of the technique. (Contains 44 references.) (SK)

  14. Efficient data management in a large-scale epidemiology research project.

    PubMed

    Meyer, Jens; Ostrzinski, Stefan; Fredrich, Daniel; Havemann, Christoph; Krafczyk, Janina; Hoffmann, Wolfgang

    2012-09-01

    This article describes the concept of a "Central Data Management" (CDM) and its implementation within the large-scale population-based medical research project "Personalized Medicine". The CDM can be summarized as a conjunction of data capturing, data integration, data storage, data refinement, and data transfer. A wide spectrum of reliable "Extract Transform Load" (ETL) software for automatic integration of data as well as "electronic Case Report Forms" (eCRFs) was developed, in order to integrate decentralized and heterogeneously captured data. Due to the high sensitivity of the captured data, high system resource availability, data privacy, data security and quality assurance are of utmost importance. A complex data model was developed and implemented using an Oracle database in high availability cluster mode in order to integrate different types of participant-related data. Intelligent data capturing and storage mechanisms improve the quality of the data. Data privacy is ensured by a multi-layered role/right system for access control and de-identification of identifying data. A well-defined backup process prevents data loss. Over a period of one and a half years, the CDM has captured a wide variety of data in the magnitude of approximately 5 terabytes without experiencing any critical incidents of system breakdown or loss of data. The aim of this article is to demonstrate one possible way of establishing a Central Data Management in large-scale medical and epidemiological studies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
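
    The ETL-with-de-identification pattern described above can be sketched in a few lines. This is an editorial illustration, not the project's code: SQLite stands in for the Oracle cluster, the CSV file and column names are hypothetical, and hashing is shown as one simple pseudonymization choice.

    ```python
    # Minimal ETL sketch of the described pattern: extract rows from a CSV export,
    # de-identify the participant ID by hashing, and load into a database.
    import csv, hashlib, sqlite3

    def pseudonym(participant_id: str) -> str:
        # One-way pseudonymization of the identifying field (illustrative choice).
        return hashlib.sha256(participant_id.encode("utf-8")).hexdigest()[:16]

    conn = sqlite3.connect("cdm_demo.db")  # SQLite stands in for the Oracle cluster
    conn.execute("CREATE TABLE IF NOT EXISTS visits (pseudo_id TEXT, visit_date TEXT, systolic INTEGER)")

    with open("export.csv", newline="") as fh:          # hypothetical eCRF export
        for row in csv.DictReader(fh):
            conn.execute(
                "INSERT INTO visits VALUES (?, ?, ?)",
                (pseudonym(row["participant_id"]), row["visit_date"], int(row["systolic"])),
            )
    conn.commit()
    conn.close()
    ```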

  15. Multicenter breast cancer collaborative registry.

    PubMed

    Sherman, Simon; Shats, Oleg; Fleissner, Elizabeth; Bascom, George; Yiee, Kevin; Copur, Mehmet; Crow, Kate; Rooney, James; Mateen, Zubeena; Ketcham, Marsha A; Feng, Jianmin; Sherman, Alexander; Gleason, Michael; Kinarsky, Leo; Silva-Lopez, Edibaldo; Edney, James; Reed, Elizabeth; Berger, Ann; Cowan, Kenneth

    2011-01-01

    The Breast Cancer Collaborative Registry (BCCR) is a multicenter web-based system that efficiently collects and manages a variety of data on breast cancer (BC) patients and BC survivors. This registry is designed as a multi-tier web application that utilizes Java Servlet/JSP technology and has an Oracle 11g database as a back-end. The BCCR questionnaire has accommodated standards accepted in breast cancer research and healthcare. By harmonizing the controlled vocabulary with the NCI Thesaurus (NCIt) or Systematized Nomenclature of Medicine-Clinical Terms (SNOMED-CT), the BCCR provides a standardized approach to data collection and reporting. The BCCR has been recently certified by the National Cancer Institute's Center for Biomedical Informatics and Information Technology (NCI CBIIT) as a cancer Biomedical Informatics Grid (caBIG(®)) Bronze Compatible product.The BCCR is aimed at facilitating rapid and uniform collection of critical information and biological samples to be used in developing diagnostic, prevention, treatment, and survivorship strategies against breast cancer. Currently, seven cancer institutions are participating in the BCCR that contains data on almost 900 subjects (BC patients and survivors, as well as individuals at high risk of getting BC).

  16. Multicenter Breast Cancer Collaborative Registry

    PubMed Central

    Sherman, Simon; Shats, Oleg; Fleissner, Elizabeth; Bascom, George; Yiee, Kevin; Copur, Mehmet; Crow, Kate; Rooney, James; Mateen, Zubeena; Ketcham, Marsha A.; Feng, Jianmin; Sherman, Alexander; Gleason, Michael; Kinarsky, Leo; Silva-Lopez, Edibaldo; Edney, James; Reed, Elizabeth; Berger, Ann; Cowan, Kenneth

    2011-01-01

    The Breast Cancer Collaborative Registry (BCCR) is a multicenter web-based system that efficiently collects and manages a variety of data on breast cancer (BC) patients and BC survivors. This registry is designed as a multi-tier web application that utilizes Java Servlet/JSP technology and has an Oracle 11g database as a back-end. The BCCR questionnaire has accommodated standards accepted in breast cancer research and healthcare. By harmonizing the controlled vocabulary with the NCI Thesaurus (NCIt) or Systematized Nomenclature of Medicine-Clinical Terms (SNOMED-CT), the BCCR provides a standardized approach to data collection and reporting. The BCCR has been recently certified by the National Cancer Institute’s Center for Biomedical Informatics and Information Technology (NCI CBIIT) as a cancer Biomedical Informatics Grid (caBIG®) Bronze Compatible product. The BCCR is aimed at facilitating rapid and uniform collection of critical information and biological samples to be used in developing diagnostic, prevention, treatment, and survivorship strategies against breast cancer. Currently, seven cancer institutions are participating in the BCCR that contains data on almost 900 subjects (BC patients and survivors, as well as individuals at high risk of getting BC). PMID:21918596

  17. Inside the Primary Classroom.

    ERIC Educational Resources Information Center

    Simon, Brian

    1980-01-01

    Presents some of the findings of the ORACLE research program (Observational Research and Classroom Learning Evaluation), a detailed observational study of teacher-student interaction, teaching styles, and management methods within a sample of primary classrooms. (Editor/SJL)

  18. AirMSPI ORACLES Terrain Data V006

    Atmospheric Science Data Center

    2018-05-05

    ... ER-2; Instrument: AirMSPI; Spatial Coverage: United States, California, Georgia, Africa, Southern Africa, ...; 10/25 meters per pixel; Temporal Coverage: 07/28/2016 - 10/06/2016; Temporal Resolution: ...

  19. Implementation of the CDC translational informatics platform--from genetic variants to the national Swedish Rheumatology Quality Register.

    PubMed

    Abugessaisa, Imad; Gomez-Cabrero, David; Snir, Omri; Lindblad, Staffan; Klareskog, Lars; Malmström, Vivianne; Tegnér, Jesper

    2013-04-02

    Sequencing of the human genome and the subsequent analyses have produced immense volumes of data. The technological advances have opened new windows into genomics beyond the DNA sequence. In parallel, clinical practice generates large amounts of data. This represents an underused data source that has much greater potential in translational research than is currently realized. This research aims at implementing a translational medicine informatics platform to integrate clinical data (disease diagnosis, disease activity and treatment) of Rheumatoid Arthritis (RA) patients from Karolinska University Hospital and their research database (biobanks, genotype variants and serology) at the Center for Molecular Medicine, Karolinska Institutet. Requirements engineering methods were utilized to identify user requirements. Unified Modeling Language and data modeling methods were used to model the universe of discourse and data sources. Oracle 11g was used as the database management system, and the clinical development center (CDC) was used as the application interface. Patient data were anonymized, and we employed authorization and security methods to protect the system. We developed a user requirement matrix, which provided a framework for evaluating three translational informatics systems. The implementation of the CDC successfully integrated the biological research database (15172 DNA, serum and synovial samples, 1436 cell samples and 65 SNPs per patient) and the clinical database (5652 clinical visits) for the cohort of 379 patients, presented in three profiles. Basic functionalities provided by the translational medicine platform are research data management, development of bioinformatics workflow and analysis, sub-cohort selection, and re-use of clinical data in research settings. Finally, the system allowed researchers to extract subsets of attributes from cohorts according to specific biological, clinical, or statistical features. Research and clinical database integration is a real challenge and a road-block in translational research. Through this research we addressed the challenges and demonstrated the usefulness of the CDC. We adhered to ethical regulations pertaining to patient data, and we determined that the existing software solutions cannot meet the translational research needs at hand. We used RA as a test case since we have ample data on an active and longitudinal cohort.

  20. Implementation of the CDC translational informatics platform - from genetic variants to the national Swedish Rheumatology Quality Register

    PubMed Central

    2013-01-01

    Background: Sequencing of the human genome and the subsequent analyses have produced immense volumes of data. The technological advances have opened new windows into genomics beyond the DNA sequence. In parallel, clinical practice generates large amounts of data. This represents an underused data source that has much greater potential in translational research than is currently realized. This research aims at implementing a translational medicine informatics platform to integrate clinical data (disease diagnosis, disease activity and treatment) of Rheumatoid Arthritis (RA) patients from Karolinska University Hospital and their research database (biobanks, genotype variants and serology) at the Center for Molecular Medicine, Karolinska Institutet. Methods: Requirements engineering methods were utilized to identify user requirements. Unified Modeling Language and data modeling methods were used to model the universe of discourse and data sources. Oracle 11g was used as the database management system, and the clinical development center (CDC) was used as the application interface. Patient data were anonymized, and we employed authorization and security methods to protect the system. Results: We developed a user requirement matrix, which provided a framework for evaluating three translational informatics systems. The implementation of the CDC successfully integrated the biological research database (15172 DNA, serum and synovial samples, 1436 cell samples and 65 SNPs per patient) and the clinical database (5652 clinical visits) for the cohort of 379 patients, presented in three profiles. Basic functionalities provided by the translational medicine platform are research data management, development of bioinformatics workflow and analysis, sub-cohort selection, and re-use of clinical data in research settings. Finally, the system allowed researchers to extract subsets of attributes from cohorts according to specific biological, clinical, or statistical features. Conclusions: Research and clinical database integration is a real challenge and a road-block in translational research. Through this research we addressed the challenges and demonstrated the usefulness of the CDC. We adhered to ethical regulations pertaining to patient data, and we determined that the existing software solutions cannot meet the translational research needs at hand. We used RA as a test case since we have ample data on an active and longitudinal cohort. PMID:23548156

  1. Active learning for noisy oracle via density power divergence.

    PubMed

    Sogawa, Yasuhiro; Ueno, Tsuyoshi; Kawahara, Yoshinobu; Washio, Takashi

    2013-10-01

    The accuracy of active learning is critically influenced by the existence of noisy labels given by a noisy oracle. In this paper, we propose a novel pool-based active learning framework through robust measures based on density power divergence. By minimizing density power divergence, such as β-divergence and γ-divergence, one can estimate the model accurately even under the existence of noisy labels within data. Accordingly, we develop query selecting measures for pool-based active learning using these divergences. In addition, we propose an evaluation scheme for these measures based on asymptotic statistical analyses, which enables us to perform active learning by evaluating an estimation error directly. Experiments with benchmark datasets and real-world image datasets show that our active learning scheme performs better than several baseline methods. Copyright © 2013 Elsevier Ltd. All rights reserved.
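
    For reference, the density power divergence (β-divergence) referred to above has a standard definition due to Basu et al. (1998); it is quoted below as background, and the paper's exact estimators and query measures may differ in detail.

    ```latex
    % Density power divergence (beta-divergence) between the data density g and
    % the model density f_\theta, for \beta > 0 (Basu et al., 1998).
    d_\beta(g, f_\theta) \;=\; \int \Big\{ f_\theta(x)^{1+\beta}
      \;-\; \Big(1+\tfrac{1}{\beta}\Big)\, g(x)\, f_\theta(x)^{\beta}
      \;+\; \tfrac{1}{\beta}\, g(x)^{1+\beta} \Big\}\, dx .
    ```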

  2. A prospective 8 week trial of nasal interfaces vs. a novel oral interface (Oracle) for treatment of obstructive sleep apnea hypopnea syndrome.

    PubMed

    Khanna, Ritu; Kline, Lewis R

    2003-07-01

    To compare efficacy, compliance rates, and side effects of a new strapless oral interface, the Oracle, with available nasal masks over 8 weeks of use for the treatment of obstructive sleep apnea hypopnea syndrome (OSAHS). A total of 38 patients with OSAHS (respiratory disturbance index (RDI) ≥15/h) were enrolled after the diagnostic polysomnogram for subsequent continuous positive airway pressure (CPAP) therapy. After randomization, therapeutic pressures during a titration study were determined for 21 patients in the oral group and 17 patients in the nasal group. Comparisons for nasal and oral interfaces were made for baseline patient characteristics, average hours of CPAP use, side effects from therapy, and among questionnaires evaluating patients' subjective responses to therapy at months 1 and 2. No significant difference was observed in the average hours of CPAP use between the oral (4.5 ± 2.1; 5.5 ± 2.6) and nasal groups (4.0 ± 2.6; 4.8 ± 2.5) for either month 1 or 2 (P>0.05). The dropout rates were similar for both groups after 8 weeks of therapy. However, patients in the nasal group had higher occurrences of side effects such as nasal congestion, dryness, and air leaks, whereas patients in the oral group experienced more oral dryness and gum pain. Oral delivery of CPAP with the Oracle is an effective and suitable alternative for patients with OSAHS.

  3. Identification of Substandard Medicines via Disproportionality Analysis of Individual Case Safety Reports.

    PubMed

    Trippe, Zahra Anita; Brendani, Bruno; Meier, Christoph; Lewis, David

    2017-04-01

    The distribution and use of substandard medicines (SSMs) is a public health concern worldwide. The detection of SSMs is currently limited to expensive large-scale assay techniques such as high-performance liquid chromatography (HPLC). Since 2013, the Pharmacovigilance Department at Novartis Pharma AG has been analyzing drug-associated adverse events related to 'product quality issues' with the aim of detecting defective medicines using spontaneous reporting. The method of identifying SSMs with spontaneous reporting was pioneered by the Monitoring Medicines project in 2011. This retrospective review was based on data from the World Health Organization (WHO) Global individual case safety report (ICSR) database VigiBase® collected from January 2001 to December 2014. We conducted three different stratification analyses using the Multi-item Gamma Poisson Shrinker (MGPS) algorithm through the Oracle Empirica data-mining software. In total, 24 preferred terms (PTs) from the Medical Dictionary for Regulatory Activities (MedDRA®) were used to identify poor-quality medicines. To identify potential SSMs for further evaluation, a cutoff of 2.0 for EB05, the lower 95% interval of the empirical Bayes geometric mean (EBGM), was applied. We carried out a literature search for advisory letters related to defective medicinal products to validate our findings. Furthermore, we aimed to assess whether we could confirm two SSMs first identified by the Uppsala Monitoring Centre (UMC) with our stratification method. The analysis of ICSRs based on the specified selection criteria and threshold yielded 2506 hits including medicinal products with an excess of reports of product quality defects relative to other medicines in the database. Further investigations and a pilot study in five authorized medicinal products (proprietary and generic) licensed by a single marketing authorization holder, containing valsartan, methylphenidate, rivastigmine, clozapine, or carbamazepine, were performed. This resulted in an output of 23 potential SSMs. The literature search identified two communications issued to health professionals concerning a substandard rivastigmine patch, which validated our initial findings. Furthermore, we identified excess reporting of product quality issues with an ethinyl estradiol/norgestrel combination and with salbutamol. These were categorized as confirmed clusters of substandard/spurious/falsely labelled/falsified/counterfeit (SSFFC) medical products by the UMC in 2014. This study illustrates the value of data mining of spontaneous adverse event reports and the applicability of disproportionality analysis to identify potential SSMs.
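
    To illustrate the general idea of disproportionality analysis in a simpler form than the MGPS/EBGM statistics used in the study, the sketch below computes the proportional reporting ratio (PRR) from a 2x2 table of report counts. The counts are invented, and PRR is shown only as a widely known stand-in, not the method applied in the paper.

    ```python
    # Simplified disproportionality measure (PRR) from a 2x2 report table; counts invented.
    def prr(a, b, c, d):
        """a: reports of the event for the drug of interest
           b: other reports for that drug
           c: reports of the event for all other drugs
           d: other reports for all other drugs"""
        return (a / (a + b)) / (c / (c + d))

    print(f"PRR = {prr(30, 970, 200, 49800):.2f}")  # values well above 2 suggest a potential signal
    ```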

  4. Organisational capacity and its relationship to research use in six Australian health policy agencies

    PubMed Central

    Makkar, Steve R.; Haynes, Abby; Williamson, Anna; Redman, Sally

    2018-01-01

    There are calls for policymakers to make greater use of research when formulating policies. Therefore, it is important that policy organisations have a range of tools and systems to support their staff in using research in their work. The aim of the present study was to measure the extent to which a range of tools and systems to support research use were available within six Australian agencies with a role in health policy, and examine whether this was related to the extent of engagement with, and use of research in policymaking by their staff. The presence of relevant systems and tools was assessed via a structured interview called ORACLe which is conducted with a senior executive from the agency. To measure research use, four policymakers from each agency undertook a structured interview called SAGE, which assesses and scores the extent to which policymakers engaged with (i.e., searched for, appraised, and generated) research, and used research in the development of a specific policy document. The results showed that all agencies had at least a moderate range of tools and systems in place, in particular policy development processes; resources to access and use research (such as journals, databases, libraries, and access to research experts); processes to generate new research; and mechanisms to establish relationships with researchers. Agencies were less likely, however, to provide research training for staff and leaders, or to have evidence-based processes for evaluating existing policies. For the majority of agencies, the availability of tools and systems was related to the extent to which policymakers engaged with, and used research when developing policy documents. However, some agencies did not display this relationship, suggesting that other factors, namely the organisation’s culture towards research use, must also be considered. PMID:29513669

  5. Organisational capacity and its relationship to research use in six Australian health policy agencies.

    PubMed

    Makkar, Steve R; Haynes, Abby; Williamson, Anna; Redman, Sally

    2018-01-01

    There are calls for policymakers to make greater use of research when formulating policies. Therefore, it is important that policy organisations have a range of tools and systems to support their staff in using research in their work. The aim of the present study was to measure the extent to which a range of tools and systems to support research use were available within six Australian agencies with a role in health policy, and examine whether this was related to the extent of engagement with, and use of research in policymaking by their staff. The presence of relevant systems and tools was assessed via a structured interview called ORACLe which is conducted with a senior executive from the agency. To measure research use, four policymakers from each agency undertook a structured interview called SAGE, which assesses and scores the extent to which policymakers engaged with (i.e., searched for, appraised, and generated) research, and used research in the development of a specific policy document. The results showed that all agencies had at least a moderate range of tools and systems in place, in particular policy development processes; resources to access and use research (such as journals, databases, libraries, and access to research experts); processes to generate new research; and mechanisms to establish relationships with researchers. Agencies were less likely, however, to provide research training for staff and leaders, or to have evidence-based processes for evaluating existing policies. For the majority of agencies, the availability of tools and systems was related to the extent to which policymakers engaged with, and used research when developing policy documents. However, some agencies did not display this relationship, suggesting that other factors, namely the organisation's culture towards research use, must also be considered.

  6. The GEOSS Clearinghouse based on the GeoNetwork opensource

    NASA Astrophysics Data System (ADS)

    Liu, K.; Yang, C.; Wu, H.; Huang, Q.

    2010-12-01

    The Global Earth Observation System of Systems (GEOSS) was established to support the study of the Earth system in a global community. It provides services for social management, quick response, academic research, and education. The purpose of GEOSS is to achieve comprehensive, coordinated and sustained observations of the Earth system, improve monitoring of the state of the Earth, increase understanding of Earth processes, and enhance prediction of the behavior of the Earth system. In 2009, GEO called for a competition for an official GEOSS clearinghouse to be selected as a source for consolidating catalogs for Earth observations. The Joint Center for Intelligent Spatial Computing at George Mason University worked with USGS to submit a solution based on the open-source platform GeoNetwork. In the spring of 2010, the solution was selected as the product for the GEOSS clearinghouse. The GEOSS Clearinghouse is a common search facility for the Intergovernmental Group on Earth Observation (GEO). Through a set of harvesting functions in its business logic layer, the GEOSS clearinghouse can collect metadata from distributed catalogs including other GeoNetwork native nodes, webDAV/sitemap/WAF, catalog services for the web (CSW) 2.0, the GEOSS Component and Service Registry (http://geossregistries.info/), OGC Web Services (WCS, WFS, WMS and WPS), OAI Protocol for Metadata Harvesting 2.0, ArcSDE Server and Local File System. Metadata in the GEOSS clearinghouse are managed in a database (MySQL, PostgreSQL, Oracle, or MckoiDB) and an index of the metadata is maintained through the Lucene engine. Thus, EO data, services, and related resources can be discovered and accessed. It supports a variety of geospatial standards including CSW and SRU for search, FGDC and ISO metadata, and WMS-related OGC standards for data access and visualization, as linked from the metadata.
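
    Harvesting of the kind described above is typically driven through the CSW interface that GeoNetwork exposes. A minimal sketch, assuming a hypothetical endpoint URL and search term (this is not the clearinghouse's own code), of querying a CSW 2.0 catalogue from Python with OWSLib:

        from owslib.csw import CatalogueServiceWeb
        from owslib.fes import PropertyIsLike

        # Hypothetical GeoNetwork CSW endpoint; replace with a real catalogue URL.
        csw = CatalogueServiceWeb("https://example.org/geonetwork/srv/eng/csw")

        # Full-text constraint over all queryable metadata fields.
        query = PropertyIsLike("csw:AnyText", "%earth observation%")
        csw.getrecords2(constraints=[query], maxrecords=10)

        for identifier, record in csw.records.items():
            # Each Dublin Core record carries a title, an abstract, and access links.
            print(identifier, record.title)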

  7. Data Foundry: Data Warehousing and Integration for Scientific Data Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musick, R.; Critchlow, T.; Ganesh, M.

    2000-02-29

    Data warehousing is an approach for managing data from multiple sources by representing them with a single, coherent point of view. Commercial data warehousing products have been produced by companies such as RedBrick, IBM, Brio, Andyne, Ardent, NCR, Information Advantage, Informatica, and others. Other companies have chosen to develop their own in-house data warehousing solution using relational databases, such as those sold by Oracle, IBM, Informix and Sybase. The typical approaches include federated systems and mediated data warehouses, each of which, to some extent, makes use of a series of source-specific wrapper and mediator layers to integrate the data into a consistent format, which is then presented to users as a single virtual data store. These approaches are successful when applied to traditional business data because the data format used by the individual data sources tends to be rather static. Therefore, once a data source has been integrated into a data warehouse, there is relatively little work required to maintain that connection. However, that is not the case for all data sources. Data sources from scientific domains tend to regularly change their data model, format and interface. This is problematic because each change requires the warehouse administrator to update the wrapper, mediator, and warehouse interfaces to properly read, interpret, and represent the modified data source. Furthermore, the data that scientists require to carry out research is continuously changing as their understanding of a research question develops, or as their research objectives evolve. The difficulty and cost of these updates effectively limits the number of sources that can be integrated into a single data warehouse, or makes an approach based on warehousing too expensive to consider.
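
    To make the wrapper/mediator layering concrete, a toy sketch follows. Every name, field, and source format here is hypothetical, and real mediators also rewrite queries rather than merely translating records.

        # Each wrapper maps a source-specific record layout onto one common schema;
        # the mediator then exposes the union as a single virtual data store.

        def wrap_lab_source(rows):
            # Source A exports rows like {"sample": "S1", "temp_C": 21.5}.
            for r in rows:
                yield {"sample_id": r["sample"], "temperature_c": r["temp_C"]}

        def wrap_instrument_source(rows):
            # Source B exports rows like {"id": "S2", "temperature_kelvin": 295.0}.
            for r in rows:
                yield {"sample_id": r["id"], "temperature_c": r["temperature_kelvin"] - 273.15}

        def mediator(*wrapped_sources):
            # Present the integrated, uniformly formatted view to the user.
            for source in wrapped_sources:
                yield from source

        integrated = list(mediator(
            wrap_lab_source([{"sample": "S1", "temp_C": 21.5}]),
            wrap_instrument_source([{"id": "S2", "temperature_kelvin": 295.0}]),
        ))
        print(integrated)

    When a source changes its export format, only its wrapper has to be rewritten, which is exactly the maintenance burden the abstract describes for fast-moving scientific sources.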

  8. Echinoids of the Kerguelen Plateau - occurrence data and environmental setting for past, present, and future species distribution modelling.

    PubMed

    Guillaumot, Charlène; Martin, Alexis; Fabri-Ruiz, Salomé; Eléaume, Marc; Saucède, Thomas

    2016-01-01

    The present dataset provides a case study for species distribution modelling (SDM) and for model testing in a poorly documented marine region. The dataset includes spatially-explicit data for echinoid (Echinodermata: Echinoidea) distribution. Echinoids were collected during oceanographic campaigns led around the Kerguelen Plateau (+63°/+81°E; -46°/-56°S) since 1872. In addition to the identification of collection specimens from historical cruises, original data from the recent campaigns POKER II (2010) and PROTEKER 2 to 4 (2013-2015) are also provided. In total, five families, ten genera, and 12 echinoid species are recorded in the region of the Kerguelen Plateau. The dataset is complemented with environmental descriptors available and relevant for echinoid ecology and SDM. The environmental data were compiled from different sources and modified to suit the geographic extent of the Kerguelen Plateau, using scripts developed with the R language (R Core Team 2015). Spatial resolution was set at a common 0.1° pixel resolution. Mean seafloor and sea surface temperatures, salinity and their amplitudes, all derived from the World Ocean Database (Boyer et al. 2013), are made available for the six following decades: 1955-1964, 1965-1974, 1975-1984, 1985-1994, 1995-2004, 2005-2012. Future projections are provided for several parameters: they were modified from the Bio-ORACLE database (Tyberghein et al. 2012). They are based on three IPCC scenarios (B1, A1B, A2) for years 2100 and 2200 (IPCC, 4th report).

  9. The Deference Due the Oracle: Computerized Text Analysis in a Basic Writing Class.

    ERIC Educational Resources Information Center

    Otte, George

    1989-01-01

    Describes how a computerized text analysis program can help students discover error patterns in their writing, and notes how students' responses to analyses can reduce errors and improve their writing. (MM)

  10. 17th Floor: A pedagogical oracle from/with Audre Lorde.

    PubMed

    Gumbs, Alexis Pauline

    2017-10-02

    In 1974, warrior poet mother Audre Lorde published the poem "Blackstudies," a freeform dream villanelle about her complicated experience as a Black lesbian feminist English professor at the City University of New York during the dynamic period when students rose up in protest. The university granted open admissions, and cultural nationalists who taught at City University worked to create a Black Studies program. In the poem, she describes her vantage point at this particular historical and pedagogical moment from the seventeenth floor within a dreamscape where she navigates the stereotypes, silences, and urgencies that shaped her experience as an educator. 17th Floor is a poetic oracle that contextualizes the ongoing work of "Blackstudies" (the poem and the practice), and for this reason, it should be activated as a resource for current Black and Brown lesbian educators and everyone who brings complexity and nuance to their teaching settings, their students, each other, and the world more broadly.

  11. IAC - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. With the goal of supporting the unique needs of engineering analysis groups concerned with interdisciplinary problems, IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a data base, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automatic data transfer among analysis programs. IAC 2.5, designed to be compatible as far as possible with Level 1.5, contains a major upgrade in executive and database management system capabilities, and includes interfaces to enable thermal, structures, optics, and control interaction dynamics analysis. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation interfaces are supplied for building and viewing models. Advanced graphics capabilities are provided within particular analysis modules such as INCA and NASTRAN. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. IAC 2.5 contains several specialized interfaces from NASTRAN in support of multidisciplinary analysis. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. FEMNET, which converts finite element structural analysis models to finite difference thermal analysis models, is also interfaced with the IAC database. 
3) System dynamics - The DISCOS simulation program which allows for either nonlinear time domain analysis or linear frequency domain analysis, is fully interfaced to the IAC database management capability. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. Level 2.5 includes EIGEN, which provides tools for large order system eigenanalysis, and BOPACE, which allows for geometric capabilities and finite element analysis with nonlinear material. Also included in IAC level 2.5 is SAMSAN 3.1, an engineering analysis program which contains a general purpose library of over 600 subroutines for numerical analysis. 5) Graphics - The graphics package IPLOT is included in IAC. IPLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc. Either DI3000 or PLOT-10 graphics software is required for full graphic capability. In addition to these analysis tools, IAC 2.5 contains an IGES interface which allows the user to read arbitrary IGES files into an IAC database and to edit and output new IGES files. IAC is available by license for a period of 10 years to approved U.S. licensees. The licensed program product includes one set of supporting documentation. Additional copies may be purchased separately. IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The program is structured to allow users to easily delete those program capabilities and "how to" examples they do not want in order to reduce the size of the package. The basic central memory requirement for IAC is approximately 750KB. The following programs are also available from COSMIC as separate packages: NASTRAN, SINDA/SINFLO, TRASYS II, DISCOS, ORACLS, SAMSAN, NBOD2, and INCA. The development of level 2.5 of IAC was completed in 1989.

  12. Observations of Aerosol-Radiation-Cloud Interactions in the South-East Atlantic: First Results from the ORACLES Deployments in 2016 and 2017

    NASA Technical Reports Server (NTRS)

    Redemann, Jens; Wood, R.; Zuidema, P.; Diner, D.; Van Harten, G.; Xu, F.; Cairns, B.; Knobelspiesse, K.; Segal Rozenhaimer, M.

    2017-01-01

    Southern Africa produces almost a third of the Earth's biomass burning (BB) aerosol particles. Particles lofted into the mid-troposphere are transported westward over the South-East (SE) Atlantic, home to one of the three permanent subtropical stratocumulus (Sc) cloud decks in the world. The SE Atlantic stratocumulus deck interacts with the dense layers of BB aerosols that initially overlay the cloud deck, but later subside and often mix into the clouds. These interactions include adjustments to aerosol-induced solar heating and microphysical effects, and their global representation in climate models remains one of the largest uncertainties in estimates of future climate. Hence, new observations over the SE Atlantic have significant implications for regional and global climate change predictions. The low-level clouds in the SE Atlantic have limited vertical extent and therefore present favorable conditions for their exploration with remote sensing. On the other hand, the normal coexistence of BB aerosols and Sc clouds in the same scene also presents significant challenges to conventional remote sensing techniques. We describe first results from NASA's airborne ORACLES (ObseRvations of Aerosols Above Clouds and Their IntEractionS) deployments in September 2016 and August 2017. We emphasize the unique role of polarimetric observations by two instruments, the Research Scanning Polarimeter (RSP) and the Airborne Multi-angle SpectroPolarimeter Imager (AirMSPI), and describe how these instruments help address specific ORACLES science objectives. Initial assessments of polarimetric observation accuracy for key cloud and aerosol properties will be presented, insofar as the preliminary nature of the measurements permits.

  13. Using a data base management system for modelling SSME test history data

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1985-01-01

    The usefulness of a data base management system (DBMS) for modelling historical test data for the complete series of static test firings for the Space Shuttle Main Engine (SSME) was assessed. From an analysis of user data base query requirements, it became clear that a relational DBMS that included a relationally complete query language would permit a model satisfying the query requirements. Representative models and sample queries are discussed. A list of environment-particular evaluation criteria for the desired DBMS was constructed; these criteria include requirements in the areas of user-interface complexity, program independence, flexibility, modifiability, and output capability. The evaluation process included the construction of several prototype data bases for user assessment. The systems studied, representing the three major DBMS conceptual models, were: MIRADS, a hierarchical system; DMS-1100, a CODASYL-based network system; ORACLE, a relational system; and DATATRIEVE, a relational-type system.
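
    A hedged sketch of the kind of relational model and relationally complete query the assessment refers to; the schema and values below are invented for illustration and are not taken from the SSME study.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE test_firing (
                test_id    INTEGER PRIMARY KEY,
                engine_id  TEXT,
                test_date  TEXT,
                duration_s REAL
            );
            CREATE TABLE anomaly (
                test_id     INTEGER REFERENCES test_firing(test_id),
                subsystem   TEXT,
                description TEXT
            );
        """)
        conn.execute("INSERT INTO test_firing VALUES (1, 'E2001', '1982-03-14', 520.0)")
        conn.execute("INSERT INTO anomaly VALUES (1, 'turbopump', 'vibration above limit')")

        # Example query: all firings of a given engine that logged a turbopump anomaly.
        rows = conn.execute("""
            SELECT f.test_id, f.test_date, a.description
            FROM test_firing f JOIN anomaly a ON a.test_id = f.test_id
            WHERE f.engine_id = 'E2001' AND a.subsystem = 'turbopump'
        """).fetchall()
        print(rows)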

  14. Tour Robot Dance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleary, Geoff

    2014-09-08

    This program exercises the robotic elements in Oracle StorageTek tape libraries. It is useful in two known cases: (1) shaking out marginal or new hardware by ensuring hardware robustness under high-duty usage, and (2) ensuring tape libraries are visually interesting during datacenter tours.

  15. Classroom Constructivism.

    ERIC Educational Resources Information Center

    Trotter, Andrew

    1995-01-01

    Constructivism, which holds that knowledge is created out of each individual's own experience, is recapturing researchers' attention. To constructivists, teachers are not omniscient oracles, but nutritionists providing an environment for children to grow their own knowledge. Students might learn division by planning a field trip instead of…

  16. FastLane: An Agile Congestion Signaling Mechanism for Improving Datacenter Performance

    DTIC Science & Technology

    2013-05-20

    Cloudera, Ericsson, Facebook, General Electric, Hortonworks, Huawei, Intel, Microsoft, NetApp, Oracle, Quanta, Samsung, Splunk, VMware and Yahoo...Web Services, Google, SAP, Blue Goji, Cisco, Clearstory Data, Cloudera, Ericsson, Facebook, General Electric, Hortonworks, Huawei, Intel, Microsoft

  17. Telereference Services: The Potential for Libraries.

    ERIC Educational Resources Information Center

    Rice, James

    1983-01-01

    Discussion of applications of teleconferencing, technology which allows information delivery and communication to take place through a television system, highlights three types of systems (videotext, teletext, fully interactive television); systems in use (CEEFAX, Oracle, Prestel, Telidon, Viewtron); public resistance; and telereference services…

  18. CONTRACT ADMINISTRATIVE TRACKING SYSTEM (CATS)

    EPA Science Inventory

    The Contract Administrative Tracking System (CATS) was developed in response to an ORD NHEERL, Mid-Continent Ecology Division (MED)-recognized need for an automated tracking and retrieval system for Cost Reimbursable Level of Effort (CR/LOE) Contracts. CATS is an Oracle-based app...

  19. Toward a workbench for rodent brain image data: systems architecture and design.

    PubMed

    Moene, Ivar A; Subramaniam, Shankar; Darin, Dmitri; Leergaard, Trygve B; Bjaalie, Jan G

    2007-01-01

    We present a novel system for storing and manipulating microscopic images from sections through the brain and higher-level data extracted from such images. The system is designed and built on a three-tier paradigm and provides the research community with a web-based interface for facile use in neuroscience research. The Oracle relational database management system provides the ability to store a variety of objects relevant to the images and provides the framework for complex querying of data stored in the system. Further, the suite of applications intimately tied into the infrastructure in the application layer provide the user the ability not only to query and visualize the data, but also to perform analysis operations based on the tools embedded into the system. The presentation layer uses extant protocols of the modern web browser and this provides ease of use of the system. The present release, named Functional Anatomy of the Cerebro-Cerebellar System (FACCS), available through The Rodent Brain Workbench (http://rbwb.org/), is targeted at the functional anatomy of the cerebro-cerebellar system in rats, and holds axonal tracing data from these projections. The system is extensible to other circuits and projections and to other categories of image data and provides a unique environment for analysis of rodent brain maps in the context of anatomical data. The FACCS application assumes standard animal brain atlas models and can be extended to future models. The system is available both for interactive use from a remote web-browser client as well as for download to a local server machine.

  20. Analysis of the Finite Precision s-Step Biconjugate Gradient Method

    DTIC Science & Technology

    2014-03-13

    Center for Future Architecture Research, a member of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA, and ASPIRE Lab...industrial sponsors and affiliates Intel, Google, Nokia, NVIDIA, Oracle, and Samsung. Any opinions, findings, conclusions, or recommendations in this

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grahn, D.; Wright, B.J.; Carnes, B.A.

    A research reactor for exclusive use in experimental radiobiology was designed and built at Argonne National Laboratory in the 1960s. It was located in a special addition to Building 202, which housed the Division of Biological and Medical Research. Its location assured easy access for all users to the animal facilities, and it was also near the existing gamma-irradiation facilities. The water-cooled, heterogeneous 200-kW(th) reactor, named JANUS, became the focal point for a range of radiobiological studies gathered under the rubric of "the JANUS program". The program ran from about 1969 to 1992 and included research at all levels of biological organization, from subcellular to organism. More than a dozen moderate- to large-scale studies with the B6CF1 mouse were carried out; these focused on the late effects of whole-body exposure to gamma rays or fission neutrons, in matching exposure regimes. In broad terms, these studies collected data on survival and on the pathology observed at death. A deliberate effort was made to establish the cause of death. This archive describes these late-effects studies and their general findings. The database includes exposure parameters, time of death, and the gross pathology and histopathology in codified form. A series of appendices describes all pathology procedures and codes, treatment or irradiation codes, and the manner in which the data can be accessed in the ORACLE database management system. A series of tables also presents summaries of the individual experiments in terms of radiation quality, sample sizes at entry, mean survival times by sex, and number of gross pathology and histopathology records.

  2. eCOMPAGT integrates mtDNA: import, validation and export of mitochondrial DNA profiles for population genetics, tumour dynamics and genotype-phenotype association studies.

    PubMed

    Weissensteiner, Hansi; Schönherr, Sebastian; Specht, Günther; Kronenberg, Florian; Brandstätter, Anita

    2010-03-09

    Mitochondrial DNA (mtDNA) is widely being used for population genetics, forensic DNA fingerprinting and clinical disease association studies. The recent past has uncovered severe problems with mtDNA genotyping, not only due to the genotyping method itself, but mainly to the post-lab transcription, storage and report of mtDNA genotypes. eCOMPAGT, a system to store, administer and connect phenotype data to all kinds of genotype data is now enhanced by the possibility of storing mtDNA profiles and allowing their validation, linking to phenotypes and export as numerous formats. mtDNA profiles can be imported from different sequence evaluation programs, compared between evaluations and their haplogroup affiliations stored. Furthermore, eCOMPAGT has been improved in its sophisticated transparency (support of MySQL and Oracle), security aspects (by using database technology) and the option to import, manage and store genotypes derived from various genotyping methods (SNPlex, TaqMan, and STRs). It is a software solution designed for project management, laboratory work and the evaluation process all-in-one. The extended mtDNA version of eCOMPAGT was designed to enable error-free post-laboratory data handling of human mtDNA profiles. This software is suited for small to medium-sized human genetic, forensic and clinical genetic laboratories. The direct support of MySQL and the improved database security options render eCOMPAGT a powerful tool to build an automated workflow architecture for several genotyping methods. eCOMPAGT is freely available at http://dbis-informatik.uibk.ac.at/ecompagt.

  3. eCOMPAGT integrates mtDNA: import, validation and export of mitochondrial DNA profiles for population genetics, tumour dynamics and genotype-phenotype association studies

    PubMed Central

    2010-01-01

    Background Mitochondrial DNA (mtDNA) is widely being used for population genetics, forensic DNA fingerprinting and clinical disease association studies. The recent past has uncovered severe problems with mtDNA genotyping, not only due to the genotyping method itself, but mainly to the post-lab transcription, storage and report of mtDNA genotypes. Description eCOMPAGT, a system to store, administer and connect phenotype data to all kinds of genotype data is now enhanced by the possibility of storing mtDNA profiles and allowing their validation, linking to phenotypes and export as numerous formats. mtDNA profiles can be imported from different sequence evaluation programs, compared between evaluations and their haplogroup affiliations stored. Furthermore, eCOMPAGT has been improved in its sophisticated transparency (support of MySQL and Oracle), security aspects (by using database technology) and the option to import, manage and store genotypes derived from various genotyping methods (SNPlex, TaqMan, and STRs). It is a software solution designed for project management, laboratory work and the evaluation process all-in-one. Conclusions The extended mtDNA version of eCOMPAGT was designed to enable error-free post-laboratory data handling of human mtDNA profiles. This software is suited for small to medium-sized human genetic, forensic and clinical genetic laboratories. The direct support of MySQL and the improved database security options render eCOMPAGT a powerful tool to build an automated workflow architecture for several genotyping methods. eCOMPAGT is freely available at http://dbis-informatik.uibk.ac.at/ecompagt. PMID:20214782

  4. Implementing a Community-Driven Cyberinfrastructure Platform for the Paleo- and Rock Magnetic Scientific Fields that Generalizes to Other Geoscience Disciplines

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Jarboe, N.; Koppers, A. A.; Tauxe, L.; Constable, C.

    2013-12-01

    EarthRef.org is a geoscience umbrella website for several databases and data and model repository portals. These portals, unified in the mandate to preserve their respective data and promote scientific collaboration in their fields, are also disparate in their schemata. The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the paleo- and rock magnetic scientific community to archive their wealth of peer-reviewed raw data and interpretations from studies on natural and synthetic samples and relies on a partially strict subsumptive hierarchical data model. The Geochemical Earth Reference Model (http://earthref.org/GERM/) portal focuses on the chemical characterization of the Earth and relies on two data schemata: a repository of peer-reviewed reservoir geochemistry, and a database of partition coefficients for rocks, minerals, and elements. The Seamount Biogeosciences Network (http://earthref.org/SBN/) encourages the collaboration between the diverse disciplines involved in seamount research and includes the Seamount Catalog (http://earthref.org/SC/) of bathymetry and morphology. All of these portals also depend on the EarthRef Reference Database (http://earthref.org/ERR/) for publication reference metadata and the EarthRef Digital Archive (http://earthref.org/ERDA/), a generic repository of data objects and their metadata. The development of the new MagIC Search Interface (http://earthref.org/MagIC/search/) centers on a reusable platform designed to be flexible enough for largely heterogeneous datasets and to scale up to datasets with tens of millions of records. The HTML5 web application and Oracle 11g database residing at the San Diego Supercomputer Center (SDSC) support the online contribution and editing of complex datasets in a spreadsheet environment and the browsing and filtering of these contributions in the context of thousands of other datasets. EarthRef.org is in the process of implementing this platform across all of its data portals in spite of the wide variety of data schemata and is dedicated to serving the geoscience community with as little effort from the end-users as possible.

  5. Prototype and Evaluation of AutoHelp: A Case-based, Web-accessible Help Desk System for EOSDIS

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.; Thurman, David A.

    1999-01-01

    AutoHelp is a case-based, Web-accessible help desk for users of the EOSDIS. It uses a combination of advanced computer and Web technologies, knowledge-based systems tools, and cognitive engineering to offload the current, person-intensive help desk facilities at the DAACs. As a case-based system, AutoHelp starts with an organized database of previous help requests (questions and answers) indexed by a hierarchical category structure that facilitates recognition by persons seeking assistance. As an initial proof-of-concept demonstration, a month of email help requests to the Goddard DAAC were analyzed and partially organized into help request cases. These cases were then categorized to create a preliminary case indexing system, or category structure. This category structure allows potential users to identify or recognize categories of questions, responses, and sample cases similar to their needs. Year one of this research project focused on the development of a technology demonstration. User assistance 'cases' are stored in an Oracle database in a combination of tables linking prototypical questions with responses and detailed examples from the email help requests analyzed to date. When a potential user accesses the AutoHelp system, a Web server provides a Java applet that displays the category structure of the help case base organized by the needs of previous users. When the user identifies or requests a particular type of assistance, the applet uses Java database connectivity (JDBC) software to access the database and extract the relevant cases. The demonstration will include an on-line presentation of how AutoHelp is currently structured. We will show how a user might request assistance via the Web interface and how the AutoHelp case base provides assistance. The presentation will describe the DAAC data collection, case definition, and organization to date, as well as the AutoHelp architecture. It will conclude with the year 2 proposal to more fully develop the case base, the user interface (including the category structure), interface with the current DAAC Help System, the development of tools to add new cases, and user testing and evaluation at (perhaps) the Goddard DAAC.
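
    The prototype retrieved cases with a Java applet over JDBC; purely as an illustration of the same category-to-cases lookup, here is an analogous sketch in Python using the python-oracledb driver. The connection details and the help_case table and columns are hypothetical.

        import oracledb

        def cases_for_category(category):
            # Placeholder credentials and DSN; the schema below is invented for illustration.
            with oracledb.connect(user="autohelp", password="secret", dsn="dbhost/helpdesk") as conn:
                with conn.cursor() as cur:
                    cur.execute(
                        """SELECT question, response
                           FROM help_case
                           WHERE category = :cat
                           ORDER BY case_id""",
                        {"cat": category},
                    )
                    return cur.fetchall()

        # e.g. cases_for_category("data ordering") would return the prior question/answer
        # pairs indexed under that node of the category structure.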

  6. Migration to Current Open Source Technologies by MagIC Enables a More Responsive Website, Quicker Development Times, and Increased Community Engagement

    NASA Astrophysics Data System (ADS)

    Jarboe, N.; Minnett, R.; Koppers, A.; Constable, C.; Tauxe, L.; Jonestrask, L.

    2017-12-01

    The Magnetics Information Consortium (MagIC) supports an online database for the paleo, geo, and rock magnetic communities (https://earthref.org/MagIC). Researchers can upload data into the archive and download data as selected with a sophisticated search system. MagIC has completed the transition from an Oracle-backed, Perl-based, server-oriented website to an ElasticSearch-backed, Meteor-based thick-client technology stack. Using JavaScript on both the server and the client enables increased code reuse and allows many computational operations to be easily offloaded to the client for faster response. On-the-fly data validation, column header suggestion, and online spreadsheet editing are some new features available with the new system. The 3.0 data model, method codes, and vocabulary lists can be browsed via the MagIC website and more easily updated. Source code for MagIC is publicly available on GitHub (https://github.com/earthref/MagIC). The MagIC file format is natively compatible with the PmagPy (https://github.com/PmagPy/PmagPy) paleomagnetic analysis software. MagIC files can now be downloaded from the database and viewed and interpreted in the PmagPy GUI-based tool, pmag_gui. Changes or interpretations of the data can then be saved by pmag_gui in the MagIC 3.0 data format and easily uploaded to the MagIC database. The rate of new contributions to the database has been increasing, with many labs contributing measurement-level data for the first time in the last year. Over a dozen file format conversion scripts are available for translating non-MagIC measurement data files into the MagIC format for easy uploading. We will continue to work with more labs until the whole community has a manageable workflow for contributing their measurement-level data. MagIC will continue to provide a global repository for archiving and retrieving paleomagnetic and rock magnetic data and, with the new system in place, be able to more quickly respond to the community's requests for changes and improvements.

  7. Introducing the Forensic Research/Reference on Genetics knowledge base, FROG-kb

    PubMed Central

    2012-01-01

    Background Online tools and databases based on multi-allelic short tandem repeat polymorphisms (STRPs) are actively used in forensic teaching, research, and investigations. The Fst value of each CODIS marker tends to be low across the populations of the world and most populations typically have all the common STRP alleles present diminishing the ability of these systems to discriminate ethnicity. Recently, considerable research is being conducted on single nucleotide polymorphisms (SNPs) to be considered for human identification and description. However, online tools and databases that can be used for forensic research and investigation are limited. Methods The back end DBMS (Database Management System) for FROG-kb is Oracle version 10. The front end is implemented with specific code using technologies such as Java, Java Servlet, JSP, JQuery, and GoogleCharts. Results We present an open access web application, FROG-kb (Forensic Research/Reference on Genetics-knowledge base, http://frog.med.yale.edu), that is useful for teaching and research relevant to forensics and can serve as a tool facilitating forensic practice. The underlying data for FROG-kb are provided by the already extensively used and referenced ALlele FREquency Database, ALFRED (http://alfred.med.yale.edu). In addition to displaying data in an organized manner, computational tools that use the underlying allele frequencies with user-provided data are implemented in FROG-kb. These tools are organized by the different published SNP/marker panels available. This web tool currently has implemented general functions possible for two types of SNP panels, individual identification and ancestry inference, and a prediction function specific to a phenotype informative panel for eye color. Conclusion The current online version of FROG-kb already provides new and useful functionality. We expect FROG-kb to grow and expand in capabilities and welcome input from the forensic community in identifying datasets and functionalities that will be most helpful and useful. Thus, the structure and functionality of FROG-kb will be revised in an ongoing process of improvement. This paper describes the state as of early June 2012. PMID:22938150
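
    Conceptually, the individual-identification functions mentioned above reduce to multiplying population genotype frequencies across independent SNPs. A simplified, hedged sketch follows (invented SNP identifiers and frequencies; FROG-kb itself draws the real frequencies from ALFRED and handles population-specific panels more carefully).

        import math

        def genotype_frequency(p_ref, genotype):
            """Hardy-Weinberg genotype frequency for a biallelic SNP with reference-allele frequency p_ref."""
            q = 1.0 - p_ref
            return {"RR": p_ref ** 2, "RA": 2 * p_ref * q, "AA": q ** 2}[genotype]

        # Hypothetical observed genotypes: (SNP id, reference-allele frequency, genotype).
        observed = [("rs0001", 0.62, "RR"), ("rs0002", 0.35, "RA"), ("rs0003", 0.80, "AA")]

        # Under independence, the random match probability is the product over SNPs.
        match_probability = math.prod(genotype_frequency(p, g) for _, p, g in observed)
        print(f"random match probability: {match_probability:.3e}")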

  8. Robust Variable Selection with Exponential Squared Loss.

    PubMed

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-04-01

    Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are √n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods.
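
    For orientation, a hedged sketch of the estimator's general form, assuming the usual definition of the exponential squared loss and generic penalty notation (this is not the paper's exact display):

        % phi_gamma is the exponential squared loss; p_lambda is a penalty such as SCAD.
        \phi_{\gamma}(t) = 1 - \exp\!\left(-t^{2}/\gamma\right), \qquad
        \hat{\beta} = \arg\min_{\beta} \; \sum_{i=1}^{n} \phi_{\gamma}\!\left(y_i - x_i^{\top}\beta\right)
                      + n \sum_{j=1}^{d} p_{\lambda}\!\left(|\beta_j|\right).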

  9. Robust Variable Selection with Exponential Squared Loss

    PubMed Central

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-01-01

    Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods. PMID:23913996

  10. The efficacy of cladribine tablets in CIS patients retrospectively assigned the diagnosis of MS using modern criteria: Results from the ORACLE-MS study.

    PubMed

    Freedman, Mark S; Leist, Thomas P; Comi, Giancarlo; Cree, Bruce Ac; Coyle, Patricia K; Hartung, Hans-Peter; Vermersch, Patrick; Damian, Doris; Dangond, Fernando

    2017-01-01

    Multiple sclerosis (MS) diagnostic criteria have changed since the ORACLE-MS study was conducted; 223 of 616 patients (36.2%) would have met the diagnosis of MS vs clinically isolated syndrome (CIS) using the newer criteria. The objective of this paper is to assess the effect of cladribine tablets in patients with a first clinical demyelinating attack fulfilling newer criteria (McDonald 2010) for MS vs CIS. A post hoc analysis for subgroups of patients retrospectively classified as fulfilling or not fulfilling newer criteria at the first clinical demyelinating attack was conducted. Cladribine tablets 3.5 mg/kg (n = 68) reduced the risk of next attack or three-month confirmed Expanded Disability Status Scale (EDSS) worsening by 74% vs placebo (n = 72); p = 0.0009 in patients meeting newer criteria for MS at baseline. Cladribine tablets 5.25 mg/kg (n = 83) reduced the risk of next attack or three-month confirmed EDSS worsening by 37%, but nominal significance was not reached (p = 0.14). In patients who were still CIS after applying newer criteria, cladribine tablets 3.5 mg/kg (n = 138) reduced the risk of conversion to clinically definite multiple sclerosis (CDMS) by 63% vs placebo (n = 134); p = 0.0003. Cladribine tablets 5.25 mg/kg (n = 121) reduced the risk of conversion by 75% vs placebo (n = 134); p < 0.0001. Regardless of the criteria used to define CIS or MS, 3.5 mg/kg cladribine tablets are effective in patients with a first clinical demyelinating attack. ClinicalTrials.gov registration: The ORACLE-MS study (NCT00725985).

  11. Cloud Condensation Nuclei Measurements During the First Year of the ORACLES Study

    NASA Astrophysics Data System (ADS)

    Kacarab, M.; Howell, S. G.; Wood, R.; Redemann, J.; Nenes, A.

    2016-12-01

    Aerosols have significant impacts on air quality and climate. Their ability to scatter and absorb radiation and to act as cloud condensation nuclei (CCN) plays a very important role in the global climate. Biomass burning organic aerosol (BBOA) can drastically elevate the concentration of CCN in clouds, but the response in droplet number may be strongly suppressed (or even reversed) owing to low supersaturations that may develop from the strong competition of water vapor (Bougiatioti et al. 2016). Understanding and constraining the magnitude of droplet response to biomass burning plumes is an important component of the aerosol-cloud interaction problem. The southeastern Atlantic (SEA) cloud deck provides a unique opportunity to study these cloud-BBOA interactions for marine stratocumulus, as it is overlain by a large, optically thick biomass burning aerosol plume from Southern Africa during the burning season. The interaction between these biomass burning aerosols and the SEA cloud deck is being investigated in the NASA ObseRvations of Aerosols above Clouds and their intEractionS (ORACLES) study. The CCN activity of aerosol around the SEA cloud deck and associated biomass burning plume was evaluated during the first year of the ORACLES study with direct measurements of CCN concentration, aerosol size distribution and composition onboard the NASA P-3 aircraft during August and September of 2016. Here we present analysis of the observed CCN activity of the BBOA aerosol in and around the SEA cloud deck and its relationship to aerosol size, chemical composition, and plume mixing and aging. We also evaluate the predicted and observed droplet number sensitivity to the aerosol fluctuations and quantify, using the data, the drivers of droplet number variability (vertical velocity or aerosol properties) as a function of biomass burning plume characteristics.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Edward J., Jr.; Henry, Karen Lynne

    Sandia National Laboratories develops technologies to: (1) sustain, modernize, and protect our nuclear arsenal; (2) prevent the spread of weapons of mass destruction; (3) provide new capabilities to our armed forces; (4) protect our national infrastructure; (5) ensure the stability of our nation's energy and water supplies; and (6) defend our nation against terrorist threats. We identified the need for a single overarching Integrated Workplace Management System (IWMS) that would enable us to focus on customer missions and improve FMOC processes. Our team selected highly configurable commercial-off-the-shelf (COTS) software with out-of-the-box workflow processes that integrate strategic planning, project management, facility assessments, and space management, and can interface with existing systems, such as Oracle, PeopleSoft, Maximo, Bentley, and FileNet. We selected the Integrated Workplace Management System (IWMS) from Tririga, Inc. Facility Management System (FMS) benefits are: (1) create a single reliable source for facility data; (2) improve transparency with oversight organizations; (3) streamline FMOC business processes with a single, integrated facility-management tool; (4) give customers simple tools and real-time information; (5) reduce indirect costs; (6) replace approximately 30 FMOC systems and 60 homegrown tools (such as Microsoft Access databases); and (7) integrate with FIMS.

  13. A WebGIS-based system for analyzing and visualizing air quality data for Shanghai Municipality

    NASA Astrophysics Data System (ADS)

    Wang, Manyi; Liu, Chaoshun; Gao, Wei

    2014-10-01

    An online visual analytical system based on Java Web and WebGIS for air quality data for Shanghai Municipality was designed and implemented to quantitatively analyze and qualitatively visualize air quality data. Based on an analysis of the WebGIS and Java Web architectures, we first designed the overall system architecture, then specified the software and hardware environment, and determined the system's main function modules. The visual system was ultimately established with the DIV + CSS layout method combined with JSP, JavaScript, and other languages in the Java programming environment. Moreover, the Struts, Spring, and Hibernate (SSH) frameworks were integrated into the system to ease maintenance and expansion. To provide mapping service and spatial analysis functions, we selected ArcGIS for Server as the GIS server. We also used an Oracle database and an ESRI file geodatabase to store spatial and non-spatial data and to ensure data security. In addition, the response data from the Web server are resampled to implement rapid visualization through the browser. The experimental results indicate that the system responds quickly to users' requests and efficiently returns accurate processing results.

  14. Flexible data registration and automation in semiconductor production

    NASA Astrophysics Data System (ADS)

    Dudde, Ralf; Staudt-Fischbach, Peter; Kraemer, Benedict

    1997-08-01

    The need for cost reduction and flexibility in semiconductor production will result in a wider application of computer-based automation systems. With the setup of a new and advanced CMOS semiconductor line in the Fraunhofer Institute for Silicon Technology [ISIT, Itzehoe (D)], a new line information system (LIS) was introduced based on an advanced model for the underlying data structure. This data model was implemented in an ORACLE RDBMS. A cellworks-based system (JOSIS) was used for the integration of the production equipment, communication, and automated database bookings and information retrievals. During the ramp-up of the production line, this new system is used for fab control. The data model and the cellworks-based system integration are explained. This system provides an on-line overview of work in progress in the fab, lot order history, and equipment status and history. Based on these figures, improved production and cost monitoring and optimization are possible. First examples of the information gained by this system are presented. The modular set-up of the LIS system will allow easy data exchange with additional software tools such as schedulers, different fab control systems such as PROMIS, and accounting systems such as SAP. Modifications necessary for the integration of PROMIS are described.

  15. Comparative Analysis of Data Structures for Storing Massive Tins in a Dbms

    NASA Astrophysics Data System (ADS)

    Kumar, K.; Ledoux, H.; Stoter, J.

    2016-06-01

    Point cloud data are an important source for 3D geoinformation. Modern-day 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for an area of just a few square kilometers. With the size of the point clouds exceeding the billion mark for even a small area, there is a need for their efficient storage and management. These point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible, which is confirmed by the initial implementations in Oracle Spatial SDO_PC and the PostgreSQL Point Cloud extension. But to be able to analyse and extract useful information from point clouds, we need more than just points, i.e. we require the surface defined by these points in space. There are different ways to represent surfaces in GIS, including grids, TINs, boundary representations, etc. In this study, we investigate database solutions for the storage and management of massive TINs. The classical (face- and edge-based) and compact (star-based) data structures are discussed at length with reference to their structure, advantages, and limitations in handling massive triangulations, and are compared with the current solution of PostGIS Simple Feature. The main test dataset is the TIN generated from the third national elevation model of the Netherlands (AHN3), with a point density of over 10 points/m². The PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying the limitations of the existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.
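
    A minimal sketch of the idea behind a star-based structure: store, per vertex, the set of vertices it shares an edge with (its star), from which incident triangles can be reconstructed. This naive, unordered version is for illustration only and is not the structure benchmarked in the paper.

        from collections import defaultdict

        triangles = [(0, 1, 2), (0, 2, 3)]  # toy TIN: two triangles sharing edge (0, 2)

        stars = defaultdict(set)
        for a, b, c in triangles:
            stars[a].update((b, c))
            stars[b].update((a, c))
            stars[c].update((a, b))

        # In a DBMS, each star can be stored as one row (vertex id, array of linked vertex
        # ids) instead of one row per triangle, which is what makes the structure compact.
        print(dict(stars))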

  16. Completing the Physical Representation of Quantum Algorithms Provides a Quantitative Explanation of Their Computational Speedup

    NASA Astrophysics Data System (ADS)

    Castagnoli, Giuseppe

    2018-03-01

    The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete. We complete it in three steps: (i) extending the representation to the process of setting the problem, (ii) relativizing the extended representation to the problem solver, from whom the problem setting must be concealed, and (iii) symmetrizing the relativized representation for time reversal to represent the reversibility of the underlying physical process. The third step projects the input state of the representation, where the problem solver is completely ignorant of the setting and thus of the solution of the problem, onto one where she knows half of the solution (half of the information specifying it when the solution is an unstructured bit string). Completing the physical representation shows that the number of computation steps (oracle queries) required to solve any oracle problem in an optimal quantum way should be that of a classical algorithm endowed with advance knowledge of half of the solution.

  17. Entropy-functional-based online adaptive decision fusion framework with application to wildfire detection in video.

    PubMed

    Gunay, Osman; Toreyin, Behçet Ugur; Kose, Kivanc; Cetin, A Enis

    2012-05-01

    In this paper, an entropy-functional-based online adaptive decision fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several subalgorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular subalgorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing subalgorithms. It is assumed that there is an oracle, who is usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm. In this case, image data arrive sequentially, and the oracle is the security guard of the forest lookout tower, verifying the decision of the combined algorithm. The simulation results are presented.
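
    A toy sketch of online adaptive decision fusion in the multiplicative-update family that entropic projections belong to. This is not the exact EADF update derived in the paper; the sub-algorithm decision values and the learning rate are invented.

        import math

        def fuse(weights, decisions):
            """Linearly combine sub-algorithm confidence values (each roughly in [-1, 1])."""
            return sum(w * d for w, d in zip(weights, decisions))

        def update(weights, decisions, oracle_label, lr=0.5):
            """Exponentiated-gradient style step toward the oracle's decision, then renormalize."""
            error = oracle_label - fuse(weights, decisions)
            raw = [w * math.exp(lr * error * d) for w, d in zip(weights, decisions)]
            total = sum(raw)
            return [r / total for r in raw]

        weights = [1 / 3] * 3
        frame_decisions = [0.8, -0.2, 0.4]  # e.g. colour, motion, and flicker sub-algorithms
        weights = update(weights, frame_decisions, oracle_label=1.0)  # guard confirms wildfire
        print(weights)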

  18. Variable Selection for Support Vector Machines in Moderately High Dimensions

    PubMed Central

    Zhang, Xiang; Wu, Yichao; Wang, Lan; Li, Runze

    2015-01-01

    The support vector machine (SVM) is a powerful binary classification tool with high accuracy and great flexibility. It has achieved great success, but its performance can be seriously impaired if many redundant covariates are included. Some efforts have been devoted to studying variable selection for SVMs, but asymptotic properties such as variable selection consistency are largely unknown when the number of predictors diverges to infinity. In this work, we establish a unified theory for a general class of nonconvex penalized SVMs. We first prove that, in ultra-high dimensions, there exists one local minimizer of the objective function of nonconvex penalized SVMs possessing the desired oracle property. We further address the problem of nonunique local minimizers by showing that the local linear approximation algorithm is guaranteed to converge to the oracle estimator even in the ultra-high dimensional setting if an appropriate initial estimator is available. This condition on the initial estimator is verified to be automatically valid as long as the dimensions are moderately high. Numerical examples provide supportive evidence. PMID:26778916
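
    The local linear approximation (LLA) idea mentioned above can be sketched as an iteratively reweighted L1 SVM: at each step the nonconvex SCAD penalty is linearised at the current coefficients and the resulting weighted-L1 problem is solved. The sketch below is an illustration under assumed settings (not the authors' code), using scikit-learn's LinearSVC and the standard column-rescaling trick to impose per-feature weights.

        import numpy as np
        from sklearn.svm import LinearSVC

        def scad_derivative(beta, lam, a=3.7):
            """Derivative of the SCAD penalty evaluated at |beta|."""
            t = np.abs(beta)
            return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

        def lla_svm(X, y, lam=0.1, n_iter=3, C=1.0, eps=1e-6):
            """Iteratively reweighted L1 SVM as a local linear approximation sketch."""
            beta = np.zeros(X.shape[1])
            for _ in range(n_iter):
                w = np.maximum(scad_derivative(beta, lam), eps)  # per-feature weights
                # Minimising loss(X b) + sum_j w_j |b_j| is equivalent to an ordinary
                # L1 problem in rescaled coordinates X_j / w_j, with b_j = g_j / w_j.
                clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=C)
                clf.fit(X / w, y)
                beta = clf.coef_.ravel() / w
            return beta

        # Toy example: only the first two of ten features carry signal.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))
        y = np.sign(X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=200))
        print(np.round(lla_svm(X, y), 2))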

  19. A bittersweet story: the true nature of the laurel of the Oracle of Delphi.

    PubMed

    Harissis, Haralampos V

    2014-01-01

    It is known from ancient sources that "laurel," identified with sweet bay, was used at the ancient Greek oracle of Delphi. The Pythia, the priestess who spoke the prophecies, purportedly used laurel as a means to inspire her divine frenzy. However, the clinical symptoms of the Pythia, as described in ancient sources, cannot be attributed to the use of sweet bay, which is harmless. A review of contemporary toxicological literature indicates that it is oleander that causes symptoms similar to those of the Pythia, while a closer examination of ancient literary texts indicates that oleander was often included under the generic term laurel. It is therefore likely that it was oleander, not sweet bay, that the Pythia used before the oracular procedure. This explanation could also shed light on other ancient accounts regarding the alleged spirit and chasm of Delphi, accounts that have been the subject of intense debate and interdisciplinary research for the last hundred years.

  20. Reliability of programs specified with equational specifications

    NASA Astrophysics Data System (ADS)

    Nikolik, Borislav

    Ultrareliability is desirable (and sometimes a demand of regulatory authorities) for safety-critical applications, such as commercial flight-control programs, medical applications, nuclear reactor control programs, etc. A method is proposed, called the Term Redundancy Method (TRM), for obtaining ultrareliable programs through specification-based testing. Current specification-based testing schemes need a prohibitively large number of testcases for estimating ultrareliability. They assume availability of an accurate program-usage distribution prior to testing, and they assume the availability of a test oracle. It is shown how to obtain ultrareliable programs (probability of failure near zero) with a practical number of testcases, without accurate usage distribution, and without a test oracle. TRM applies to the class of decision Abstract Data Type (ADT) programs specified with unconditional equational specifications. TRM is restricted to programs that do not exceed certain efficiency constraints in generating testcases. The effectiveness of TRM in failure detection and recovery is demonstrated on formulas from the aircraft collision avoidance system TCAS.

  1. LimsPortal and BonsaiLIMS: development of a lab information management system for translational medicine

    PubMed Central

    2011-01-01

    Background Laboratory Information Management Systems (LIMS) are an increasingly important part of modern laboratory infrastructure. As typically very sophisticated software products, LIMS often require considerable resources to select, deploy and maintain. Larger organisations may have access to specialist IT support to assist with requirements elicitation and software customisation; however, smaller groups will often have limited IT support to perform the kind of iterative development that can resolve the difficulties that biologists often have when specifying requirements. Translational medicine aims to accelerate the process of treatment discovery by bringing together multiple disciplines to discover new approaches to treating disease, or novel applications of existing treatments. The diverse set of disciplines and the complexity of processing procedures involved, especially with the use of high-throughput technologies, bring difficulties in customizing a generic LIMS to provide a single system for managing sample-related data within a translational medicine research setting, especially where limited IT support is available. Results We have designed and developed a LIMS, BonsaiLIMS, around a very simple data model that can be easily implemented using a variety of technologies, and can be easily extended as specific requirements dictate. A reference implementation using an Oracle 11g database and the Python framework Django is presented. Conclusions By focusing on a minimal feature set and a modular design we have been able to deploy the BonsaiLIMS system very quickly. The benefits to our institute have been the avoidance of the prolonged implementation timescales, budget overruns, scope creep, off-specifications and user fatigue issues that typify many enterprise software implementations. The transition away from using local, uncontrolled records in spreadsheet and paper formats to a centrally held, secured and backed-up database brings the immediate benefits of improved data visibility, audit and overall data quality. The open-source availability of this software allows others to rapidly implement a LIMS which in itself might sufficiently address user requirements. In situations where this software does not meet requirements, it can serve to elicit more accurate specifications from end-users for a more heavyweight LIMS by acting as a demonstrable prototype. PMID:21569484
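
    A minimal sketch of the kind of Django-on-Oracle setup the reference implementation describes (configuration and model fragments only; the model and field names here are hypothetical and are not taken from BonsaiLIMS):

        # settings.py fragment: point Django's ORM at an Oracle database.
        DATABASES = {
            "default": {
                "ENGINE": "django.db.backends.oracle",
                "NAME": "orcl",            # TNS name or DSN (assumed)
                "USER": "lims",
                "PASSWORD": "change-me",
            }
        }

        # models.py: a deliberately simple, easily extended sample-tracking model.
        from django.db import models

        class Sample(models.Model):
            barcode = models.CharField(max_length=32, unique=True)
            sample_type = models.CharField(max_length=64)
            received = models.DateTimeField(auto_now_add=True)
            notes = models.TextField(blank=True)

        class Aliquot(models.Model):
            parent = models.ForeignKey(Sample, on_delete=models.CASCADE,
                                       related_name="aliquots")
            volume_ul = models.FloatField()
            location = models.CharField(max_length=64, blank=True)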

  2. A Grid Metadata Service for Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni

    2010-05-01

    Critical challenges for climate modelling researchers are strongly connected with increasingly complex simulation models and the huge quantities of datasets produced. Future trends in climate modelling will only increase computational and storage requirements. For this reason, the ability to transparently access both computational and data resources for large-scale, complex climate simulations must be considered a key requirement for Earth science and environmental distributed systems. From the data management perspective, (i) the quantity of data will continuously increase, (ii) data will become more and more distributed and widespread, (iii) data sharing/federation will be a key challenge among different sites distributed worldwide, and (iv) the potential community of users (large and heterogeneous) will be interested in discovering experimental results, searching metadata, browsing collections of files, comparing different results, displaying output, etc. A key element for carrying out data search and discovery, and for managing and accessing huge and distributed amounts of data, is the metadata handling framework. What we propose for the management of distributed datasets is the GRelC service (a data grid solution focusing on metadata management). Unlike classical approaches, the proposed data-grid solution is able to address scalability, transparency, security, efficiency and interoperability. The GRelC service we propose provides access to metadata stored in different and widespread data sources (relational databases running on top of MySQL, Oracle, DB2, etc., leveraging SQL as the query language, as well as XML databases - XIndice, eXist, and libxml2-based documents, adopting either XPath or XQuery), providing a strong data virtualization layer in a grid environment. Such a technological solution for distributed metadata management (i) leverages well-known, widely adopted standards (W3C, OASIS, etc.); (ii) supports role-based management (based on VOMS), which increases flexibility and scalability; (iii) provides full support for the Grid Security Infrastructure (authorization, mutual authentication, data integrity, data confidentiality and delegation); (iv) is compatible with existing grid middleware such as gLite and Globus; and finally (v) is currently adopted at the Euro-Mediterranean Centre for Climate Change (CMCC - Italy) to manage the entire CMCC data production activity, as well as in the international Climate-G testbed.

  3. CruiseViewer: SIOExplorer Graphical Interface to Metadata and Archives.

    NASA Astrophysics Data System (ADS)

    Sutton, D. W.; Helly, J. J.; Miller, S. P.; Chase, A.; Clark, D.

    2002-12-01

    We are introducing "CruiseViewer" as a prototype graphical interface for the SIOExplorer digital library project, part of the overall NSF National Science Digital Library (NSDL) effort. When complete, CruiseViewer will provide access to nearly 800 cruises, as well as 100 years of documents and images from the archives of the Scripps Institution of Oceanography (SIO). The project emphasizes data object accessibility, a rich metadata format, efficient uploading methods and interoperability with other digital libraries. The primary function of CruiseViewer is to provide a human interface to the metadata database and to storage systems filled with archival data. The system schema is based on the concept of an "arbitrary digital object" (ADO). Arbitrary in that if the object can be stored on a computer system then SIOExplore can manage it. Common examples are a multibeam swath bathymetry file, a .pdf cruise report, or a tar file containing all the processing scripts used on a cruise. We require a metadata file for every ADO in an ascii "metadata interchange format" (MIF), which has proven to be highly useful for operability and extensibility. Bulk ADO storage is managed using the Storage Resource Broker, SRB, data handling middleware developed at the San Diego Supercomputer Center that centralizes management and access to distributed storage devices. MIF metadata are harvested from several sources and housed in a relational (Oracle) database. For CruiseViewer, cgi scripts resident on an Apache server are the primary communication and service request handling tools. Along with the CruiseViewer java application, users can query, access and download objects via a separate method that operates through standard web browsers, http://sioexplorer.ucsd.edu. Both provide the functionability to query and view object metadata, and select and download ADOs. For the CruiseViewer application Java 2D is used to add a geo-referencing feature that allows users to select basemap images and have vector shapes representing query results mapped over the basemap in the image panel. The two methods together address a wide range of user access needs and will allow for widespread use of SIOExplorer.

  4. LimsPortal and BonsaiLIMS: development of a lab information management system for translational medicine.

    PubMed

    Bath, Timothy G; Bozdag, Selcuk; Afzal, Vackar; Crowther, Daniel

    2011-05-13

    Laboratory Information Management Systems (LIMS) are an increasingly important part of modern laboratory infrastructure. As typically very sophisticated software products, LIMS often require considerable resources to select, deploy and maintain. Larger organisations may have access to specialist IT support to assist with requirements elicitation and software customisation; however, smaller groups will often have limited IT support to perform the kind of iterative development that can resolve the difficulties that biologists often have when specifying requirements. Translational medicine aims to accelerate the process of treatment discovery by bringing together multiple disciplines to discover new approaches to treating disease, or novel applications of existing treatments. The diverse set of disciplines and the complexity of processing procedures involved, especially with the use of high-throughput technologies, bring difficulties in customizing a generic LIMS to provide a single system for managing sample-related data within a translational medicine research setting, especially where limited IT support is available. We have designed and developed a LIMS, BonsaiLIMS, around a very simple data model that can be easily implemented using a variety of technologies, and can be easily extended as specific requirements dictate. A reference implementation using an Oracle 11g database and the Python framework Django is presented. By focusing on a minimal feature set and a modular design we have been able to deploy the BonsaiLIMS system very quickly. The benefits to our institute have been the avoidance of the prolonged implementation timescales, budget overruns, scope creep, off-specifications and user fatigue issues that typify many enterprise software implementations. The transition away from using local, uncontrolled records in spreadsheet and paper formats to a centrally held, secured and backed-up database brings the immediate benefits of improved data visibility, audit and overall data quality. The open-source availability of this software allows others to rapidly implement a LIMS which in itself might sufficiently address user requirements. In situations where this software does not meet requirements, it can serve to elicit more accurate specifications from end-users for a more heavyweight LIMS by acting as a demonstrable prototype.

  5. Towards Test Driven Development for Computational Science with pFUnit

    NASA Technical Reports Server (NTRS)

    Rilee, Michael L.; Clune, Thomas L.

    2014-01-01

    Developers working in Computational Science & Engineering (CSE)/High Performance Computing (HPC) must contend with constant change due to advances in computing technology and science. Test Driven Development (TDD) is a methodology that mitigates software development risks due to change, at the cost of adding comprehensive and continuous testing to the development process. Testing frameworks tailored for CSE/HPC, like pFUnit, can lower the barriers to such testing, yet CSE software faces unique constraints foreign to the broader software engineering community. Effective testing of numerical software requires a comprehensive suite of oracles, i.e., use cases with known answers, as well as robust estimates for the unavoidable numerical errors associated with implementation in finite-precision arithmetic. At first glance these concerns often seem exceedingly challenging or even insurmountable for real-world scientific applications. However, we argue that this common perception is incorrect and driven by (1) a conflation between model validation and software verification and (2) the general tendency in the scientific community to develop relatively coarse-grained, large procedures that compound numerous algorithmic steps. We believe TDD can be applied routinely to numerical software if developers pursue fine-grained implementations that permit testing, neatly side-stepping concerns about needing nontrivial oracles as well as the accumulation of errors. We present an example of a successful, complex legacy CSE/HPC code whose development process shares some aspects with TDD, which we contrast with current and potential capabilities. A mix of our proposed methodology and framework support should enable everyday use of TDD by CSE-expert developers.
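
    The fine-grained testing style argued for above can be illustrated with an ordinary unit test of a single numerical routine against a known "oracle" value, with an explicit tolerance for finite-precision arithmetic (shown here in Python rather than the Fortran pFUnit framework the paper targets):

        import math
        import unittest

        def trapezoid(f, a, b, n):
            """Composite trapezoidal rule -- a small, individually testable routine."""
            h = (b - a) / n
            return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

        class TestTrapezoid(unittest.TestCase):
            def test_against_analytic_oracle(self):
                # Analytic answer (the "oracle"): the integral of sin on [0, pi] is 2.
                approx = trapezoid(math.sin, 0.0, math.pi, 1000)
                # Tolerance chosen from the method's O(h^2) error, not machine epsilon.
                self.assertAlmostEqual(approx, 2.0, delta=1e-5)

        if __name__ == "__main__":
            unittest.main()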

  6. Tissues from the irradiated dog/mouse archive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gayle Woloschak

    The purpose of this project is to organize the databases/information and to organize and move the tissues from the long-term dog (4,000 dogs) and mouse (over 30,000 mice) radiation experiments done at Argonne National Laboratory during the 1970s and '80s to Northwestern University. These studies were done with the intention of understanding the effects of exposure to radiation at a variety of different doses, dose rates, and radiation qualities on end points such as life-shortening, carcinogenesis, cause of death, shifts in disease incidence and other biological parameters. Organ and tissue samples from these animals, including cancers, metastases and other significant degenerative and inflammatory lesions, together with a regular protocol of normal tissues, were preserved in paraffin blocks, tissue impressions and sections, and represent a great resource for the radiation biology community. These collections are particularly significant since these experiments are not likely to be repeated because of the extreme cost in money and time of such large-scale animal studies. The long-term goal is to make these tissues and databases available to the wider scientific community so that questions such as tissue sensitivity, early and late effects, low-dose and protracted-dose responses of normal and tumor tissues, etc. can be examined and defined. Recent advances in biology, particularly at the subcellular and molecular level, now permit microarray-based gene expression analyses from paraffin-embedded tissues (where RNA samples are significantly degraded), synchrotron-based studies of metal and other elemental distribution patterns in tissues, PCR-based analyses for mutation detection, and other similar approaches that were not available when the long-term animal studies were designed and initiated. Understanding the basis and progression of radiation damage should also permit rational approaches to prevention and mitigation of those damages. Therefore, as stated earlier, these tissues and their related documentation represent a significant resource for future studies. For this project, we propose to accomplish the following objectives: (1) inventory and organize the tissues, blood smears, wet tissues and paper-based information that is available in the tissue bank at Argonne National Laboratory; (2) convert the existing Oracle database of the mouse studies to MS Access (the dog data is already in this format, which is far more user-friendly and widely used in business and research); (3) move the remaining samples and documentation from dogs that had been transferred from ANL to New Mexico (in Dr. F. Hahn's care) to Northwestern University and add these to the inventory; and (4) move the tissues and Access database at Argonne National Laboratory to Northwestern University.

  7. Multi-Resource Fair Queueing for Packet Processing

    DTIC Science & Technology

    2012-06-19

    Acknowledgments (snippet): supported by a Google PhD Fellowship, gifts from Amazon Web Services, Google, SAP, Blue Goji, Cisco, Cloudera, Ericsson, General Electric, Hewlett Packard, Huawei, Intel, MarkLogic, Microsoft, NetApp, Oracle, Quanta, Splunk, VMware, and by DARPA (contract #FA8650-11-C-7136).

  8. Computer Aided Management for Information Processing Projects.

    ERIC Educational Resources Information Center

    Akman, Ibrahim; Kocamustafaogullari, Kemal

    1995-01-01

    Outlines the nature of information processing projects and discusses some project management programming packages. Describes an in-house interface program developed to utilize a selected project management package (TIMELINE) by using Oracle Data Base Management System tools and Pascal programming language for the management of information system…

  9. Analytical Applications of NMR: Summer Symposium on Analytical Chemistry.

    ERIC Educational Resources Information Center

    Borman, Stuart A.

    1982-01-01

    Highlights a symposium on analytical applications of nuclear magnetic resonance spectroscopy (NMR), discussing pulse Fourier transformation technique, two-dimensional NMR, solid state NMR, and multinuclear NMR. Includes description of ORACLE, an NMR data processing system at Syracuse University using real-time color graphics, and algorithms for…

  10. A resolution congratulating Oracle Team USA for winning the 34th America's Cup.

    THOMAS, 113th Congress

    Sen. Feinstein, Dianne [D-CA

    2013-11-05

    Senate - 11/05/2013: Submitted in the Senate, considered, and agreed to without amendment and with a preamble by Unanimous Consent. Tracker: This bill has the status Agreed to in Senate.

  11. Childhood outcomes after prescription of antibiotics to pregnant women with spontaneous preterm labour: 7-year follow-up of the ORACLE II trial.

    PubMed

    Kenyon, S; Pike, K; Jones, D R; Brocklehurst, P; Marlow, N; Salt, A; Taylor, D J

    2008-10-11

    The ORACLE II trial compared the use of erythromycin and/or amoxicillin-clavulanate (co-amoxiclav) with that of placebo for women in spontaneous preterm labour and intact membranes, without overt signs of clinical infection, by use of a factorial randomised design. The aim of the present study--the ORACLE Children Study II--was to determine the long-term effects on children after exposure to antibiotics in this clinical situation. We assessed children at age 7 years born to the 4221 women who had completed the ORACLE II study and who were eligible for follow-up with a structured parental questionnaire to assess the child's health status. Functional impairment was defined as the presence of any level of functional impairment (severe, moderate, or mild) derived from the mark III Multi-Attribute Health Status classification system. Educational outcomes were assessed with national curriculum test results for children resident in England. Outcome was determined for 3196 (71%) eligible children. Overall, a greater proportion of children whose mothers had been prescribed erythromycin, with or without co-amoxiclav, had any functional impairment than did those whose mothers had received no erythromycin (658 [42.3%] of 1554 children vs 574 [38.3%] of 1498; odds ratio 1.18, 95% CI 1.02-1.37). Co-amoxiclav (with or without erythromycin) had no effect on the proportion of children with any functional impairment, compared with receipt of no co-amoxiclav (624 [40.7%] of 1523 vs 608 [40.0%] of 1520; 1.03, 0.89-1.19). No effects were seen with either antibiotic on the number of deaths, other medical conditions, behavioural patterns, or educational attainment. However, more children whose mothers had received erythromycin or co-amoxiclav developed cerebral palsy than did those born to mothers who received no erythromycin or no co-amoxiclav, respectively (erythromycin: 53 [3.3%] of 1611 vs 27 [1.7%] of 1562, 1.93, 1.21-3.09; co-amoxiclav: 50 [3.2%] of 1587 vs 30 [1.9%] of 1586, 1.69, 1.07-2.67). The number needed to harm with erythromycin was 64 (95% CI 37-209) and with co-amoxiclav 79 (42-591). The prescription of erythromycin for women in spontaneous preterm labour with intact membranes was associated with an increase in functional impairment among their children at 7 years of age. The risk of cerebral palsy was increased by either antibiotic, although the overall risk of this condition was low. UK Medical Research Council.
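
    For readers checking the figures, the odds ratios and numbers needed to harm quoted above follow directly from the reported cerebral palsy counts; a short worked computation (point estimates only, without the confidence intervals):

        def odds_ratio(events_exposed, n_exposed, events_control, n_control):
            odds_e = events_exposed / (n_exposed - events_exposed)
            odds_c = events_control / (n_control - events_control)
            return odds_e / odds_c

        def number_needed_to_harm(events_exposed, n_exposed, events_control, n_control):
            risk_difference = events_exposed / n_exposed - events_control / n_control
            return 1.0 / risk_difference

        # Cerebral palsy counts reported in the ORACLE Children Study II.
        print(round(odds_ratio(53, 1611, 27, 1562), 2))           # ~1.93 (erythromycin)
        print(round(number_needed_to_harm(53, 1611, 27, 1562)))   # ~64
        print(round(odds_ratio(50, 1587, 30, 1586), 2))           # ~1.69 (co-amoxiclav)
        print(round(number_needed_to_harm(50, 1587, 30, 1586)))   # ~79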

  12. Computation in generalised probabilistic theories

    NASA Astrophysics Data System (ADS)

    Lee, Ciarán M.; Barrett, Jonathan

    2015-08-01

    From the general difficulty of simulating quantum systems using classical systems, and in particular the existence of an efficient quantum algorithm for factoring, it is likely that quantum computation is intrinsically more powerful than classical computation. At present, the best upper bound known for the power of quantum computation is that BQP ⊆ AWPP, where AWPP is a classical complexity class (known to be included in PP, hence PSPACE). This work investigates limits on computational power that are imposed by simple physical, or information theoretic, principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusions still hold? We show that given only an assumption of tomographic locality (roughly, that multipartite states and transformations can be characterized by local measurements), efficient computations are contained in AWPP. This inclusion still holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class, PostBQP, is equal to PP. Given only the assumption of tomographic locality, the inclusion in PP still holds for post-selected computation in general theories. Hence in a world with post-selection, quantum theory is optimal for computation in the space of all operational theories. We then consider whether one can obtain relativized complexity results for general theories. It is not obvious how to define a sensible notion of a computational oracle in the general framework that reduces to the standard notion in the quantum case. Nevertheless, it is possible to define computation relative to a 'classical oracle'. Then, we show there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption does not include NP.

  13. Changes in Patterns of Teacher Interaction in Primary Classrooms: 1976-96.

    ERIC Educational Resources Information Center

    Galton, Maurice; Hargreaves, Linda; Comber, Chris; Wall, Debbie; Pell, Tony

    1999-01-01

    Addresses the effectiveness of teaching methods in English primary school classrooms. Evaluates the interventions designed to change primary educators' teaching methods by replicating the Observational Research and Classroom Learning Evaluation (ORACLE) study that was originally conducted in 1976. Compares the results from the original study to…

  14. Change and Continuity in the Primary School: The Research Evidence.

    ERIC Educational Resources Information Center

    Galton, Maurice

    1987-01-01

    This article reviews findings from a 1975 through 1980 study called ORACLE (Observational Research and Classroom Learning Evaluation). Maintains that the data showed only partial implementation of the Plowden report recommendations. Seeks to explain the reasons for inconsistencies in implementation and offers suggestions for redefining progressive…

  15. Oracle or Monacle: Research Concerning Attitudes Toward Feminism.

    ERIC Educational Resources Information Center

    Prescott, Suzanne; Schmid, Margaret

    Both popular studies and more serious empirical studies of attitudes toward feminism are reviewed beginning with Clifford Kirkpatrick's early empirical work and including the more recent empirical studies completed since 1970. The review examines the contents of items used to measure feminism, and the methodology and sampling used in studies, as…

  16. Spurring Innovation

    ERIC Educational Resources Information Center

    McLester, Susan

    2005-01-01

    For many teachers and students, it was ThinkQuest that gave them their first experiences with the Internet. A collaborative competition, created by Advanced Network & Services (now managed by the Oracle Education Foundation), ThinkQuest asked students to form teams and create Web sites that were designed to help other students learn something. In…

  17. Computer-Assisted Learning in Language Arts

    ERIC Educational Resources Information Center

    Serwer, Blanche L.; Stolurow, Lawrence M.

    1970-01-01

    A description of computer program segments in the feasibility and development phase of Operationally Relevant Activities for Children's Language Experience (Project ORACLE); original form of this paper was prepared by Serwer for presentation to annual meeting of New England Research Association (1st, Boston College, June 5-6, 1969). (Authors/RD)

  18. Oracle Academy: Four Success Stories Model the Competitive Edge of CTE

    ERIC Educational Resources Information Center

    Techniques: Connecting Education and Careers, 2004

    2004-01-01

    "Vocational Education's" transformation to "Career and Technical Education" (CTE) is clearly underway as evidenced by the increased number of advanced topics and knowledge depth being taught in courses today. Technology advances across all industry sectors impact all CTE departments--from Agricultural Science to Business and Marketing. Building…

  19. Teletext--Prestel's Big Brother.

    ERIC Educational Resources Information Center

    Hughes, Geoffrey

    Prestel, Oracle, and Ceefax are telephone based video text systems currently in use in Great Britain. Rather than being considered as competitors, they should be viewed as complementary media with separate functions based on their differences. All use home television sets to receive information in print, and all broadcast on spare TV lines in the…

  20. Antibiotics in agroecosystems: state of the science “ARASOS” workshop summary

    USDA-ARS?s Scientific Manuscript database

    In August 2014, the Biosphere2 Conference Center in Oracle, Arizona was the site of a sunny three-day workshop on “Antibiotics in Agroecosystems: State of the Science.” Attended by 42 people, participants included scientists and others from academia, government agencies (including EPA and USDA), an...

  1. A Pioneer in Online Education Tries a MOOC

    ERIC Educational Resources Information Center

    Kirschner, Ann

    2012-01-01

    Surely "massive open online course" (MOOC) has one of the ugliest acronyms of recent years, lacking the deliberate playfulness of Yahoo (Yet Another Hierarchical Officious Oracle) or the droll shoulder shrug suggested by the word "snafu" (Situation Normal, All Fouled Up). The author is not a complete neophyte to online…

  2. America’s Heroes -- Preparing Our Vets for Civilian Employment

    DTIC Science & Technology

    2012-04-16

    business environment . Oracle’s Injured Veteran Job and Training Program offers on-the-job training for soldiers wounded in the Afghanistan and Iraq wars...Awards.aspx 10 ^ http://www.blackstone.com/news/news-views/ blackstone -portfolio-company- alliedbarton-supports-u.s.-veterans-and-reservists 11 ^ http

  3. Risk and the Internet of Things: Damocles, Pythia, or Pandora?

    PubMed

    Showell, Chris

    2016-01-01

    The Internet of Things holds great promise for healthcare, but also embodies a number of risks. This analysis suggests that the risks are as yet poorly delineated (having features in common with the oracle Pythia, and with Pandora and her box), and that adopting the precautionary principle is appropriate.

  4. Expert system decision support for low-cost launch vehicle operations

    NASA Technical Reports Server (NTRS)

    Szatkowski, G. P.; Levin, Barry E.

    1991-01-01

    Progress in assessing the feasibility, benefits, and risks associated with AI expert systems applied to low-cost expendable launch vehicle systems is described. Part one identified potential application areas in vehicle operations and on-board functions, assessed measures of cost benefit, and identified key technologies to aid in the implementation of decision support systems in this environment. Part two of the program began the development of prototypes to demonstrate real-time vehicle checkout with controller and diagnostic/analysis intelligent systems, and to gather true measures of cost savings vs. conventional software, verification and validation requirements, and maintainability improvement. The main objective of the expert advanced development projects was to provide a robust intelligent system for control/analysis that must be performed within a specified real-time window in order to meet the demands of the given application. The efforts to develop the two prototypes are described. Prime emphasis was on a controller expert system to show real-time performance in a cryogenic propellant loading application, and on experimental safety validation of this system using commercial off-the-shelf software tools and object-oriented programming techniques. This smart ground support equipment prototype is based in C with embedded expert system rules written in the CLIPS protocol. The relational database, ORACLE, provides non-real-time data support. The second demonstration develops the vehicle/ground intelligent automation concept from phase one to show cooperation between multiple expert systems. This automated test conductor (ATC) prototype utilizes a knowledge-bus approach for intelligent information processing by use of virtual sensors and blackboards to solve complex problems. It incorporates distributed processing of real-time data and object-oriented techniques for command, configuration control, and auto-code generation.

  5. Reducing Information Overload in Large Seismic Data Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HAMPTON,JEFFERY W.; YOUNG,CHRISTOPHER J.; MERCHANT,BION J.

    2000-08-02

    Event catalogs for seismic data can become very large. Furthermore, as researchers collect multiple catalogs and reconcile them into a single catalog that is stored in a relational database, the reconciled set becomes even larger. The sheer number of these events makes searching for relevant events to compare with events of interest problematic. Information overload in this form can lead to the data sets being under-utilized and/or used incorrectly or inconsistently. Thus, efforts have been initiated to research techniques and strategies for helping researchers to make better use of large data sets. In this paper, the authors present their efforts to do so in two ways: (1) the Event Search Engine, which is a waveform correlation tool, and (2) some content analysis tools, which are a combination of custom-built and commercial off-the-shelf tools for accessing, managing, and querying seismic data stored in a relational database. The current Event Search Engine is based on a hierarchical clustering tool known as the dendrogram tool, which is written as a MatSeis graphical user interface. The dendrogram tool allows the user to build dendrogram diagrams for a set of waveforms by controlling phase windowing, down-sampling, filtering, enveloping, and the clustering method (e.g. single linkage, complete linkage, flexible method). It also allows the clustering to be based on two or more stations simultaneously, which is important to bridge gaps in the sparsely recorded event sets anticipated in such a large reconciled event set. Current efforts are focusing on tools to help the researcher winnow the clusters defined using the dendrogram tool down to the minimum optimal identification set. This will become critical as the number of reference events in the reconciled event set continually grows. The dendrogram tool is part of the MatSeis analysis package, which is available on the Nuclear Explosion Monitoring Research and Engineering Program Web Site. As part of the research into how to winnow the reference events in these large reconciled event sets, additional database query approaches have been developed to provide windows into these datasets. These custom-built content analysis tools help identify dataset characteristics that can potentially aid in providing a basis for comparing similar reference events in these large reconciled event sets. Once these characteristics can be identified, algorithms can be developed to create and add to the reduced set of events used by the Event Search Engine. These content analysis tools have already been useful in providing information on station coverage of the referenced events and basic statistical information on events in the research datasets. The tools can also provide researchers with a quick way to find interesting and useful events within the research datasets. The tools could also be used as a means to review reference event datasets as part of a dataset delivery verification process. There has also been an effort to explore the usefulness of commercially available web-based software to help with this problem. The advantages of using off-the-shelf software applications, such as Oracle's WebDB, to manipulate, customize and manage research data are being investigated. These types of applications are being examined to provide access to large integrated data sets for regional seismic research in Asia. All of these software tools would provide the researcher with unprecedented power without having to learn the intricacies and complexities of relational database systems.
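
    A minimal sketch of the clustering idea behind the dendrogram tool (illustrative only, with synthetic waveforms; the actual tool is a MatSeis GUI): pairwise waveform similarity is turned into a distance, and SciPy's hierarchical clustering builds the dendrogram with a chosen linkage method.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        def max_norm_xcorr(a, b):
            """Maximum normalised cross-correlation between two equal-length traces."""
            a = (a - a.mean()) / (a.std() * len(a))
            b = (b - b.mean()) / b.std()
            return float(np.max(np.correlate(a, b, mode="full")))

        # Synthetic "event" waveforms: two families of similar traces plus noise.
        rng = np.random.default_rng(1)
        t = np.linspace(0, 1, 256)
        family1 = [np.sin(40 * t) + 0.1 * rng.normal(size=t.size) for _ in range(3)]
        family2 = [np.sin(25 * t + 0.5) + 0.1 * rng.normal(size=t.size) for _ in range(3)]
        traces = family1 + family2

        n = len(traces)
        dist = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                dist[i, j] = dist[j, i] = 1.0 - max_norm_xcorr(traces[i], traces[j])

        # Complete-linkage clustering of the correlation distances.
        Z = linkage(squareform(dist), method="complete")
        print(fcluster(Z, t=0.5, criterion="distance"))   # cluster label per trace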

  6. Development of spatial scaling technique of forest health sample point information

    NASA Astrophysics Data System (ADS)

    Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.

    2017-12-01

    Most forest health assessments are limited to monitoring sampling sites. Forest health monitoring in Britain was carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with a database constructed using the Oracle database program. The forest health assessment in Great Bay in the United States was conducted to identify the characteristics of the ecosystem populations of each area, based on the evaluation of forest health by tree species, diameter at breast height, water pipe and density in summer and fall of 200. In the case of Korea, the first evaluation report on forest health vitality placed 1,000 sample points in forests using a systematic 4 km × 4 km grid and assessed 29 items in four categories: tree health, vegetation, soil, and atmosphere. As mentioned above, existing research has relied on monitoring of these survey sample points, and it is difficult to collect information that supports customized policies for regional sites. In the case of special forests such as urban forests and major forests, policy and management appropriate to the forest characteristics are needed. Therefore, it is necessary to expand the set of survey sample points used for the diagnosis and evaluation of customized forest health. For this reason, we construct a method of spatial scaling through spatial interpolation according to the characteristics of each of the 29 indices in the diagnosis and evaluation section of the first forest health vitality report; PCA and correlation analyses are conducted to construct indicators with significance, weights are then selected for each index, and the evaluation of forest health is carried out through statistical grading.

  7. McLuhan: Pro and Con.

    ERIC Educational Resources Information Center

    Rosenthal, Raymond, Ed.

    Twenty-one critical essays on the ideas and works of Marshall McLuhan are offered in this review. In the course of the essays, McLuhan is characterized alternately as a genius, an extrapolator, and an oracle; a defender of the choice of choicelessness; a generalizer who rearranges, misinterprets, and misreads the facts to support his theses; an…

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hang Bae

    Reliability testing was performed for the software of the Shutdown System (SDS) computers for Wolsong Nuclear Power Plant Units 2, 3 and 4. Random test profiles were applied to the SDS computers and the outputs were compared with the predicted results generated by the oracle. Test software was written to execute the tests automatically. Random test profiles were generated using an analysis code. 11 refs., 1 fig.

  9. From the Oracles of the Temple of Janus: "Chicago Teachers Union v. Hudson."

    ERIC Educational Resources Information Center

    Vieira, Edwin, Jr.

    1986-01-01

    Examines "Chicago Teachers Union v. Hudson," a United States Supreme Court decision guaranteeing non-union government workers specific protections of procedural due process that certain educational and teacher unions had failed to recognize. Decries the "Hudson" decision for separating labor law from laws governing the rest of…

  10. Medicine and the Silent Oracle: An Exercise in Uncertainty

    ERIC Educational Resources Information Center

    Belling, Catherine

    2006-01-01

    This article describes a simple in-class exercise in reading and writing that, by asking participants to write their own endings for a short narrative taken from the "Journal of the American Medical Association," prompts them to reflect on the problem of uncertainty in medicine and to apply the literary-critical techniques of close…

  11. Practical Considerations for Conducting Delphi Studies: The Oracle Enters a New Age.

    ERIC Educational Resources Information Center

    Eggers, Renee M.; Jones, Charles M.

    1998-01-01

    In addition to giving an overview of Delphi methodology and describing the methodology used by the researchers in two Delphi studies, the authors provide information about electronic communication in Delphi studies. Also provided are suggestions that can be used in a Delphi study involving any form of communication. (SLD)

  12. Fundamental problems in provable security and cryptography.

    PubMed

    Dent, Alexander W

    2006-12-15

    This paper examines methods for formally proving the security of cryptographic schemes. We show that, despite many years of active research and dozens of significant results, there are fundamental problems which have yet to be solved. We also present a new approach to one of the more controversial aspects of provable security, the random oracle model.

  13. Oracle Bones and Mandarin Tones: Demystifying the Chinese Language.

    ERIC Educational Resources Information Center

    Michigan Univ., Ann Arbor. Project on East Asian Studies in Education.

    This publication will provide secondary level students with a basic understanding of the development and structure of Chinese (guo yu) language characters. The authors believe that demystifying the language helps break many cultural barriers. The written language is a good place to begin because its pictographic nature is appealing and inspires…

  14. Early Development of Demonstratives in Pre-Qin Chinese

    ERIC Educational Resources Information Center

    Deng, Lin

    2011-01-01

    This dissertation offers a new dynamic account of the evolution of the demonstrative system in pre-Qin Chinese based on a comprehensive linguistic analysis of the phonological, morphological, syntactic, semantic, and pragmatic aspects of demonstratives attested in two corpora of excavated texts, i.e. the oracle-bone inscriptions dated to the late…

  15. Modern Ratio: The Ultimate Arbiter in 17th Century Native Dreams.

    ERIC Educational Resources Information Center

    Pomedli, Michael

    Seventeenth century Jesuit analysis of Indian attitudes toward dreams was largely negative. While Indians looked on their dreams as ordinances and oracles, the Jesuits criticized reliance on such irrational messages. Jesuit critiques fell into three categories: the dream as a sign of diabolical possession, the dream as illusion purporting to be…

  16. Is Open Source the ERP Cure-All?

    ERIC Educational Resources Information Center

    Panettieri, Joseph C.

    2008-01-01

    Conventional and hosted applications thrive, but open source ERP (enterprise resource planning) is coming on strong. In many ways, the evolution of the ERP market is littered with ironies. When Oracle began buying up customer relationship management (CRM) and ERP companies, some universities worried that they would be left with fewer choices and…

  17. 77 FR 8869 - Granting of Request for Early Termination of the Waiting Period Under the Premerger Notification...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-15

    ...; Symantec Corporation. 20120376 G Project Barbour Holdings Corporation; Blue Coat Systems, Inc.; Project....; Marathon Fund Limited Partnership V; RTI International Metals, Inc. 20120422 G Sigma-Aldrich Corporation; Avista Capital Partners, L.P.; Sigma-Aldrich Corporation. 01/24/2012 20120151 G Oracle Corporation; Right...

  18. An Enterprise System and a Business Simulation Provide Many Opportunities for Interdisciplinary Teaching

    ERIC Educational Resources Information Center

    Kreie, Jennifer; Shannon, James; Mora-Monge, Carlo A.

    2011-01-01

    Enterprise systems provide companies with centralized data management, business process support and integrated data flow between functional areas. Thanks to academic alliances offered by companies such as SAP, Oracle, Microsoft and others, universities can also take advantage of the integrated features of enterprise system to give business…

  19. Aerosol-Radiation-Cloud Interactions in the South-East Atlantic: Model-Relevant Observations and the Beneficiary Modeling Efforts in the Realm of the EVS-2 Project ORACLES

    NASA Technical Reports Server (NTRS)

    Redemann, Jens

    2018-01-01

    Globally, aerosols remain a major contributor to uncertainties in assessments of anthropogenically-induced changes to the Earth climate system, despite concerted efforts using satellite and suborbital observations and increasingly sophisticated models. The quantification of direct and indirect aerosol radiative effects, as well as cloud adjustments thereto, even at regional scales, continues to elude our capabilities. Some of our limitations are due to insufficient sampling and accuracy of the relevant observables, under an appropriate range of conditions to provide useful constraints for modeling efforts at various climate scales. In this talk, I will describe (1) the efforts of our group at NASA Ames to develop new airborne instrumentation to address some of the data insufficiencies mentioned above; (2) the efforts by the EVS-2 ORACLES project to address aerosol-cloud-climate interactions in the SE Atlantic and (3) time permitting, recent results from a synergistic use of A-Train aerosol data to test climate model simulations of present-day direct radiative effects in some of the AEROCOM phase II global climate models.

  20. Password-only authenticated three-party key exchange with provable security in the standard model.

    PubMed

    Nam, Junghyun; Choo, Kim-Kwang Raymond; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon; Won, Dongho

    2014-01-01

    Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solis, John Hector

    In this paper, we present a modular framework for constructing a secure and efficient program obfuscation scheme. Our approach, inspired by the obfuscation with respect to oracle machines model of [4], retains an interactive online protocol with an oracle, but relaxes the original computational and storage restrictions. We argue this is reasonable given the computational resources of modern personal devices. Furthermore, we relax the information-theoretic security requirement to computational security in order to utilize established cryptographic primitives. With this additional flexibility we are free to explore different cryptographic building blocks. Our approach combines authenticated encryption with private information retrieval to construct a secure program obfuscation framework. We give a formal specification of our framework, based on desired functionality and security properties, and provide an example instantiation. In particular, we implement AES in Galois/Counter Mode for authenticated encryption and the Gentry-Ramzan [13] constant communication-rate private information retrieval scheme. We present our implementation results and show that non-trivial sized programs can be realized, but scalability is quickly limited by computational overhead. Finally, we include a discussion on security considerations when instantiating specific modules.
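
    As an illustration of the authenticated-encryption building block named above (the private information retrieval component is not sketched here), AES in Galois/Counter Mode via the Python `cryptography` package; the payload and associated data are placeholders:

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=128)
        aesgcm = AESGCM(key)

        nonce = os.urandom(12)                       # 96-bit nonce, unique per message
        program_block = b"obfuscated program block"  # placeholder payload
        associated_data = b"block-id:42"             # authenticated but not encrypted

        ciphertext = aesgcm.encrypt(nonce, program_block, associated_data)
        plaintext = aesgcm.decrypt(nonce, ciphertext, associated_data)
        assert plaintext == program_block
        # Tampering with the ciphertext or the associated data makes decrypt()
        # raise cryptography.exceptions.InvalidTag, which is the authentication check.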

  2. Financial management using a computerized system for evaluating health care invoices.

    PubMed

    Magnezi, Racheli; Ashkenazi, Isaac

    2005-02-01

    The Medical Corps of the Israel Defense Forces (IDF) provides health care services for hundreds of thousands of soldiers in IDF clinics and by purchasing services from civilian institutes. Monthly invoices from civilian institutes are so numerous that most are paid with insufficient scrutiny and valuable information regarding soldiers' health care is lost. Our objective was to develop a computerized system for reviewing invoices and gathering data. Based on Oracle software (Oracle, Redwood Shores, California), the system stores the terms of agreements with medical institutes, enters billing data, calculates invoice totals, manages information, and generates reports. It automatically checks for duplicate invoices and confirms payment. The system allows users to view data for decision-making, creates insurance claim files, identifies incorrect charges, assists in quality assurance, and maintains personal patient records. With the system in operation since 2001, savings significantly increased, to approximately 5% of the IDF health care budget. On the basis of information gathered by the system, changes in medical procedures were implemented that are expected to generate even greater savings.
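
    One of the checks described above, flagging duplicate invoices before payment, can be sketched as follows (hypothetical field names, not the IDF system's schema):

        from collections import defaultdict

        def find_duplicate_invoices(invoices):
            """Group invoices that share provider, patient, service date and amount."""
            groups = defaultdict(list)
            for inv in invoices:
                key = (inv["provider"], inv["patient_id"],
                       inv["service_date"], inv["amount"])
                groups[key].append(inv["invoice_id"])
            return {key: ids for key, ids in groups.items() if len(ids) > 1}

        invoices = [
            {"invoice_id": "A-100", "provider": "Clinic X", "patient_id": "12345",
             "service_date": "2004-11-03", "amount": 250.0},
            {"invoice_id": "A-101", "provider": "Clinic X", "patient_id": "12345",
             "service_date": "2004-11-03", "amount": 250.0},  # same service billed twice
            {"invoice_id": "B-200", "provider": "Clinic Y", "patient_id": "67890",
             "service_date": "2004-11-05", "amount": 90.0},
        ]
        print(find_duplicate_invoices(invoices))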

  3. KISS--a new approach to self-controlled e-learning of selected chapters in Medical Engineering and other fields at bachelor and master course level.

    PubMed

    Hutten, Helmut; Stiegmaier, Wolfgang; Rauchegger, Günter

    2005-09-01

    Modern lifestyles require new methods for individual lifelong learning, based on access at any time and from any place. This fundamental requirement is provided by the Internet, whose technology promises an increasing potential for e-learning or tele-learning in the future. Some special requirements are password-controlled access, applicability on most commercially available PCs and laptops equipped with standard software (Microsoft Internet Explorer 6.0), central evaluation of the students' performance, inclusion of an examination part, and provision of a picture gallery and a comprehensive glossary accessible in the learning mode. The KISS shell (Kontrolliertes Intelligentes Selbstgesteuertes Studium, KISS) has been developed based on the Oracle 10g application server in combination with a relational database (Oracle 8i) on the server side and a web-browser-based interface using JavaScript for user control of data input on the client side. The first tutorial application has been realized with a chapter about cardiac pacemakers. The weight of that chapter (or module) is about 2 ECTS credits (European Credit Transfer System), i.e. the equivalent of 30 working hours. The internal structure of the chapter is organized in sequential mode. It consists of five main sections, each of which is subdivided into five subsections of comparable length. Progression from one subsection to the next is possible only after successfully passing the respective examination. The whole learning programme with the pacemaker chapter has been evaluated by 10 students. The system is presented together with first experiences, including the evaluation results. Until now the program has not been used for training purposes.

  4. Marine Stratocumulus Properties from the FPDR - PDI as a Function of Aerosol during ORACLES

    NASA Astrophysics Data System (ADS)

    Small Griswold, J. D.; Heikkila, A.

    2016-12-01

    Aerosol-cloud interactions in the southeastern Atlantic (SEA) region were investigated during year 1 of the ObseRvations of Aerosols above CLouds and their intEractionS (ORACLES) field project in Aug-Sept 2016. This region is of interest due to seasonally persistent marine stratocumulus cloud decks that are an important component of the climate system due to their radiative and hydrologic impacts. The SEA deck is unique due to the interactions between these clouds and transported biomass burning aerosol during the July-October fire season. These biomass burning aerosols play multiple roles in modifying the cloud deck, through interactions with radiation as absorbing aerosol and through modifications to cloud microphysical properties as cloud condensation nuclei. This work uses in situ cloud data obtained with a Flight Probe Dual Range - Phase Doppler Interferometer (FPDR-PDI), standard aerosol instrumentation on board the NASA P-3, and reanalysis data to investigate Aerosol-Cloud Interactions (ACI). The FPDR-PDI provides unique cloud microphysical observations of individual cloud drop arrivals, allowing for the computation of a variety of microphysical cloud properties including individual drop size, cloud drop number concentration, cloud drop size distributions, liquid water content, and cloud thickness. The FPDR-PDI measurement technique also provides droplet spacing and drop velocity information, which is used to investigate turbulence and entrainment mixing processes. We use aerosol information such as average background aerosol amount (low, mid, high) and location relative to cloud (above or mixing) to sort FPDR-PDI cloud properties. To control for meteorological co-variances, we further sort the data within aerosol categories by lower tropospheric stability, vertical velocity, and surface wind direction. We then determine general marine stratocumulus cloud characteristics under each of the various aerosol categories to investigate ACI in the SEA.
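
    The sorting strategy described above, cloud properties binned by aerosol category and then by a meteorological covariate, amounts to a grouped summary; a minimal sketch with hypothetical column names and values (not the ORACLES archive format):

        import pandas as pd

        # Hypothetical per-cloud-sample records derived from FPDR-PDI and aerosol data.
        df = pd.DataFrame({
            "aerosol_level": ["low", "high", "high", "mid", "low", "high"],
            "aerosol_loc":   ["above", "mixing", "mixing", "above", "above", "above"],
            "lts_bin":       ["stable", "stable", "unstable", "stable", "unstable", "stable"],
            "nd_cm3":        [55.0, 210.0, 180.0, 120.0, 60.0, 190.0],  # drop number conc.
            "lwc_g_m3":      [0.22, 0.35, 0.30, 0.28, 0.20, 0.33],      # liquid water content
        })

        # Mean cloud properties per aerosol category, controlling for the stability bin.
        summary = (df.groupby(["aerosol_level", "aerosol_loc", "lts_bin"])
                     [["nd_cm3", "lwc_g_m3"]]
                     .mean())
        print(summary)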

  5. Monitoring and controlling ATLAS data management: The Rucio web user interface

    NASA Astrophysics Data System (ADS)

    Lassnig, M.; Beermann, T.; Vigne, R.; Barisits, M.; Garonne, V.; Serfon, C.

    2015-12-01

    The monitoring and controlling interfaces of the previous data management system, DQ2, followed the evolutionary requirements and needs of the ATLAS collaboration. The new data management system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2 and the increased volume of managed information. This interface encompasses both a monitoring and a controlling component, and allows easy integration of user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency. This includes the use of technologies like ActiveMQ or Nagios. Second, analysis of the data into information is done massively parallel due to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, sharing of the information does not distinguish between human or programmatic access, making it easy to access selective parts of the information both in constrained front ends like web browsers and in remote services. This contribution details the reasons for these principles and the design choices taken. Additionally, the implementation, the interactions with external systems, and an evaluation of the system in production, from both a technological and a user perspective, conclude this contribution.

  6. Lightweight Privacy-Preserving Authentication Protocols Secure against Active Attack in an Asymmetric Way

    NASA Astrophysics Data System (ADS)

    Cui, Yank; Kobara, Kazukuni; Matsuura, Kanta; Imai, Hideki

    As pervasive computing technologies develop rapidly, privacy protection becomes a crucial issue that must be handled very carefully. Typically, it is difficult to efficiently identify and manage a large number of low-cost pervasive devices, such as Radio Frequency Identification (RFID) devices, without leaking any private information. In particular, an attacker may not only eavesdrop on the communication passively, but also mount an active attack by asking queries adaptively, which is obviously more dangerous. To address this problem, we propose two lightweight authentication protocols which are privacy-preserving against active attack, in an asymmetric way. This asymmetric style, with privacy-oriented simplification, reduces the load on the low-cost devices and drastically decreases the computation cost of management on the server. This is because, unlike the usual management of identities, our approach requires neither synchronization nor exhaustive search in the database, which is a great convenience in large-scale systems. The protocols are based on a fast asymmetric encryption with specialized simplification and only one cryptographic hash function, which consequently assigns only light work to the pervasive devices. Moreover, our results do not require the strong assumption of the random oracle.
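
    The paper's protocol relies on a specialized fast asymmetric scheme that is not reproduced here; the sketch below only illustrates the general idea that encrypting the tag identity and a nonce under the server's public key lets the server recover the identity with a single decryption, with no synchronized state and no exhaustive database search. It uses RSA-OAEP from the Python cryptography package purely as a stand-in, and all identifiers are hypothetical.

      # Illustrative stand-in only: RSA-OAEP replaces the paper's specialized scheme.
      import os
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import padding, rsa

      OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None)

      server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      server_pub = server_key.public_key()

      def tag_response(tag_id: bytes, challenge: bytes) -> bytes:
          """Tag side: encrypt (id || nonce || challenge) under the server's public key."""
          nonce = os.urandom(16)
          return server_pub.encrypt(tag_id + nonce + challenge, OAEP)

      def server_identify(response: bytes, challenge: bytes) -> bytes:
          """Server side: a single decryption recovers the claimed identity."""
          plaintext = server_key.decrypt(response, OAEP)
          tag_id, echoed = plaintext[:8], plaintext[24:]
          if echoed != challenge:
              raise ValueError("stale or replayed response")
          return tag_id

      challenge = os.urandom(16)
      print(server_identify(tag_response(b"TAG-0042", challenge), challenge))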

  7. Rule-based topology system for spatial databases to validate complex geographic datasets

    NASA Astrophysics Data System (ADS)

    Martinez-Llario, J.; Coll, E.; Núñez-Andrés, M.; Femenia-Ribera, C.

    2017-06-01

    A rule-based topology software system providing a highly flexible and fast procedure to enforce integrity in spatial relationships among datasets is presented. This improved topology rule system is built over the spatial extension Jaspa. Both projects are open-source, freely available software developed by the corresponding author of this paper. Currently, no spatial DBMS implements a rule-based topology engine (in the sense that the topology rules are designed and performed in the spatial backend). If the topology rules are applied in the frontend (as in many desktop GIS programs), ArcGIS is the most advanced solution. The system presented in this paper has several major advantages over the ArcGIS approach: it can be extended with new topology rules, it has a much wider set of rules, and it can combine feature attributes with topology rules as filters. In addition, the topology rule system can work with various DBMSs, including PostgreSQL, H2 or Oracle, and the logic is performed in the spatial backend. The proposed topology system allows users to check the complex spatial relationships among features (from one or several spatial layers) required by complex cartographic datasets, such as the data specifications proposed by INSPIRE in Europe and the Land Administration Domain Model (LADM) for cadastral data.
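
    As a toy illustration of the kind of rule such an engine evaluates, the snippet below checks a "must not overlap" rule over a handful of in-memory polygons with Shapely; the real system runs its rules inside the spatial backend (Jaspa over PostgreSQL, H2 or Oracle), and the parcel geometries and rule name here are invented.

      # Toy "must not overlap" topology rule over in-memory geometries.
      from itertools import combinations
      from shapely.geometry import Polygon

      def check_must_not_overlap(features):
          """Return pairs of feature ids whose geometries share interior area."""
          violations = []
          for (id_a, geom_a), (id_b, geom_b) in combinations(features.items(), 2):
              # Touching along a boundary is allowed; overlapping interiors are not.
              if geom_a.intersects(geom_b) and not geom_a.touches(geom_b):
                  violations.append((id_a, id_b))
          return violations

      parcels = {
          "parcel_1": Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),
          "parcel_2": Polygon([(1, 1), (3, 1), (3, 3), (1, 3)]),  # overlaps parcel_1
          "parcel_3": Polygon([(2, 0), (4, 0), (4, 1), (2, 1)]),  # only touches its neighbours
      }
      print(check_must_not_overlap(parcels))  # [('parcel_1', 'parcel_2')]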

  8. Intellectual Production Supervision Perform based on RFID Smart Electricity Meter

    NASA Astrophysics Data System (ADS)

    Chen, Xiangqun; Huang, Rui; Shen, Liman; chen, Hao; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Xu, Renheng

    2018-03-01

    This work develops an RFID smart electricity meter production supervision project management system. The system addresses the information management requirements for project schedule, quality and cost in the supervision of RFID smart meter production, provides more comprehensive, timely and accurate quantitative information for the management decisions of supervision engineers and project managers, and supplies technical documentation for the product manufacturing stage. The development of the production supervision project management system for the RFID smart meter project is discussed from the perspectives of scheme analysis, design, implementation and testing. Focusing on the main business applications and management mode at this stage, the system covers the management of progress information, quality information and cost information for RFID smart meters. The paper introduces the design scheme of the system: the overall client/server architecture, a general-purpose graphical client interface for supervision project management and interactive display of transaction information, and the server-side main program. The system is programmed in C# on the .NET runtime; both the client and server platforms use the Windows operating system, and the database server software uses Oracle. The overall platform supports mainstream information standards and has good scalability.

  9. EnergyIQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MILLS, EVAN; MATTHE, PAUL; STOUFER, MARTIN

    2016-10-06

    EnergyIQ, the first "action-oriented" benchmarking tool for non-residential buildings, provides a standardized opportunity assessment based on benchmarking results, along with decision-support information to help refine action plans. EnergyIQ offers a wide array of benchmark metrics, with visual as well as tabular display. These include energy, costs, greenhouse-gas emissions, and a large array of characteristics (e.g. building components or operational strategies). The tool supports cross-sectional benchmarking for comparing the user's building to its peers at one point in time, as well as longitudinal benchmarking for tracking the performance of an individual building or enterprise portfolio over time. Based on user inputs, the tool generates a list of opportunities and recommended actions. Users can then explore the "Decision Support" module for helpful information on how to refine action plans, create design-intent documentation, and implement improvements. This includes information on best practices, links to other energy analysis tools, and more. A variety of databases are available within EnergyIQ from which users can specify peer groups for comparison. Using the tool, these data can be visually browsed and used as a backdrop against which to view a variety of energy benchmarking metrics for the user's own building. Users can save their project information and return at a later date to continue their exploration. The initial database is the California Commercial End-Use Survey (CEUS), which provides details on energy use and characteristics for about 2800 buildings (and 62 building types). CEUS is likely the most thorough survey of its kind ever conducted. The tool is built as a web service. The EnergyIQ web application is written in JSP with pervasive use of JavaScript and CSS2. EnergyIQ also supports a SOAP-based web service to allow queries and data to flow to non-browser implementations. Data are stored in an Oracle 10g database. References: Mills, Mathew, Brook and Piette. 2008. "Action-Oriented Benchmarking: Concepts and Tools." Energy Engineering, Vol. 105, No. 4, pp 21-40. LBNL-358E; Mathew, Mills, Bourassa, Brook. 2008. "Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California." Energy Engineering, Vol. 105, No. 5, pp 6-18. LBNL-502E.

  10. Oracle. Three Histories of the 1980's: Implications for Education.

    ERIC Educational Resources Information Center

    Graeber, James J.; And Others

    Three hypothetical scenarios for the 1980's are presented as tools for exploring the future of the Des Moines, Iowa, Independent Community School District. Each is written in the past tense as a "history" and is followed by a set of implications for education. The first scenario conveys an optimistic view of current trends, suggesting…

  11. Untrained users derail your caboose? Learn to get on track

    NASA Technical Reports Server (NTRS)

    Bentley-Smith, M.

    2000-01-01

    You've implemented Oracle, and lo and behold, your users can't bear it. They're making costly errors, finding creative work-arounds, and being mavericks with your system, because user training wasn't on track. This paper discusses how to build a proactive, comprehensive training program to get your users back on track.

  12. Dang An: A Brief History of the Chinese Imperial Archives and Its Administration

    ERIC Educational Resources Information Center

    Zhang, Wenxian

    2004-01-01

    Archives are preserved records of human activities. An account of Chinese archival development is as long as the written records of the Chinese history. Oracle bones, bronze ware, and wood and bamboo strips are the three most important recording forms of the early Chinese civilization, as they represented three major stages of archival development…

  13. Project Scheduling Tool for Maintaining Capability Interdependencies and Defence Program Investment: A User’s Guide

    DTIC Science & Technology

    2014-09-01

    The free, open-source Integrated Development Environment (IDE) NetBeans [11] was used in the creation of the Graphical User Interface (GUI) for the tool...Oracle Corporation (2013) NetBeans IDE 7.4, http://www.netbeans.org. 12. O’Shea, K., Pong, P. & Bulluss, G. (2012) Fit-for-Purpose Visualisation of

  14. Get a winning Oracle upgrade session using the quarterback approach

    NASA Technical Reports Server (NTRS)

    Anderson, G.

    2002-01-01

    Upgrades, upgrades... too much customer down time. Find out how we shrank our production upgrade schedule by 40%, from our estimate of 10 days 12 hours to 6 days 2 hours, using the quarterback approach. Even if your upgrade is not that complex, come anyway: this approach is scalable to any size project and will be extremely valuable.

  15. Quantum speedup in solving the maximal-clique problem

    NASA Astrophysics Data System (ADS)

    Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang

    2018-03-01

    The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to, respectively, O(√(2ⁿ)) and O(n²). With respect to oracle-related quantum algorithms for the NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.

  16. A technology ecosystem perspective on hospital management information systems: lessons from the health literature.

    PubMed

    Bain, Christopher A; Standing, Craig

    2009-01-01

    Hospital managers have a large range of information needs, including quality metrics, financial reports, access information, and educational, resourcing and decision support needs. Currently these needs involve interactions by managers with numerous disparate systems, both electronic, such as SAP, Oracle Financials, patient administration systems (PAS) like HOMER, and relevant websites, and paper-based systems. Hospital management information systems (HMIS) can be thought of as sitting within a Technology Ecosystem (TE). In addition, HMIS could benefit from a broader and deeper TE model, and the HMIS environment may in fact represent its own TE (the HMTE). This research examines lessons from the health literature in relation to some of these issues and proposes an extension to the base model of a TE.

  17. The Design of Data Disaster Recovery of National Fundamental Geographic Information System

    NASA Astrophysics Data System (ADS)

    Zhai, Y.; Chen, J.; Liu, L.; Liu, J.

    2014-04-01

    With the development of information technology, the data security of information systems faces more and more challenges. Surveying and mapping geographic information is a fundamental, strategic resource applied in all areas of national economic, defence and social development; it is especially vital to national and social interests when such classified geographic information directly concerns Chinese sovereignty. Urgent problems for surveying and mapping include mass data storage and backup, establishing and improving a disaster backup system (particularly after sudden natural calamities or accidents), and ensuring that all parts of the information system can be rapidly restored to correct operation. To overcome various disaster risks, protect data security and reduce the impact of disasters, the effective approach is to analyse the features of storage, management and security requirements and to design a data disaster recovery system suited to surveying and mapping. This article analyses the features of fundamental geographic information data and the requirements of storage management, and presents a three-site DBMS disaster recovery plan based on mature network, storage and backup, data replication and remote application switch-over technologies. Within the LAN, data are replicated synchronously between the database management servers and the local backup storage management systems; simultaneously, data are replicated asynchronously between the local backup storage management systems and the remote database management servers. The core of the system is resolving a local-site disaster at the remote site, ensuring data security and business continuity for the local site. This article focuses on the following points: background, the necessity of a disaster recovery system, analysis of the data holdings, and the data disaster recovery plan. A feature of this design is the use of hardware-based hot backup of data and remote online disaster recovery support for the Oracle database system. The contribution of this paper is to summarize and analyse the common disaster recovery requirements of surveying and mapping business systems and, based on the actual situation of the industry, to design basic GIS disaster recovery solutions; conclusions about the key technologies of RTO and RPO are also given.

  18. Election Verifiability: Cryptographic Definitions and an Analysis of Helios and JCJ

    DTIC Science & Technology

    2015-08-06

    SHA-256 [98], we assume that H is a random oracle to prove Theorem 2. Moreover, we assume the sigma protocols used by Helios 4.0 satisfy the...Aspects in Security and Trust, volume 5491 of LNCS, pages 242–256. Springer, 2008. [68] Martin Hirt. Receipt-Free K-out-of-L Voting Based on ElGamal

  19. Metabonomics and medicine: the Biochemical Oracle.

    PubMed

    Mitchell, Steve; Holmes, Elaine; Carmichael, Paul

    2002-10-01

    Occasionally, a new idea emerges that has the potential to revolutionize an entire field of scientific endeavour. It is now within our grasp to be able to detect subtle perturbations within the phenomenally complex biochemical matrix of living organisms. The discipline of metabonomics promises an all-encompassing approach to understanding total, yet fundamental, changes occurring in disease processes, drug toxicity and cell function.

  20. Electronic News Delivery Needs Only FCC Encouragement for Invasion of U.S.A.

    ERIC Educational Resources Information Center

    Edwards, Kenneth

    The electronic newspaper, using a television screen and a small keying mechanism to call for specific pages of interest, currently exists in England and may soon be common in the United States. Three systems of electronic information delivery (teletext) now operate in England: CEEFAX, ORACLE, and VIEWDATA. The first two make use of two television…

  1. Lessons Learned from "The Oracle of Omaha" Warren Buffett

    ERIC Educational Resources Information Center

    Finkle, Todd A.

    2010-01-01

    This article documents a trip that was made by students from the University of Akron's College of Business to visit Warren Buffett, chairman and CEO of Berkshire Hathaway and the second richest man in the world at his Global Headquarters in Omaha, Nebraska. Every year, Buffett invites a select number of schools to Omaha to visit with him and tour…

  2. Spring 2006. Industry Study. Information Technology Industry

    DTIC Science & Technology

    2006-01-01

    Information Technology 2006 ABSTRACT ...integration of processors, coprocessors, memory, storage, etc. into a user-programmable final product. C. Software (Apple, Oracle): These firms...able to support the U.S. national security interests. C. Manufacturing: The personal computer manufacturing industry has also changed considerably

  3. Oracle estimation of parametric models under boundary constraints.

    PubMed

    Wong, Kin Yau; Goldberg, Yair; Fine, Jason P

    2016-12-01

    In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the true parameters lie in the interior, the inference is equivalent to that obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.
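
    The following toy sketch conveys the flavour of boundary shrinkage with an adaptive penalty; it is not the estimator of the paper. A nonnegative mean is estimated with an L1 penalty whose weight is inversely proportional to the unpenalized estimate, so estimates near the boundary are pulled onto it while interior estimates are barely perturbed; the function and variable names are invented.

      # Toy adaptive-lasso-style shrinkage of a boundary-constrained mean (mu >= 0).
      import numpy as np
      from scipy.optimize import minimize_scalar

      def adaptive_boundary_estimate(x, lam=1.0):
          mu_init = max(float(np.mean(x)), 1e-8)    # unpenalized (truncated) estimate
          weight = lam / mu_init                    # adaptive penalty weight

          def objective(mu):
              return float(np.sum((x - mu) ** 2)) + weight * abs(mu)

          res = minimize_scalar(objective, bounds=(0.0, float(max(x.max(), 1.0))),
                                method="bounded")
          return res.x

      rng = np.random.default_rng(0)
      print(adaptive_boundary_estimate(rng.normal(0.0, 1.0, 200)))  # true mean on boundary -> ~0
      print(adaptive_boundary_estimate(rng.normal(2.0, 1.0, 200)))  # interior mean -> ~2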

  4. Broad-spectrum antibiotics for spontaneous preterm labour: the ORACLE II randomised trial. ORACLE Collaborative Group.

    PubMed

    Kenyon, S L; Taylor, D J; Tarnow-Mordi, W

    2001-03-31

    Preterm birth after spontaneous preterm labour is associated with death, neonatal disease, and long-term disability. Previous small trials of antibiotics for spontaneous preterm labour have reported inconclusive results. We did a randomised multicentre trial to resolve this issue. 6295 women in spontaneous preterm labour with intact membranes and without evidence of clinical infection were randomly assigned 250 mg erythromycin (n=1611), 325 mg co-amoxiclav (250 mg amoxicillin and 125 mg clavulanic acid; n=1550), both (n=1565), or placebo (n=1569) four times daily for 10 days or until delivery, whichever occurred earlier. The primary outcome measure was a composite of neonatal death, chronic lung disease, or major cerebral abnormality on ultrasonography before discharge from hospital. Analysis was by intention to treat. None of the trial antibiotics was associated with a lower rate of the composite primary outcome than placebo (erythromycin 90 [5.6%], co-amoxiclav 76 [5.0%], both antibiotics 91 [5.9%], vs placebo 78 [5.0%]). However, antibiotic prescription was associated with a lower occurrence of maternal infection. This trial provides evidence that antibiotics should not be routinely prescribed for women in spontaneous preterm labour without evidence of clinical infection.

  5. Exponentially more precise quantum simulation of fermions in the configuration interaction representation

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; Berry, Dominic W.; Sanders, Yuval R.; Kivlichan, Ian D.; Scherer, Artur; Wei, Annie Y.; Love, Peter J.; Aspuru-Guzik, Alán

    2018-01-01

    We present a quantum algorithm for the simulation of molecular systems that is asymptotically more efficient than all previous algorithms in the literature in terms of the main problem parameters. As in Babbush et al (2016 New Journal of Physics 18, 033032), we employ a recently developed technique for simulating Hamiltonian evolution using a truncated Taylor series to obtain logarithmic scaling with the inverse of the desired precision. The algorithm of this paper involves simulation under an oracle for the sparse, first-quantized representation of the molecular Hamiltonian known as the configuration interaction (CI) matrix. We construct and query the CI matrix oracle to allow for on-the-fly computation of molecular integrals in a way that is exponentially more efficient than classical numerical methods. Whereas second-quantized representations of the wavefunction require Õ(N) qubits, where N is the number of single-particle spin-orbitals, the CI matrix representation requires Õ(η) qubits, where η ≪ N is the number of electrons in the molecule of interest. We show that the gate count of our algorithm scales at most as Õ(η²N³t).

  6. The CLouds-Aerosol-Radiation Interaction and Forcing: Year 2017 (CLARIFY-2017) programme: deployment, synergies with ORACLES/LASIC/AEROCLO-SA and initial results

    NASA Astrophysics Data System (ADS)

    Haywood, J. M.; Abel, S.; Langridge, J.; Coe, H.; Blyth, A. M.; Bellouin, N.; Stier, P.; Field, P.; Carslaw, K. S.; Brooks, M.

    2017-12-01

    The CLoud-Aerosol-Radiation Interaction and Forcing: Year 2017 (CLARIFY-2017) programme is the UK's contribution to the intensive investigation of clouds, aerosols, and impacts on the radiation budget and hence climate over the South East Atlantic which centred on the deployment of the UK's FAAM BAe146 atmospheric research aircraft to Ascension Island in August/September 2017. There are strong synergies with the NASA ORACLES deployments of US NASA P3 and ER2 aircraft to Walvis Bay in August/September 2016 and Sao Tome in August 2017, the LASIC project which deployed the ARM Mobile Facility to Ascension Island in 2016/2017, and the AEROCLO-SA project which deployed the French research aircraft to Walvis Bay in September 2017. This talk will describe the forecasting tools that were developed and used in order to place the aircraft in the right place at the right time and will give an overview of the deployment. Initial results from a range of model, remote sensing and in-situ sampling instruments will be presented and compared against the findings of the other synergistic campaigns.

  7. Automated Test Case Generation for an Autopilot Requirement Prototype

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael

    2011-01-01

    Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and in the rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution that allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component of the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
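
    A minimal sketch of the generated-input / test-oracle pattern described above follows; the autopilot mode logic and the test cases are invented for illustration and are far simpler than what ADEPT's symbolic execution would produce.

      # Hypothetical mode logic under test.
      def autopilot_mode(altitude_hold: bool, pilot_input: str) -> str:
          if pilot_input == "disengage":
              return "MANUAL"
          return "HOLD" if altitude_hold else "MANUAL"

      # Each case pairs a generated user input with the oracle's expected behaviour.
      generated_cases = [
          ({"altitude_hold": True, "pilot_input": "none"}, "HOLD"),
          ({"altitude_hold": True, "pilot_input": "disengage"}, "MANUAL"),
          ({"altitude_hold": False, "pilot_input": "none"}, "MANUAL"),
      ]

      for inputs, expected in generated_cases:
          assert autopilot_mode(**inputs) == expected, (inputs, expected)
      print("all generated cases pass")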

  8. Password-Only Authenticated Three-Party Key Exchange with Provable Security in the Standard Model

    PubMed Central

    Nam, Junghyun; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon

    2014-01-01

    Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks. PMID:24977229

  9. Generalized query-based active learning to identify differentially methylated regions in DNA.

    PubMed

    Haque, Md Muksitul; Holder, Lawrence B; Skinner, Michael K; Cook, Diane J

    2013-01-01

    Active learning is a supervised learning technique that reduces the number of examples required for building a successful classifier, because it can choose the data it learns from. This technique holds promise for many biological domains in which classified examples are expensive and time-consuming to obtain. Most traditional active learning methods ask very specific queries to the oracle (e.g., a human expert) to label an unlabeled example. The example may consist of numerous features, many of which are irrelevant. Removing such features creates a shorter query with only relevant features, which is easier for the oracle to answer. We propose a generalized query-based active learning (GQAL) approach that constructs generalized queries based on multiple instances. By constructing appropriately generalized queries, we can achieve higher accuracy compared to traditional active learning methods. We apply our active learning method to find differentially methylated DNA regions (DMRs). DMRs are DNA locations in the genome that are known to be involved in tissue differentiation, epigenetic regulation, and disease. We also apply our method to 13 other data sets and show that our method is better than another popular active learning technique.
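
    For contrast with the generalized queries described above, the sketch below shows the standard pool-based active-learning baseline (uncertainty sampling over single instances) using scikit-learn on synthetic data; it is a generic illustration, not the GQAL method.

      # Baseline pool-based active learning with uncertainty sampling.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression

      X, y = make_classification(n_samples=500, n_features=20, random_state=0)
      labeled = list(range(10))                       # small initial labeled set
      pool = [i for i in range(len(y)) if i not in labeled]

      model = LogisticRegression(max_iter=1000)
      for _ in range(20):                             # 20 queries to the oracle
          model.fit(X[labeled], y[labeled])
          proba = model.predict_proba(X[pool])
          uncertainty = 1.0 - proba.max(axis=1)       # least-confident sampling
          pick = pool[int(np.argmax(uncertainty))]
          labeled.append(pick)                        # the oracle supplies y[pick]
          pool.remove(pick)

      print("final training-set size:", len(labeled))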

  10. Highlighting the Mechanism of the Quantum Speedup by Time-Symmetric and Relational Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Castagnoli, Giuseppe

    2016-03-01

    Bob hides a ball in one of four drawers. Alice is to locate it. Classically she has to open up to three drawers, quantally just one. The fundamental reason for this quantum speedup is not known. The usual representation of the quantum algorithm is limited to the process of solving the problem. We extend it to the process of setting the problem. The number of the drawer with the ball becomes a unitary transformation of the random outcome of the preparation measurement. This extended, time-symmetric, representation brings in relational quantum mechanics. It is with respect to Bob and any external observer and cannot be with respect to Alice. It would tell her the number of the drawer with the ball before she opens any drawer. To Alice, the projection of the quantum state due to the preparation measurement should be retarded at the end of her search; in the input state of the search, the drawer number is determined to Bob and undetermined to Alice. We show that, mathematically, one can ascribe any part of the selection of the random outcome of the preparation measurement to the final Alice's measurement. Ascribing half of it explains the speedup of the present algorithm. This leaves the input state to Bob unaltered and projects that to Alice on a state of lower entropy where she knows half of the number of the drawer with the ball in advance. The quantum algorithm turns out to be a sum over histories in each of which Alice knows in advance that the ball is in a pair of drawers and locates it by opening one of the two. In the sample of quantum algorithms examined, the part of the random outcome of the initial measurement selected by the final measurement is one half or slightly above it. Conversely, given an oracle problem, the assumption it is one half always corresponds to an existing quantum algorithm and gives the order of magnitude of the number of oracle queries required by the optimal one.

  11. Aerosol-radiation-cloud interactions in the South-East Atlantic: first results from the ORACLES-2016 deployment and plans for future activities

    NASA Astrophysics Data System (ADS)

    Redemann, J.; Wood, R.; Zuidema, P.; Haywood, J. M.; Piketh, S.; Formenti, P.; Abel, S.

    2016-12-01

    Southern Africa produces almost a third of the Earth's biomass burning (BB) aerosol particles. Particles lofted into the mid-troposphere are transported westward over the South-East (SE) Atlantic, home to one of the three permanent subtropical stratocumulus (Sc) cloud decks in the world. The SE Atlantic stratocumulus deck interacts with the dense layers of BB aerosols that initially overlay the cloud deck, but later subside and may mix into the clouds. These interactions include adjustments to aerosol-induced solar heating and microphysical effects, and their global representation in climate models remains one of the largest uncertainties in estimates of future climate. Hence, new observations over the SE Atlantic have significant implications for regional and global climate change predictions. Our understanding of aerosol-cloud interactions in the SE Atlantic is severely limited. Most notably, we are missing knowledge on the absorptive and cloud nucleating properties of aerosols, including their vertical distribution relative to clouds, on the locations and degree of aerosol mixing into clouds, on the processes that govern cloud property adjustments, and on the importance of aerosol effects on clouds relative to co-varying synoptic scale meteorology. We describe first results from various synergistic, international research activities aimed at studying aerosol-cloud interactions in the region: NASA's airborne ORACLES (ObseRvations of Aerosols Above Clouds and Their IntEractionS) deployment in August/September of 2016, the DoE's LASIC (Layered Atlantic Smoke Interactions with Clouds) deployment of the ARM Mobile Facility to Ascension Island (June 2016 - October 2017), the ground-based components of CNRS' AEROCLO-sA (Aerosols Clouds and Fog over the west coast of southern Africa), and ongoing regional-scale integrative, process-oriented science efforts as part of SEALS-sA (Sea Earth Atmosphere Linkages Study in southern Africa). We expect to describe experimental setups as well as showcase initial aerosol and cloud property distributions. Furthermore, we discuss the implementation of future activities in these programs in coordination with the UK Met Office's CLARIFY (CLoud-Aerosol-Radiation Interactions and Forcing) experiment in 2017.

  12. Algorithms in Discrepancy Theory and Lattices

    NASA Astrophysics Data System (ADS)

    Ramadas, Harishchandra

    This thesis deals with algorithmic problems in discrepancy theory and lattices, and is based on two projects I worked on while at the University of Washington in Seattle. A brief overview is provided in Chapter 1 (Introduction). Chapter 2 covers joint work with Avi Levy and Thomas Rothvoss in the field of discrepancy minimization. A well-known theorem of Spencer shows that any set system with n sets over n elements admits a coloring of discrepancy O(√n). While the original proof was non-constructive, recent progress brought polynomial time algorithms by Bansal, Lovett and Meka, and Rothvoss. All those algorithms are randomized, even though Bansal's algorithm admitted a complicated derandomization. We propose an elegant deterministic polynomial time algorithm that is inspired by Lovett-Meka as well as the Multiplicative Weight Update method. The algorithm iteratively updates a fractional coloring while controlling the exponential weights that are assigned to the set constraints. A conjecture by Meka suggests that Spencer's bound can be generalized to symmetric matrices. We prove that n x n matrices that are block diagonal with block size q admit a coloring of discrepancy O(√n · √(log q)). Bansal, Dadush and Garg recently gave a randomized algorithm to find a vector x with entries in {-1,1} with ‖Ax‖∞ ≤ O(√(log n)) in polynomial time, where A is any matrix whose columns have length at most 1. We show that our method can be used to deterministically obtain such a vector. In Chapter 3, we discuss a result in the broad area of lattices and integer optimization, in joint work with Rebecca Hoberg, Thomas Rothvoss and Xin Yang. The number balancing (NBP) problem is the following: given real numbers a₁,...,aₙ in [0,1], find two disjoint subsets I₁, I₂ of [n] so that the difference |∑_{i∈I₁} aᵢ − ∑_{i∈I₂} aᵢ| of their sums is minimized. An application of the pigeonhole principle shows that there is always a solution where the difference is at most O(√n/2ⁿ). Finding the minimum, however, is NP-hard. In polynomial time, the differencing algorithm by Karmarkar and Karp from 1982 can produce a solution with difference at most n^(−Θ(log n)), but no further improvement has been made since then. We show a relationship between NBP and Minkowski's Theorem. First we show that an approximate oracle for Minkowski's Theorem gives an approximate NBP oracle. Perhaps more surprisingly, we show that an approximate NBP oracle gives an approximate Minkowski oracle. In particular, we prove that any polynomial time algorithm that guarantees a solution of difference at most 2^(√n)/2ⁿ would give a polynomial approximation for Minkowski as well as a polynomial factor approximation algorithm for the Shortest Vector Problem.
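
    The Karmarkar-Karp differencing algorithm mentioned above repeatedly commits the two largest remaining numbers to opposite sides of the partition and replaces them by their difference; the value left at the end is an achievable partition difference. A short sketch (returning only that value, not the two subsets) follows.

      # Karmarkar-Karp differencing heuristic for number balancing.
      import heapq
      import random

      def karmarkar_karp(numbers):
          heap = [-x for x in numbers]          # max-heap via negation
          heapq.heapify(heap)
          while len(heap) > 1:
              a = -heapq.heappop(heap)          # largest remaining number
              b = -heapq.heappop(heap)          # second largest
              heapq.heappush(heap, -(a - b))    # place them on opposite sides
          return -heap[0] if heap else 0.0

      random.seed(1)
      values = [random.random() for _ in range(32)]
      print(karmarkar_karp(values))             # typically far smaller than max(values)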

  13. Model-Observation Comparisons of Biomass Burning Smoke and Clouds Over the Southeast Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Doherty, S. J.; Saide, P.; Zuidema, P.; Shinozuka, Y.; daSilva, A.; McFarquhar, G. M.; Pfister, L.; Carmichael, G. R.; Ferrada, G. A.; Howell, S. G.; Freitag, S.; Dobracki, A. N.; Smirnow, N.; Longo, K.; LeBlanc, S. E.; Adebiyi, A. A.; Podolske, J. R.; Small Griswold, J. D.; Hekkila, A.; Ueyama, R.; Wood, R.; Redemann, J.

    2017-12-01

    From August through October, a plume of biomass burning smoke from central Africa overlays a relatively persistent stratocumulus-to-cumulus cloud deck in the SE Atlantic. These smoke aerosols are believed to exert significant climate forcing via aerosol-radiation and aerosol-cloud interactions, though both the magnitude and sign of this forcing are highly uncertain. This is due to large model spread in simulated aerosol and cloud properties and, until now, a sparsity of observations to constrain the models. Here we present a comparison of both aerosol and cloud properties over the region using data from the first deployment of the NASA ORACLES (ObseRvations of Aerosols above CLouds and their intEractionS) field experiment (August-September 2016). We examine both horizontal and geographic variations in a range of aerosol and cloud properties and their positions relative to each other, since the degree to which aerosols and clouds coincide both horizontally and vertically is perhaps the greatest source of uncertainty in their climate forcing.

  14. Software Assurance Curriculum Project Volume 1: Master of Software Assurance Reference Curriculum

    DTIC Science & Technology

    2010-08-01

    activity by providing a check on the relevance and currency of the process used to develop the MSwA2010 curriculum content. Figure 2 is an expansion of...random oracle model, symmetric crypto primitives, modes of operations, asymmetric crypto primitives (Chapter 5) [16] Detailed design...encryption, public key encryption, digital signatures, message authentication codes, crypto protocols, cryptanalysis, and further detailed crypto

  15. Risk Management and At-Risk Students: Pernicious Fantasies of Educator Omnipotence. The Cutting Edge

    ERIC Educational Resources Information Center

    Clabaugh, Gary K.

    2004-01-01

    For tens of thousands of years human beings relied on oracles, prophets, medicine men, and resignation to try to manage unknown risks. Then, in the transformative 200-year period from the mid-17th through the mid-19th centuries, a series of brilliant insights created groundbreaking tools for rational risk taking. Discoveries such as the theory of…

  16. Investigating the tool marks on oracle bones inscriptions from the Yinxu site (ca., 1319-1046 BC), Henan province, China.

    PubMed

    Zhao, Xiaolong; Tang, Jigen; Gu, Zhou; Shi, Jilong; Yang, Yimin; Wang, Changsui

    2016-09-01

    Oracle Bone Inscriptions from the Shang dynasty (1600-1046 BC) are the earliest well-developed writing forms of the Chinese character system, and their carving techniques have not previously been studied by tool-mark analysis with microscopy. In this study, a digital microscope with three-dimensional surface reconstruction based on extended depth of focus technology was used to investigate tool marks on the surface of four pieces of oracle bones excavated at the eastern area of Huayuanzhuang, Yinxu site (ca. 1319-1046 BC), the last capital of the Shang dynasty, Henan province, China. The results show that there were two procedures for carving the characters on the analyzed tortoise shells. The first procedure was direct carving. The second was "outlining design," which means engraving a formal character after engraving a draft with a pointed tool. Most of the strokes developed by an engraver do not overlap the smaller draft, which implies that the outlining design would be a sound way to avoid errors such as wrong and missing characters. The strokes of these characters have different shapes at the two ends and variations in the width and depth of the grooves. Moreover, the bottom of the grooves is always rugged. Thus, the use of rotary wheel-cutting tools can be ruled out. In most cases, the starting points of the strokes are round or flat while the finishing points are always pointed. Moreover, the strokes should have been engraved from top to bottom. When vertical or horizontal strokes had been engraved, the shell would be turned about 90 degrees to engrave the crossed strokes from top to bottom. There was no preferred order for engraving vertical or horizontal strokes. Since both sides of the grooves of the characters are neat and there are no unorganized tool marks, it is suggested that some sharp tools were used for engraving characters on the shells. Microsc. Res. Tech. 79:827-832, 2016. © 2016 Wiley Periodicals, Inc.

  17. [A review of the principle mythical gods in ancient greek medicine].

    PubMed

    Lips Castro, Walter; Urenda Arias, Catalina

    2014-12-01

    Like their prehistoric ancestors, the people of early civilizations lived in close relation to the supernatural. Facing life-threatening situations, such as illness and death, people of ancient civilizations resorted to divination, prophecy, or the oracle. Regarding the curative activities of ancient Greek civilization, there was a period in which these processes were exclusively linked to a supernatural perspective on the origin of disease. This stage of development of Greek healing practices corresponds to what might be called pre-Hippocratic Greek medicine. In ancient Greek civilization, myths exerted a strong influence on the concepts of disease and the healing processes. Although the first divine figure of Greek mythology related to medicine was Paeon, healing cults related to Apollo and Asclepius had greater importance in tradition and Greek mythology. Apollonian divine healing consisted in the ability to eliminate chaos and keep away evil, while in the Asclepian perspective the role of healer was linked to specific procedures. Personal and medical skills allowed Asclepius to surpass his father and achieve his final consecration as a god of medicine.

  18. ProcessGene-Connect: SOA Integration between Business Process Models and Enactment Transactions of Enterprise Software Systems

    NASA Astrophysics Data System (ADS)

    Wasser, Avi; Lincoln, Maya

    In recent years, both practitioners and applied researchers have become increasingly interested in methods for integrating business process models and enterprise software systems through the deployment of enabling middleware. Integrative BPM research has mainly focused on the conversion of workflow notations into enacted application procedures, and less effort has been invested in enhancing the connectivity between design-level, non-workflow business process models and related enactment systems such as ERP, SCM and CRM. This type of integration is useful at several stages of an IT system lifecycle, from design and implementation through change management, upgrades and rollout. The paper presents an integration method that utilizes SOA for connecting business process models with corresponding enterprise software systems. The method is then demonstrated through an Oracle E-Business Suite procurement process and its ERP transactions.

  19. Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES

    NASA Technical Reports Server (NTRS)

    Hoerger, J.

    1984-01-01

    Users of ADABAS, a relational-like database management system, and its database programming language (NATURAL) are acquiring microcomputers in the hope of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" database on "their own" micro-computer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these micro-computers must be integrated with the centralized DBMS. An easy-to-use and flexible means for transferring logical database files between the central database machine and the micro-computers must be provided. Some of the problems encountered in an effort to accomplish this integration, and possible solutions, are discussed.

  20. Field determination of the three-dimensional hydraulic conductivity tensor of anisotropic media: 2. Methodology and application to fractured rocks

    USGS Publications Warehouse

    Hsieh, Paul A.; Neuman, Shlomo P.; Stiles, Gary K.; Simpson, Eugene S.

    1985-01-01

    The analytical solutions developed in the first paper can be used to interpret the results of cross-hole tests conducted in anisotropic porous or fractured media. In the particular case where the injection and monitoring intervals are short relative to the distance between them, the test results can be analyzed graphically. From the transient variation of hydraulic head in a given monitoring interval, one can determine the directional hydraulic diffusivity, K_d(e)/S_s, and the quantity D/S_s, by curve matching. (Here K_d(e) is the directional hydraulic conductivity parallel to the unit vector e pointing from the injection to the monitoring interval, S_s is specific storage, and D is the determinant of the hydraulic conductivity tensor K.) The principal values and directions of K, together with S_s, can then be evaluated by fitting an ellipsoid to the square roots of the directional diffusivities. Ideally, six directional measurements are required. In practice, a larger number of measurements is often necessary to enable fitting an ellipsoid to the data by least squares. If the computed [K_d(e)/S_s]^(1/2) values fluctuate so severely that a meaningful least squares fit is not possible, one has a direct indication that the subsurface does not behave as a uniform anisotropic medium on the scale of the test. Test results from a granitic rock near Oracle in southern Arizona are presented to illustrate how the method works for fractured rocks. At the site, the Oracle granite is shown to respond as a near-uniform, anisotropic medium, the hydraulic conductivity of which is strongly controlled by the orientations of major fracture sets. The cross-hole test results are shown to be consistent with the results of more than 100 single-hole packer tests conducted at the site.
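
    The tensor-fitting step can be sketched as a linear least-squares problem: given unit directions e_i and measured directional diffusivities d_i, solve d_i ≈ e_iᵀ A e_i for a symmetric tensor A and take its eigendecomposition for the principal values and directions. The simplified sketch below (synthetic data, invented names) fits the diffusivities directly rather than their square roots, so it illustrates the idea rather than reproducing the paper's graphical ellipsoid procedure.

      # Least-squares fit of a symmetric tensor to directional diffusivity data.
      import numpy as np

      def fit_symmetric_tensor(directions, values):
          """directions: (m, 3) unit vectors; values: (m,) measured d_i."""
          e = np.asarray(directions, dtype=float)
          rows = np.column_stack([
              e[:, 0]**2, e[:, 1]**2, e[:, 2]**2,                        # Axx, Ayy, Azz
              2*e[:, 0]*e[:, 1], 2*e[:, 0]*e[:, 2], 2*e[:, 1]*e[:, 2],   # Axy, Axz, Ayz
          ])
          c, *_ = np.linalg.lstsq(rows, np.asarray(values, dtype=float), rcond=None)
          A = np.array([[c[0], c[3], c[4]],
                        [c[3], c[1], c[5]],
                        [c[4], c[5], c[2]]])
          principal_values, principal_directions = np.linalg.eigh(A)
          return A, principal_values, principal_directions

      # Six independent directions are the minimum; extra ones go into the least-squares fit.
      rng = np.random.default_rng(0)
      true_A = np.diag([5.0, 2.0, 0.5])
      e = rng.normal(size=(12, 3))
      e /= np.linalg.norm(e, axis=1, keepdims=True)
      d = np.einsum("ij,jk,ik->i", e, true_A, e)
      _, principal, _ = fit_symmetric_tensor(e, d)
      print(principal)                           # approximately [0.5, 2.0, 5.0]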
