Database on Demand: insight into how to build your own DBaaS
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio
2015-12-01
At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that have traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; at present, offerings from three major RDBMS (relational database management system) vendors are available. In this article we present the current status of the service after almost three years of operations, some insight into our software engineering redesign, and the near-future evolution of the service.
Cryptanalysis of Password Protection of Oracle Database Management System (DBMS)
NASA Astrophysics Data System (ADS)
Koishibayev, Timur; Umarova, Zhanat
2016-04-01
This article discusses the encryption algorithms currently available in the Oracle database, as well as a proposed upgraded encryption algorithm consisting of four steps. In conclusion, we present an analysis of the password encryption of the Oracle Database.
Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale
NASA Astrophysics Data System (ADS)
Canali, L.; Baranowski, Z.; Kothuri, P.
2017-10-01
This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services, and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal and interest of this investigation is to increase the scalability and optimize the cost/performance footprint for some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of the CERN accelerator controls and logging databases. The tested solution makes it possible to run reports on the controls data offloaded into Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system with a scalable analytics engine.
Evolution of Database Replication Technologies for WLCG
NASA Astrophysics Data System (ADS)
Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca
2015-12-01
In this article we summarize several years of experience with the database replication technologies used at WLCG, and we provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report in this article on the preparation for, and later upgrades of, remote replication done in collaboration with ATLAS and Tier-1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.
Quantum search of a real unstructured database
NASA Astrophysics Data System (ADS)
Broda, Bogusław
2016-02-01
A simple circuit implementation of the oracle for Grover's quantum search of a real unstructured classical database is proposed. The oracle contains a kind of quantumly accessible classical memory, which stores the database.
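As a concrete illustration of the kind of oracle this abstract discusses, here is a minimal numpy sketch of the standard textbook phase-flip oracle and diffusion step for Grover's search; it is not the paper's specific memory-backed circuit, and all names and sizes are illustrative.

```python
# Textbook Grover phase-flip oracle over N = 2^n database slots:
# O|x> = -|x> for the marked index, O|x> = |x> otherwise.
import numpy as np

def grover_oracle(n_qubits: int, marked: int) -> np.ndarray:
    dim = 2 ** n_qubits
    diag = np.ones(dim)
    diag[marked] = -1.0          # phase flip on the marked item
    return np.diag(diag)

def diffusion(n_qubits: int) -> np.ndarray:
    dim = 2 ** n_qubits
    s = np.full((dim, 1), 1.0 / np.sqrt(dim))  # uniform superposition
    return 2 * s @ s.T - np.eye(dim)           # 2|s><s| - I

n, marked = 3, 5
state = np.full(2 ** n, 1.0 / np.sqrt(2 ** n))
for _ in range(int(np.pi / 4 * np.sqrt(2 ** n))):  # ~(pi/4)*sqrt(N) steps
    state = diffusion(n) @ grover_oracle(n, marked) @ state
print(np.argmax(np.abs(state)))  # 5, with probability ~0.94 for N = 8
```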
Network Configuration of Oracle and Database Programming Using SQL
NASA Technical Reports Server (NTRS)
Davis, Melton; Abdurrashid, Jibril; Diaz, Philip; Harris, W. C.
2000-01-01
A database can be defined as a collection of information organized in such a way that it can be retrieved and used. A database management system (DBMS) can further be defined as the tool that enables us to manage and interact with the database. The Oracle 8 Server is a state-of-the-art information management environment. It is a repository for very large amounts of data, and gives users rapid access to that data. The Oracle 8 Server allows for sharing of data between applications; the information is stored in one place and used by many systems. My research will focus primarily on SQL (Structured Query Language) programming. SQL is the way you define and manipulate data in Oracle's relational database. SQL is the industry standard adopted by all database vendors. When programming with SQL, you work on sets of data (i.e., information is not processed one record at a time).
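To make the set-based style concrete, here is a hedged sketch using the python-oracledb driver; the employees table, its columns, and the connection details are hypothetical placeholders, not part of the report above.

```python
import oracledb

# Connection details are placeholders.
conn = oracledb.connect(user="scott", password="tiger",
                        dsn="dbhost.example.com/orclpdb1")
cur = conn.cursor()

# One set-based UPDATE touches every matching row at once;
# no row-at-a-time loop runs in the client.
cur.execute(
    "UPDATE employees SET salary = salary * 1.05 WHERE dept_id = :d",
    d=10,
)
conn.commit()

# Likewise, a single query returns the whole result set.
cur.execute("SELECT dept_id, AVG(salary) FROM employees GROUP BY dept_id")
for dept_id, avg_salary in cur:
    print(dept_id, avg_salary)
conn.close()
```

The point of the example is that each statement operates on a whole set of rows at once; the client never processes one record at a time.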
Oracle Database 10g: a platform for BLAST search and Regular Expression pattern matching in life sciences
Stephens, Susie M; Chen, Jake Y; Davidson, Marcel G; Thomas, Shiby; Trute, Barry M
2005-01-01
As database management systems expand their array of analytical functionality, they become powerful research engines for biomedical data analysis and drug discovery. Databases can hold most of the data types commonly required in life sciences and consequently can be used as flexible platforms for the implementation of knowledgebases. Performing data analysis in the database simplifies data management by minimizing the movement of data from disks to memory, allowing pre-filtering and post-processing of datasets, and enabling data to remain in a secure, highly available environment. This article describes the Oracle Database 10g implementation of BLAST and Regular Expression Searches and provides case studies of their usage in bioinformatics. http://www.oracle.com/technology/software/index.html.
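As a hedged sketch of the in-database Regular Expression searching the article describes, the query below uses Oracle's REGEXP_LIKE (available since Oracle Database 10g); the proteins table, its columns, and the connection details are hypothetical.

```python
import oracledb

conn = oracledb.connect(user="biouser", password="secret",
                        dsn="dbhost.example.com/orclpdb1")
cur = conn.cursor()

# N-glycosylation-like motif: N, any residue except P, then S or T.
# The filtering runs inside the database engine, so only matching
# accessions are returned to the client.
motif = "N[^P](S|T)"
cur.execute(
    "SELECT accession FROM proteins WHERE REGEXP_LIKE(aa_seq, :pat)",
    pat=motif,
)
for (accession,) in cur:
    print(accession)
conn.close()
```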
Operational Experience with the Frontier System in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter
2012-06-20
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
Operational Experience with the Frontier System in CMS
NASA Astrophysics Data System (ADS)
Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter; Du, Ran; Wang, Weizhen
2012-12-01
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
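A hedged sketch of the client-side access pattern described above: fetching conditions data over HTTP through a nearby Squid cache rather than directly from the central server. Both URLs and the query parameters are hypothetical placeholders.

```python
import requests

SQUID_PROXY = "http://squid.site.example.org:3128"   # local cache
LAUNCHPAD_URL = "http://frontier.example.cern.ch:8000/Frontier/query"

response = requests.get(
    LAUNCHPAD_URL,
    params={"encoding": "BLOB", "p1": "example-conditions-query"},
    proxies={"http": SQUID_PROXY},
    timeout=10,
)
response.raise_for_status()
print(len(response.content), "bytes of conditions payload")
```

Because identical query URLs map to the same cache entry, repeated requests from a site are answered by the local Squid and never reach the central Oracle database.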
Oracle Applications Patch Administration Tool (PAT) Beta Version
DOE Office of Scientific and Technical Information (OSTI.GOV)
2002-01-04
PAT is a Patch Administration Tool that provides analysis, tracking, and management of Oracle Application patches. Its capabilities are outlined below.
Patch Analysis & Management:
- Patch Data Maintenance -- track which Oracle Application patches have been applied to which database instance and machine.
- Patch Analysis -- capture text files (readme.txt and driver files), with comparison detail for forms, reports, PL/SQL packages, SQL scripts, and JSP modules; parse and load the current applptch.txt (10.7) or load patch data from the Oracle Application database patch tables (11i).
- Display Analysis -- compare a patch to be applied with the Appl_top code versions currently installed in the Oracle Application, including patch detail and module comparison detail; analyze and display one Oracle Application module patch.
- Patch Management -- automatic queuing and execution of patches.
Administration:
- Parameter maintenance -- settings for the directory structure of the Oracle Application appl_top.
- Validation data maintenance -- machine names and instances to patch.
Operation:
- Patch Data Maintenance -- schedule a patch (queue for later execution), run a patch (queue for immediate execution), review the patch logs.
- Patch Management Reports.
Oracle Database 10g: a platform for BLAST search and Regular Expression pattern matching in life sciences
Stephens, Susie M.; Chen, Jake Y.; Davidson, Marcel G.; Thomas, Shiby; Trute, Barry M.
2005-01-01
As database management systems expand their array of analytical functionality, they become powerful research engines for biomedical data analysis and drug discovery. Databases can hold most of the data types commonly required in life sciences and consequently can be used as flexible platforms for the implementation of knowledgebases. Performing data analysis in the database simplifies data management by minimizing the movement of data from disks to memory, allowing pre-filtering and post-processing of datasets, and enabling data to remain in a secure, highly available environment. This article describes the Oracle Database 10g implementation of BLAST and Regular Expression Searches and provides case studies of their usage in bioinformatics. http://www.oracle.com/technology/software/index.html PMID:15608287
OrChem - An open source chemistry search engine for Oracle(R).
Rijnbeek, Mark; Steinbeck, Christoph
2009-10-22
Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net.
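OrChem's own PL/SQL interface is not reproduced here; instead, the following is a minimal plain-Python sketch of the fingerprint-based Tanimoto similarity, with a cut-off, that underlies this kind of similarity search. The fingerprints and compound identifiers are invented for illustration.

```python
def tanimoto(fp_a: int, fp_b: int) -> float:
    """Tanimoto coefficient of two bit-string fingerprints packed
    into Python integers."""
    common = bin(fp_a & fp_b).count("1")
    total = bin(fp_a | fp_b).count("1")
    return common / total if total else 0.0

# Invented fingerprints for a tiny in-memory "database".
database = {"CHEM-1": 0b101101, "CHEM-2": 0b111111, "CHEM-3": 0b000110}
query_fp = 0b101111
cutoff = 0.7

hits = {cid: score for cid, fp in database.items()
        if (score := tanimoto(query_fp, fp)) >= cutoff}
print(hits)  # only compounds at or above the similarity cut-off
```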
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.
Database interfaces on NASA's heterogeneous distributed database system
NASA Technical Reports Server (NTRS)
Huang, S. H. S.
1986-01-01
The purpose of the ORACLE interface is to enable the DAVID program to submit queries and transactions to databases running under the ORACLE DBMS. The interface package is made up of several modules, whose progress is described below. The two approaches used in implementing the interface are also discussed. A detailed discussion of the design of the templates is given, and concluding remarks are presented.
Managing vulnerabilities and achieving compliance for Oracle databases in a modern ERP environment
NASA Astrophysics Data System (ADS)
Hölzner, Stefan; Kästle, Jan
In this paper we summarize good practices on how to achieve compliance for an Oracle database in combination with an ERP system. We use an integrated approach to cover both the management of vulnerabilities (preventive measures) and the use of logging and auditing features (detective controls). This concise overview focuses on the combination of Oracle and SAP and its dependencies, but also outlines security issues that arise with other ERP systems. Using practical examples, we demonstrate common vulnerabilities and countermeasures as well as guidelines for the use of auditing features.
NASA Astrophysics Data System (ADS)
Shahzad, Muhammad A.
1999-02-01
With the emergence of data warehousing, decision support systems have evolved considerably. At the core of these warehousing systems lies a good database management system. The database server used for data warehousing is responsible for providing robust data management, scalability, high-performance query processing and integration with other servers. Oracle, an early entrant among warehousing servers, provides a wide range of features for facilitating data warehousing. This paper reviews the features of data warehousing, explaining the concept of data warehousing and, lastly, the features of Oracle servers for implementing a data warehouse.
An Object-Relational IFC Storage Model Based on Oracle Database
NASA Astrophysics Data System (ADS)
Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan
2016-06-01
As building models become increasingly complicated, the level of collaboration across professions attracts more attention in the architecture, engineering and construction (AEC) industry. In order to adapt to this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. First, we establish the mapping rules between the data types in the IFC specification and the Oracle database. Second, we design the IFC database according to the relationships among IFC entities. Third, we parse the IFC file and extract the IFC data. And lastly, we store the IFC data into the corresponding tables in the IFC database. In our experiments, three different building models are selected to demonstrate the effectiveness of our storage model. The comparison of experimental statistics proves that IFC data are lossless during data exchange.
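A hedged sketch of the parse-and-store step outlined above: entity instances are pulled out of an IFC (STEP physical file) with a regular expression and bulk-inserted into a hypothetical Oracle table ifc_entities(entity_id, entity_type, attributes) via python-oracledb; none of these names come from the paper.

```python
import re
import oracledb

# IFC data lines look like: #42=IFCWALL('2O2Fr$t4X7Zf8NOew3FLOH',...);
LINE_RE = re.compile(r"#(\d+)\s*=\s*(IFC\w+)\s*\((.*)\);")

def parse_ifc(path):
    """Yield (entity_id, entity_type, attribute_string) tuples."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = LINE_RE.match(line.strip())
            if m:
                yield int(m.group(1)), m.group(2), m.group(3)

conn = oracledb.connect(user="ifc", password="secret",
                        dsn="dbhost.example.com/orclpdb1")
cur = conn.cursor()
# executemany loads the whole batch in a single round trip.
cur.executemany(
    "INSERT INTO ifc_entities (entity_id, entity_type, attributes) "
    "VALUES (:1, :2, :3)",
    list(parse_ifc("building_model.ifc")),
)
conn.commit()
conn.close()
```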
Database Migration for Command and Control
2002-11-01
[Flattened table excerpt; the recoverable content lists C2 data sources and their migration attributes: air defense data (defended asset list, TADIL warnings) held in Oracle 7.3.2, with automated OLTP processes, flat-file transfers, discrete transactions with data updates and NRT response required; mission data pulled via standard SQL; mission-level execution data in Oracle 7.3 with user updates, external interfaces, auto/manual backup, messaging, proprietary internal replication, SQL, and a web server.]
Integrating Radar Image Data with Google Maps
NASA Technical Reports Server (NTRS)
Chapman, Bruce D.; Gibas, Sarah
2010-01-01
A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA s Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Utilizing NASA s internal AIRSAR site, the new Web site features more sophisticated visualization tools that enable the general public to have access to these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that took care of the software for the interactive map, and three that were for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back-end of the AIRSAR Web site was updated in order to access the MySQL database. To do this, a few of the scripts needed to be modified; specifically three Perl scripts that query that database. The database connections were then updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented that replaced one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.
Call-Center Based Disease Management of Pediatric Asthmatics
2006-04-01
This study will measure the impact of CBDMP, which promotes patient education and empowerment, on multiple factors including patient/caregiver quality... Prepare and reproduce patient education materials and informed consent work sheets. Contract an Oracle database administrator to establish the database for... Patient education materials and informed consent documents were reproduced. A web-based Oracle database was determined to be both prohibitively
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.
2012-12-01
At CERN, a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the existing centralised Oracle-based database services. Database on Demand (DBoD) empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, at present the open community version of MySQL and a single-instance Oracle database server. This article describes the technology approach adopted to face this challenge, the service level agreement (SLA) that the project provides, and possible evolution scenarios.
A case study for a digital seabed database: Bohai Sea engineering geology database
NASA Astrophysics Data System (ADS)
Tianyun, Su; Shikui, Zhai; Baohua, Liu; Ruicai, Liang; Yanpeng, Zheng; Yong, Wang
2006-07-01
This paper discusses the design of the ORACLE-based Bohai Sea engineering geology database structure, covering requirements analysis, conceptual structure analysis, logical structure analysis, physical structure analysis and security design. In the study, we used the object-oriented Unified Modeling Language (UML) to model the conceptual structure of the database, and used the powerful data management functions that the object-relational database ORACLE provides to organize and manage the storage space and improve its security. By this means, the database can deliver rapid and highly effective performance in data storage, maintenance and querying to satisfy the application requirements of the Bohai Sea Oilfield Paradigm Area Information System.
The Battle Command Sustainment Support System: Initial Analysis Report
2016-09-01
diagnostic monitoring, asynchronous commits, and others. The other components of the NEDP include a main forwarding gateway/web server and one or more... NATIONAL ENTERPRISE DATA PORTAL ANALYSIS The NEDP comprises an Oracle Database 10g, referred to as the National Data Server, and several other... data forwarding gateways (DFG). Together with the Oracle Database 10g, these components provide a heterogeneous data source that aligns various data
An approach in building a chemical compound search engine in oracle database.
Wang, H; Volarath, P; Harrison, R
2005-01-01
Searching for and identifying chemical compounds is an important process in drug design and in chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for serving as a chemical compound search engine in an Oracle database is described. The framework is devoted to eliminating data type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation underlines the efficiency and simplicity of the framework.
OrChem - An open source chemistry search engine for Oracle®
2009-01-01
Background Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Results Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. Availability OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net. PMID:20298521
[Application of the life sciences platform based on Oracle to biomedical information].
Zhao, Zhi-Yun; Li, Tai-Huan; Yang, Hong-Qiao
2008-03-01
The life sciences platform based on Oracle database technology is introduced in this paper. By providing a powerful data access, integrating a variety of data types, and managing vast quantities of data, the software presents a flexible, safe and scalable management platform for biomedical data processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Mathew; Bowen, Brian; Coles, Dwight
The Middleware Automated Deployment Utilities consist of these three components. MAD: a utility designed to automate the deployment of Java applications to multiple Java application servers; the product contains a front-end web utility and back-end deployment scripts. MAR: a web front end to maintain and update the components inside the database. MWR-Encrypt: a web utility to convert a text string to an encrypted string that is used by the Oracle WebLogic application server; the encryption is done using the built-in functions of the Oracle WebLogic product and is mainly used to create an encrypted version of a database password.
Space Station Freedom environmental database system (FEDS) for MSFC testing
NASA Technical Reports Server (NTRS)
Story, Gail S.; Williams, Wendy; Chiu, Charles
1991-01-01
The Water Recovery Test (WRT) at Marshall Space Flight Center (MSFC) is the first demonstration of integrated water recovery systems for potable and hygiene water reuse as envisioned for Space Station Freedom (SSF). In order to satisfy the safety and health requirements placed on the SSF program and facilitate test data assessment, an extensive laboratory analysis database was established to provide a central archive and data retrieval function. The database is required to store analysis results for physical, chemical, and microbial parameters measured from water, air and surface samples collected at various locations throughout the test facility. The Oracle Relational Database Management System (RDBMS) was utilized to implement a secured on-line information system with the ECLSS WRT program as the foundation for this system. The database is supported on a VAX/VMS 8810 series mainframe and is accessible from the Marshall Information Network System (MINS). This paper summarizes the database requirements, system design, interfaces, and future enhancements.
The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity
NASA Astrophysics Data System (ADS)
Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Favaro, Matteo; Gregori, Daniele; Prosperini, Andrea; Pezzi, Michele; Sapunenko, Vladimir; Zizzi, Giovanni; Vagnoni, Vincenzo
2015-05-01
The consolidation of Mass Storage services at the INFN-CNAF Tier-1 Storage department over the last 5 years has resulted in a reliable, high-performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based upon an integration between the IBM GPFS parallel filesystem and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided by XRootD and HTTP/WebDAV endpoints. Besides these services, an Oracle database facility is in production, characterized by an effective level of parallelism, redundancy and availability. This facility runs databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, such as Real Application Clusters (RAC), Automatic Storage Management (ASM) and the Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management and downtime reduction. The aim of the present paper is to illustrate the state of the art of the INFN-CNAF Tier-1 Storage department infrastructures and software services, and to give a brief outlook on forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.
A web-based platform for virtual screening.
Watson, Paul; Verdonk, Marcel; Hartshorn, Michael J
2003-09-01
A fully integrated, web-based virtual screening platform has been developed to allow rapid virtual screening of large numbers of compounds. ORACLE is used to store information at all stages of the process. The system includes ATLAS, a large database of historical compounds from high-throughput screening (HTS) chemical suppliers, containing over 3.1 million unique compounds with their associated physicochemical properties (ClogP, MW, etc.). The database can be screened using a web-based interface to produce compound subsets for virtual screening or virtual library (VL) enumeration. In order to carry out the latter task within ORACLE, a reaction data cartridge has been developed. Virtual libraries can be enumerated rapidly using the web-based interface to the cartridge. The compound subsets can be seamlessly submitted for virtual screening experiments, and the results can be viewed via another web-based interface allowing ad hoc querying of the virtual screening data stored in ORACLE.
NASA Technical Reports Server (NTRS)
Ramirez, Eric; Gutheinz, Sandy; Brison, James; Ho, Anita; Allen, James; Ceritelli, Olga; Tobar, Claudia; Nguyen, Thuykien; Crenshaw, Harrel; Santos, Roxann
2008-01-01
Supplier Management System (SMS) allows for a consistent, agency-wide performance rating system for suppliers used by NASA. This version (2.0) combines separate databases into one central database that allows for the sharing of supplier data. Information extracted from the NBS/Oracle database can be used to generate ratings. Also, supplier ratings can now be generated in the areas of cost, product quality, delivery, and audit data. Supplier data can be charted based on real-time user input. Based on these individual ratings, an overall rating can be generated. Data that normally would be stored in multiple databases, each requiring its own log-in, is now readily available and easily accessible with only one log-in required. Additionally, the database can accommodate the storage and display of quality-related data that can be analyzed and used in the supplier procurement decision-making process. Moreover, the software allows for a Closed-Loop System (supplier feedback), as well as the capability to communicate with other federal agencies.
Evolution of the architecture of the ATLAS Metadata Interface (AMI)
NASA Astrophysics Data System (ADS)
Odier, J.; Aidel, O.; Albrand, S.; Fulachier, J.; Lambert, F.
2015-12-01
The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of provided functions have dramatically increased. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution of AMI from its beginnings, when it was served by a single MySQL backend database server, to its current state: a cluster of virtual machines at the French Tier-1, an Oracle database at Lyon with complementary replication to the Oracle database at CERN, and an AMI backup server.
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.
2016-12-01
The Magnetics Information Consortium (https://earthref.org/MagIC/) develops and maintains a database and web application for supporting the paleo-, geo-, and rock magnetic scientific community. Historically, this objective has been met with an Oracle database and a Perl web application at the San Diego Supercomputer Center (SDSC). The Oracle Enterprise Cluster at SDSC, however, was decommissioned in July of 2016 and the cost for MagIC to continue using Oracle became prohibitive. This provided MagIC with a unique opportunity to reexamine the entire technology stack and data model. MagIC has developed an open-source web application using the Meteor (http://meteor.com) framework and a MongoDB database. The simplicity of the open-source full-stack framework that Meteor provides has improved MagIC's development pace and the increased flexibility of the data schema in MongoDB encouraged the reorganization of the MagIC Data Model. As a result of incorporating actively developed open-source projects into the technology stack, MagIC has benefited from their vibrant software development communities. This has translated into a more modern web application that has significantly improved the user experience for the paleo-, geo-, and rock magnetic scientific community.
Fiacco, P. A.; Rice, W. H.
1991-01-01
Computerized medical record systems require structured database architectures for information processing. However, the data must be able to be transferred across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model, with a graphical user interface, into an outpatient medical record system known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732
C-A1-03: Considerations in the Design and Use of an Oracle-based Virtual Data Warehouse
Bredfeldt, Christine; McFarland, Lela
2011-01-01
Background/Aims The amount of clinical data available for research is growing exponentially. As it grows, increasing the efficiency of both data storage and data access becomes critical. Relational database management systems (rDBMS) such as Oracle are ideal solutions for managing longitudinal clinical data because they support large-scale data storage and highly efficient data retrieval. In addition, they can greatly simplify the management of large data warehouses, including security management and regular data refreshes. However, the HMORN Virtual Data Warehouse (VDW) was originally designed based on SAS datasets, and this design choice has a number of implications for both the design and use of an Oracle-based VDW. From a design standpoint, VDW tables are designed as flat SAS datasets, which do not take full advantage of Oracle indexing capabilities. From a data retrieval standpoint, standard VDW SAS scripts do not take advantage of SAS pass-through SQL capabilities to enable Oracle to perform the processing required to narrow datasets to the population of interest. Methods Beginning in 2009, the research department at Kaiser Permanente in the Mid-Atlantic States (KPMA) has developed an Oracle-based VDW according to the HMORN v3 specifications. In order to take advantage of the strengths of relational databases, KPMA introduced an interface layer to the VDW data, using views to provide access to standardized VDW variables. In addition, KPMA has developed SAS programs that provide access to SQL pass-through processing for first-pass data extraction into SAS VDW datasets for processing by standard VDW scripts. Results We discuss both the design and performance considerations specific to the KPMA Oracle-based VDW. We benchmarked performance of the Oracle-based VDW using both standard VDW scripts and an initial pre-processing layer to evaluate speed and accuracy of data return. Conclusions Adapting the VDW for deployment in an Oracle environment required minor changes to the underlying structure of the data. Further modifications of the underlying data structure would lead to performance enhancements. Maximally efficient data access for standard VDW scripts requires an extra step that involves restricting the data to the population of interest at the data server level prior to standard processing.
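The abstract's key point, restricting the data to the population of interest at the data server level before standard processing, can be sketched as follows in Python rather than SAS; the vdw_utilization table, its columns, and the connection details are hypothetical.

```python
import datetime
import oracledb

conn = oracledb.connect(user="vdw", password="secret",
                        dsn="research.example.org/vdwpdb")
cur = conn.cursor()

# The WHERE clause executes inside Oracle, so only the cohort's rows
# cross the network -- the "first-pass data extraction" of the abstract.
cur.execute(
    "SELECT mrn, adate, enctype FROM vdw_utilization "
    "WHERE adate >= :start_date AND enctype = :enc",
    start_date=datetime.date(2009, 1, 1), enc="IP",
)
rows = cur.fetchall()
print(len(rows), "rows extracted for downstream processing")
conn.close()
```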
Zaffino, Paolo; Ciardo, Delia; Raudaschl, Patrik; Fritscher, Karl; Ricotti, Rosalinda; Alterio, Daniela; Marvaso, Giulia; Fodor, Cristiana; Baroni, Guido; Amato, Francesco; Orecchia, Roberto; Jereczek-Fossa, Barbara Alicja; Sharp, Gregory C; Spadea, Maria Francesca
2018-05-22
Multi-Atlas Based Segmentation (MABS) uses a database of atlas images, and an atlas selection process is used to choose an atlas subset for registration and voting. In the current state of the art, atlases are chosen according to a similarity criterion between the target subject and each atlas in the database. In this paper, we propose a new concept for atlas selection that relies on selecting the best performing group of atlases rather than the group of highest scoring individual atlases. Experiments were performed using CT images of 50 patients, with contours of the brainstem and parotid glands. The dataset was randomly split into 2 groups: 20 volumes were used as an atlas database and 30 served as target subjects for testing. Classic oracle selection, where atlases are chosen by the highest Dice Similarity Coefficient (DSC) with the target, was performed. This was compared to oracle group selection, where all the combinations of atlas subgroups were considered and scored by computing the DSC with the target subject. Subsequently, Convolutional Neural Networks (CNNs) were designed to predict the best group of atlases. The results were also compared with the selection strategy based on Normalized Mutual Information (NMI). Oracle group selection proved to be significantly better than classic oracle selection (p < 10^-5). Atlas group selection led to a median±interquartile DSC of 0.740±0.084, 0.718±0.086 and 0.670±0.097 for the brainstem and left/right parotid glands respectively, outperforming NMI selection (0.676±0.113, 0.632±0.104 and 0.606±0.118; p < 0.001) as well as classic oracle selection. The implemented methodology is a proof of principle that selecting the atlases by considering the performance of the entire group of atlases, instead of each single atlas, leads to higher segmentation accuracy, being even better than the current oracle strategy. This finding opens a new discussion about the most appropriate atlas selection criterion for MABS. © 2018 Institute of Physics and Engineering in Medicine.
DEVELOPMENT AND APPLICATION OF THE DORIAN (DOSE-RESPONSE INFORMATION ANALYSIS) SYSTEM
Event Driven Messaging with Role-Based Subscriptions
NASA Technical Reports Server (NTRS)
Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Zendejas, Silvino; Sadaqathulla, Syed
2009-01-01
Event Driven Messaging with Role-Based Subscriptions (EDM-RBS) is a framework integrated into the Service Management Database (SMDB) to allow for role-based and subscription-based delivery of synchronous and asynchronous messages over JMS (Java Messaging Service), SMTP (Simple Mail Transfer Protocol), or SMS (Short Messaging Service). This allows for 24/7 operation with users in all parts of the world. The software classifies messages by triggering data type, application source, owner of the data triggering the event (mission), classification, sub-classification and various other secondary classifying tags. Messages are routed to applications or users based on subscription rules using a combination of the above message attributes. This program provides a framework for identifying connected users and their applications for targeted delivery of messages over JMS to the client applications the user is logged into. EDM-RBS provides the ability to send notifications over e-mail or pager rather than having to rely on a live human to do it. It is implemented as an Oracle application that uses Oracle relational database management system intrinsic functions. It is configurable to use the Oracle AQ JMS API or an external JMS provider for messaging. It fully integrates into the event-logging framework of the Service Management Database (SMDB).
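As a conceptual sketch only (not the SMDB/Oracle AQ implementation), the routing idea of matching a message's classifying attributes against per-subscriber rules might look like this; all names and attributes are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Subscription:
    subscriber: str
    channel: str                                # e.g. "jms", "smtp", "sms"
    rules: dict = field(default_factory=dict)   # attribute -> required value

    def matches(self, message: dict) -> bool:
        return all(message.get(k) == v for k, v in self.rules.items())

subs = [
    Subscription("ops-console", "jms", {"classification": "alert"}),
    Subscription("oncall@example.org", "smtp",
                 {"mission": "MRO", "classification": "alert"}),
]

message = {"mission": "MRO", "classification": "alert",
           "source": "downlink-monitor"}

for sub in subs:
    if sub.matches(message):
        print(f"deliver via {sub.channel} to {sub.subscriber}")
```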
Database Systems and Oracle: Experiences and Lessons Learned
ERIC Educational Resources Information Center
Dunn, Deborah
2005-01-01
In a tight job market, IT professionals with database experience are likely to be in great demand. Companies need database personnel who can help improve access to and security of data. The events of September 11 have increased business' awareness of the need for database security, backup, and recovery procedures. It is our responsibility to…
Performance assessment of EMR systems based on post-relational database.
Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji
2012-08-01
Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all the system users to access data-with a fast response time-anywhere and at anytime. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the identical database Caché and operates efficiently at the Miyazaki University Hospital in Japan. The results proved that the post-relational database Caché works faster than the relational database Oracle and showed perfect performance in the real-time EMR system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, D.
Concerns about the long-term viability of SFS as the metadata store for HPSS have been increasing. A concern that Transarc may discontinue support for SFS motivates us to consider alternative means to store HPSS metadata. The obvious alternative is a commercial database. Commercial databases have the necessary characteristics for storage of HPSS metadata records. They are robust and scalable and can easily accommodate the volume of data that must be stored. They provide programming interfaces, transactional semantics and a full set of maintenance and performance enhancement tools. A team was organized within the HPSS project to study and recommend an approach for the replacement of SFS. Members of the team are David Fisher, Jim Minton, Donna Mecozzi, Danny Cook, Bart Parliman and Lynn Jones. We examined several possible solutions to the problem of replacing SFS, and recommended on May 22, 2000, in a report to the HPSS Technical and Executive Committees, to change HPSS into a database application over either Oracle or DB2. We recommended either Oracle or DB2 on the basis of market share and technical suitability. Oracle and DB2 are dominant offerings in the market, and it is in the best interest of HPSS to use a major player's product. Both databases provide a suitable programming interface. Transaction management functions, support for multi-threaded clients and data manipulation languages (DML) are available. These findings were supported in meetings held with technical experts from both companies. In both cases, the evidence indicated that either database would provide the features needed to host HPSS.
Software Application for Supporting the Education of Database Systems
ERIC Educational Resources Information Center
Vágner, Anikó
2015-01-01
The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…
Fermilab Security Site Access Request Database
Use of the online version of the Fermilab Security Site Access Request Database requires that you log in to the ESH&Q Web Site. (Page generated from the ESH&Q Section's Oracle database on May 27, 2018 05:48 AM.)
Joint Battlespace Infosphere: Information Management Within a C2 Enterprise
2005-06-01
using. In version 1.2, we support both MySQL and Oracle as underlying implementations, where the XML metadata schema is mapped into relational tables in... Identity Servers, Role-Based Access Control, and Policy Representation – Databases: Oracle, MySQL, TigerLogic, Berkeley XML DB... Instrumentation Services
2009-01-01
[Flattened comparison table excerpt; the recoverable content lists supported platforms for the evaluated survey packages: database servers (Oracle 9i/10g, MySQL, MS SQL Server), operating systems (Windows 2003 Server, Windows 2000 Server (32-bit), ...), and web servers (WebStar (Mac OS X), SunONE, Internet Information Services (IIS)).] ...challenges of Web-based surveys are: 1) identifying the best Commercial Off the Shelf (COTS) Web-based survey packages to serve the particular
Di Chiara, Antonio; Zonzin, Pietro; Pavoni, Daisy; Fioretti, Paolo Maria
2003-06-01
In the era of evidence-based medicine, monitoring adherence to guidelines is fundamental in order to verify diagnostic and therapeutic processes. Electronic, paperless databases allow higher data quality, lower costs and timely analysis, with overall advantages over traditional surveys. The RUTA project (acronym of the Triveneto Registry of ANMCO CCUs) was designed in 1999 with the aim of creating an informatic network among the coronary care units of a large Italian region for a permanent survey of patients admitted for acute myocardial infarction. The information ranges from the pre-hospital phase to discharge, including all relevant clinical and management variables. The database uses the Personal Oracle DBMS, with PowerBuilder as the user interface, on a Windows platform. Anonymous data are sent to a central server.
Performance related issues in distributed database systems
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
The key elements of research performed during the year-long effort of this project are: investigate the effects of heterogeneity in distributed real-time systems; study the requirements of TRAC toward building a heterogeneous database system; study the effects of performance modeling on distributed database performance; and experiment with an ORACLE-based heterogeneous system.
The intelligent database machine
NASA Technical Reports Server (NTRS)
Yancey, K. E.
1985-01-01
The IDM 500 database machine was compared with the Oracle database to determine whether the IDM 500 would better serve the needs of the MSFC database management system than Oracle. The two were compared and the performance of the IDM was studied. The implementations that work best on each database are indicated. The choice is left to the database administrator.
NASA Electronic Library System (NELS) database schema, version 1.2
NASA Technical Reports Server (NTRS)
Melebeck, Clovis J.
1991-01-01
The database tables used by NELS version 1.2 are discussed. To provide the current functional capability offered by NELS, nineteen tables were created with ORACLE. Each table entry lists the ORACLE table name and provides a brief description of the table's intended use or function. The following sections cover four basic categories of tables: NELS object classes, NELS collections, NELS objects, and NELS supplemental tables. Also included in each section is a definition and/or relationship of each field to other fields or tables. The primary key(s) for each table is indicated with a single asterisk (*), while foreign keys are indicated with double asterisks (**). The primary key(s) indicate the key(s) which uniquely identify a record for that table. The foreign key(s) are used to identify additional information in other table(s) for that record. The two appendices contain the commands used to construct the ORACLE tables for NELS: Appendix A contains the commands which create the tables defined in the following sections, and Appendix B contains the commands which build the indices for these tables.
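As a hedged illustration of the (*) primary-key and (**) foreign-key conventions described above, the DDL below declares one parent and one child table through python-oracledb; the table and column names are hypothetical, not the actual NELS schema.

```python
import oracledb

conn = oracledb.connect(user="nels", password="secret",
                        dsn="dbhost.example.com/orclpdb1")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE collections (
        collection_id NUMBER PRIMARY KEY,      -- the (*) key
        name          VARCHAR2(80) NOT NULL
    )""")
cur.execute("""
    CREATE TABLE objects (
        object_id     NUMBER PRIMARY KEY,      -- the (*) key
        -- the (**) key: looks up its collection in the parent table
        collection_id NUMBER REFERENCES collections (collection_id),
        title         VARCHAR2(200)
    )""")
conn.commit()
conn.close()
```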
Remote online monitoring and measuring system for civil engineering structures
NASA Astrophysics Data System (ADS)
Kujawińska, Malgorzata; Sitnik, Robert; Dymny, Grzegorz; Karaszewski, Maciej; Michoński, Kuba; Krzesłowski, Jakub; Mularczyk, Krzysztof; Bolewicki, Paweł
2009-06-01
In this paper, a distributed intelligent system for on-line measurement, remote monitoring, and data archiving of civil engineering structures is presented. The system consists of a set of optical, full-field displacement sensors connected to a controlling server. The server conducts measurements according to a list of scheduled tasks and stores the primary data or initial results in a remote centralized database. Simultaneously, the server performs checks ordered by the operator, which may in turn result in an alert or a specific action. The structure of the whole system is analyzed, along with a discussion of possible fields of application and the ways to provide adequate security during data transport. Finally, a working implementation consisting of fringe projection, geometrical moiré, digital image correlation and grating interferometry sensors and an Oracle XE database is presented. Results from the database, utilized for on-line monitoring of a threshold value of strain for an exemplary area of interest on the engineering structure, are presented and discussed.
PS3-21: Extracting Utilization Data from Clarity into VDW Using Oracle and SAS
Chimmula, Srivardhan
2013-01-01
Background/Aims The purpose of the presentation is to demonstrate how we use SAS and Oracle to load the VDW_Utilization, VDW_DX, and VDW_PX tables from Clarity at the Kaiser Permanente Northern California (KPNC) Division of Research (DOR) site. Methods DOR uses the best of Oracle PL/SQL and SAS capabilities in building Extract, Transform and Load (ETL) processes. These processes extract patient encounter, diagnosis, and procedure data from the Teradata-based Clarity. The data are then transformed to fit HMORN's VDW definitions of the tables. The data are then loaded into the Oracle-based VDW tables on DOR's research database, and finally a copy of each table is also created as a SAS dataset. Results DOR builds robust and efficient ETL processes that refresh the VDW Utilization tables on a monthly basis, processing millions of records/observations. The ETL processes have the capability to identify daily changes in Clarity and update the VDW tables on a daily basis. Conclusions KPNC DOR combines the best of both the Oracle and SAS worlds to build ETL processes that load the data into the VDW Utilization tables efficiently.
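A generic sketch of the extract-transform-load cycle the abstract describes, written in Python rather than SAS/PL/SQL; the source and target schemas are hypothetical stand-ins for Clarity and VDW_Utilization.

```python
import oracledb

src = oracledb.connect(user="clarity_ro", password="secret",
                       dsn="clarity.example.org/srcpdb")
tgt = oracledb.connect(user="vdw", password="secret",
                       dsn="research.example.org/vdwpdb")

ENCTYPE_MAP = {"HOSPITAL ENCOUNTER": "IP", "OFFICE VISIT": "AV"}
INSERT_SQL = ("INSERT INTO vdw_utilization (mrn, adate, enctype) "
              "VALUES (:1, :2, :3)")

read, write = src.cursor(), tgt.cursor()
read.execute("SELECT pat_id, contact_date, enc_type FROM encounters")

batch = []
for pat_id, contact_date, enc_type in read:
    # Transform: recode the source encounter type to a VDW code.
    batch.append((pat_id, contact_date, ENCTYPE_MAP.get(enc_type, "OE")))
    if len(batch) >= 10_000:            # load in bulk batches
        write.executemany(INSERT_SQL, batch)
        batch.clear()
if batch:
    write.executemany(INSERT_SQL, batch)
tgt.commit()
```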
ERIC Educational Resources Information Center
Liu, Xia; Liu, Lai C.; Koong, Kai S.; Lu, June
2003-01-01
Analysis of 300 information technology job postings in two Internet databases identified the following skill categories: programming languages (Java, C/C++, and Visual Basic were most frequent); website development (57% sought SQL and HTML skills); databases (nearly 50% required Oracle); networks (only Windows NT or wide-area/local-area networks);…
FISH Oracle: a web server for flexible visualization of DNA copy number data in a genomic context.
Mader, Malte; Simon, Ronald; Steinbiss, Sascha; Kurtz, Stefan
2011-07-28
The rapidly growing amount of array CGH data requires improved visualization software supporting the process of identifying candidate cancer genes. Optimally, such software should work across multiple microarray platforms, should be able to cope with data from different sources and should be easy to operate. We have developed a web-based software FISH Oracle to visualize data from multiple array CGH experiments in a genomic context. Its fast visualization engine and advanced web and database technology supports highly interactive use. FISH Oracle comes with a convenient data import mechanism, powerful search options for genomic elements (e.g. gene names or karyobands), quick navigation and zooming into interesting regions, and mechanisms to export the visualization into different high quality formats. These features make the software especially suitable for the needs of life scientists. FISH Oracle offers a fast and easy to use visualization tool for array CGH and SNP array data. It allows for the identification of genomic regions representing minimal common changes based on data from one or more experiments. FISH Oracle will be instrumental to identify candidate onco and tumor suppressor genes based on the frequency and genomic position of DNA copy number changes. The FISH Oracle application and an installed demo web server are available at http://www.zbh.uni-hamburg.de/fishoracle.
FISH Oracle: a web server for flexible visualization of DNA copy number data in a genomic context
2011-01-01
Background The rapidly growing amount of array CGH data requires improved visualization software supporting the process of identifying candidate cancer genes. Optimally, such software should work across multiple microarray platforms, should be able to cope with data from different sources and should be easy to operate. Results We have developed a web-based software FISH Oracle to visualize data from multiple array CGH experiments in a genomic context. Its fast visualization engine and advanced web and database technology supports highly interactive use. FISH Oracle comes with a convenient data import mechanism, powerful search options for genomic elements (e.g. gene names or karyobands), quick navigation and zooming into interesting regions, and mechanisms to export the visualization into different high quality formats. These features make the software especially suitable for the needs of life scientists. Conclusions FISH Oracle offers a fast and easy to use visualization tool for array CGH and SNP array data. It allows for the identification of genomic regions representing minimal common changes based on data from one or more experiments. FISH Oracle will be instrumental to identify candidate onco and tumor suppressor genes based on the frequency and genomic position of DNA copy number changes. The FISH Oracle application and an installed demo web server are available at http://www.zbh.uni-hamburg.de/fishoracle. PMID:21884636
Visualization and manipulating the image of a formal data structure (FDS)-based database
NASA Astrophysics Data System (ADS)
Verdiesen, Franc; de Hoop, Sylvia; Molenaar, Martien
1994-08-01
A vector map is a terrain representation with a vector-structured geometry. Molenaar formulated an object-oriented formal data structure (FDS) for 3D single-valued vector maps. This FDS has been implemented in a database (Oracle). In this study we describe a methodology for visualizing an FDS-based database and manipulating the image. A data set retrieved by querying the database is converted into an import file for a drawing application. An objective of this study is that an end user can alter and add terrain objects in the image. The drawing application creates an export file that is compared with the import file. Differences between these files result in updating the database, which involves checks on consistency. In this study AutoCAD is used for visualizing and manipulating the image of the data set. A computer program has been written for the data exchange and conversion between Oracle and AutoCAD. The data structure of the FDS is compared to the data structure of AutoCAD, and the FDS data are converted into the corresponding AutoCAD structure.
Software Engineering Laboratory (SEL) database organization and user's guide, revision 2
NASA Technical Reports Server (NTRS)
Morusiewicz, Linda; Bristow, John
1992-01-01
The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.
Software Engineering Laboratory (SEL) database organization and user's guide
NASA Technical Reports Server (NTRS)
So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas
1989-01-01
The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.
Granitto, Matthew; DeWitt, Ed H.; Klein, Terry L.
2010-01-01
This database was initiated, designed, and populated to collect and integrate geochemical data from central Colorado in order to facilitate geologic mapping, petrologic studies, mineral resource assessment, definition of geochemical baseline values and statistics, environmental impact assessment, and medical geology. The Microsoft Access database serves as a geochemical data warehouse in support of the Central Colorado Assessment Project (CCAP) and contains data tables describing historical and new quantitative and qualitative geochemical analyses determined by 70 analytical laboratory and field methods for 47,478 rock, sediment, soil, and heavy-mineral concentrate samples. Most samples were collected by U.S. Geological Survey (USGS) personnel and analyzed either in the analytical laboratories of the USGS or by contract with commercial analytical laboratories. These data represent analyses of samples collected as part of various USGS programs and projects. Geochemical data from 7,470 sediment and soil samples collected and analyzed under the Atomic Energy Commission National Uranium Resource Evaluation (NURE) Hydrogeochemical and Stream Sediment Reconnaissance (HSSR) program (henceforth called NURE) have also been included in this database. In addition to data from 2,377 samples collected and analyzed under CCAP, this dataset includes archived geochemical data originally entered into the in-house Rock Analysis Storage System (RASS) database (used by the USGS from the mid-1960s through the late 1980s) and the in-house PLUTO database (used by the USGS from the mid-1970s through the mid-1990s). All of these data are maintained in the Oracle-based National Geochemical Database (NGDB). Retrievals from the NGDB and from the NURE database were used to generate most of this dataset. Finally, USGS data previously excluded from the NGDB, either because they predate the earliest USGS geochemical databases or for programmatic reasons, have been included in the CCAP Geochemical Database and are planned to be added to the NGDB.
A Framework For Fault Tolerance In Virtualized Servers
2016-06-01
…effects into the system. Decrease in performance, expansion in the total system size and weight, and a hike in the system cost can be counted in… benefit also shines out in terms of reliability. … How Data Guard synchronizes standby databases: primary and standby databases in Oracle Data…
Angelow, Aniela; Schmidt, Matthias; Weitmann, Kerstin; Schwedler, Susanne; Vogt, Hannes; Havemann, Christoph; Hoffmann, Wolfgang
2008-07-01
In our report we describe the concept, strategies and implementation of a central biosample and data management (CSDM) system in the three-centre clinical study of the Transregional Collaborative Research Centre "Inflammatory Cardiomyopathy - Molecular Pathogenesis and Therapy" SFB/TR 19, Germany. To meet the requirements of high system resource availability, data security, privacy protection and quality assurance, a web-based CSDM was developed based on Java 2 Enterprise Edition using an Oracle database. An efficient and reliable sample documentation system using bar-code labelling, a partitioning storage algorithm and online documentation software was implemented. An online electronic case report form is used to acquire patient-related data. Strict rules for access to the online applications and secure connections account for privacy protection and data security. Challenges for the implementation of the CSDM arose at the project, technical and organisational levels as well as at the staff level.
NASA Astrophysics Data System (ADS)
Sargis, J. C.; Gray, W. A.
1999-03-01
The APWS allows user-friendly access to several legacy systems, each of which would normally demand domain expertise for proper utilization. The generalized model, including objects, classes, strategies and patterns, is presented. The core components of the APWS are the Microsoft Windows 95 operating system, Oracle, Oracle Power Objects, artificial intelligence tools, a medical hyperlibrary and a web site. The paper includes a discussion of how such access could be automated by taking advantage of the expert system, object-oriented programming and intelligent relational database tools within the APWS.
Database Entity Persistence with Hibernate for the Network Connectivity Analysis Model
2014-04-01
…time savings in the Java coding development process. Appendices A and B describe setup procedures for installing the MySQL database… development environment is required: • the open-source MySQL Database Management System (DBMS) from Oracle, which is a Java Database Connectivity (JDBC)-compliant DBMS • the MySQL JDBC Driver library that comes as a plug-in with the NetBeans distribution • the latest Java Development Kit with the latest…
Selecting a Relational Database Management System for Library Automation Systems.
ERIC Educational Resources Information Center
Shekhel, Alex; O'Brien, Mike
1989-01-01
Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)
Service Management Database for DSN Equipment
NASA Technical Reports Server (NTRS)
Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Wolgast, Paul; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed
2009-01-01
This data- and event-driven persistent storage system leverages the use of commercial software provided by Oracle for portability, ease of maintenance, scalability, and ease of integration with embedded, client-server, and multi-tiered applications. In this role, the Service Management Database (SMDB) is a key component of the overall end-to-end process involved in the scheduling, preparation, and configuration of the Deep Space Network (DSN) equipment needed to perform the various telecommunication services the DSN provides to its customers worldwide. SMDB makes efficient use of triggers, stored procedures, queuing functions, e-mail capabilities, data management, and Java integration features provided by the Oracle relational database management system. SMDB uses a third normal form schema design that allows for simple data maintenance procedures and thin layers of integration with client applications. The software provides an integrated event logging system with the ability to publish events to a JMS messaging system for synchronous and asynchronous delivery to subscribed applications. It provides a structured classification of events and application-level messages stored in database tables that are accessible by monitoring applications for real-time monitoring or for troubleshooting and analysis over historical archives.
High Capacity Single Table Performance Design Using Partitioning in Oracle or PostgreSQL
2012-03-01
…Indicators (KPIs)… Conclusion… List of Symbols, Abbreviations, and Acronyms… Distribution List… List of Figures… Figure 1. Oracle… Figure 7. Time to seek and return one record. Additional Key Performance Indicators (KPIs): in addition to pure response time, there are other… Laboratory; ASM: Automatic Storage Management; CPU: central processing unit; I/O: input/output; KPIs: key performance indicators; OS: operating system
Meta-All: a system for managing metabolic pathway information.
Weise, Stephan; Grosse, Ivo; Klukas, Christian; Koschützki, Dirk; Scholz, Uwe; Schreiber, Falk; Junker, Björn H
2006-10-23
Many attempts are being made to understand biological subjects at a systems level. A major resource for these approaches is biological databases, storing manifold information about DNA, RNA and protein sequences including their functional and structural motifs, molecular markers, mRNA expression levels, metabolite concentrations, protein-protein interactions, phenotypic traits or taxonomic relationships. The use of these databases is often hampered by the fact that they are designed for special application areas and thus lack universality. Databases on metabolic pathways, which provide an increasingly important foundation for many analyses of biochemical processes at a systems level, are no exception to the rule. Data stored in central databases such as KEGG, BRENDA or SABIO-RK is often limited to read-only access. If experimentalists want to store their own data, possibly still under investigation, there are two possibilities. They can either develop their own information system for managing that data, which is very time-consuming and costly, or they can try to store their data in existing systems, which is often restricted. Hence, an out-of-the-box information system for managing metabolic pathway data is needed. We have designed META-ALL, an information system that allows the management of metabolic pathways, including reaction kinetics, detailed locations, environmental factors and taxonomic information. Data can be stored together with quality tags and in different parallel versions. META-ALL uses Oracle DBMS and Oracle Application Express. We provide the META-ALL information system for download and use. In this paper, we describe the database structure and give information about the tools for submitting and accessing the data. As a first application of META-ALL, we show how the information contained in a detailed kinetic model can be stored and accessed. META-ALL is a system for managing information about metabolic pathways. It facilitates the handling of pathway-related data and is designed to help biochemists and molecular biologists in their daily research. It is available on the Web at http://bic-gh.de/meta-all and can be downloaded free of charge and installed locally.
Meta-All: a system for managing metabolic pathway information
Weise, Stephan; Grosse, Ivo; Klukas, Christian; Koschützki, Dirk; Scholz, Uwe; Schreiber, Falk; Junker, Björn H
2006-01-01
Background Many attempts are being made to understand biological subjects at a systems level. A major resource for these approaches is biological databases, storing manifold information about DNA, RNA and protein sequences including their functional and structural motifs, molecular markers, mRNA expression levels, metabolite concentrations, protein-protein interactions, phenotypic traits or taxonomic relationships. The use of these databases is often hampered by the fact that they are designed for special application areas and thus lack universality. Databases on metabolic pathways, which provide an increasingly important foundation for many analyses of biochemical processes at a systems level, are no exception to the rule. Data stored in central databases such as KEGG, BRENDA or SABIO-RK is often limited to read-only access. If experimentalists want to store their own data, possibly still under investigation, there are two possibilities. They can either develop their own information system for managing that data, which is very time-consuming and costly, or they can try to store their data in existing systems, which is often restricted. Hence, an out-of-the-box information system for managing metabolic pathway data is needed. Results We have designed META-ALL, an information system that allows the management of metabolic pathways, including reaction kinetics, detailed locations, environmental factors and taxonomic information. Data can be stored together with quality tags and in different parallel versions. META-ALL uses Oracle DBMS and Oracle Application Express. We provide the META-ALL information system for download and use. In this paper, we describe the database structure and give information about the tools for submitting and accessing the data. As a first application of META-ALL, we show how the information contained in a detailed kinetic model can be stored and accessed. Conclusion META-ALL is a system for managing information about metabolic pathways. It facilitates the handling of pathway-related data and is designed to help biochemists and molecular biologists in their daily research. It is available on the Web at http://bic-gh.de/meta-all and can be downloaded free of charge and installed locally. PMID:17059592
Multi-Resolution Playback of Network Trace Files
2015-06-01
…a complete MySQL database, C++ developer tools and the libraries utilized in the development of the system (Boost and Libcrafter), and Wireshark… XE suite has a limit to the allowed size of each database. In order to be scalable, the project had to switch to the MySQL database suite. The… programs that access the database use the MySQL C++ connector, provided by Oracle, and the supplied methods and libraries.
Big data mining: In-database Oracle data mining over hadoop
NASA Astrophysics Data System (ADS)
Kovacheva, Zlatinka; Naydenova, Ina; Kaloyanova, Kalinka; Markov, Krasimir
2017-07-01
Big data challenges different aspects of storing, processing and managing data, as well as analyzing and using data for business purposes. Applying data mining methods over big data is a further challenge because of the huge data volumes, the variety of information, and the dynamics of the sources. Various applications have been built in this area, but their successful usage depends on understanding many specific parameters. In this paper we present several opportunities for using data mining techniques provided by the analytical engine of the Oracle RDBMS over data stored in the Hadoop Distributed File System (HDFS). Some experimental results are given and discussed.
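One concrete way to wire Oracle's in-database analytics to HDFS-resident data is an external table defined through the ORACLE_HDFS access driver (part of Oracle Big Data SQL). The sketch below is illustrative only and is not taken from the paper; the connection string, directory object, HDFS path and column layout are all hypothetical.

```python
# Hedged sketch (not from the paper): exposing an HDFS-resident CSV file to
# Oracle as an external table via the ORACLE_HDFS access driver, so that the
# in-database analytical engine can query it. All names are hypothetical.
import cx_Oracle  # classic Oracle driver; python-oracledb is its successor

ddl = """
CREATE TABLE sensor_readings_ext (
    sensor_id     NUMBER,
    reading_ts    VARCHAR2(32),
    reading_value NUMBER
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_HDFS
    DEFAULT DIRECTORY default_dir
    LOCATION ('hdfs:/data/sensors/readings.csv')
)
REJECT LIMIT UNLIMITED
"""

with cx_Oracle.connect("scott/tiger@dbhost/orclpdb") as conn:
    cur = conn.cursor()
    cur.execute(ddl)
    # Once defined, the HDFS data is queryable like any other table,
    # and hence visible to the RDBMS's analytical/mining functions:
    cur.execute("SELECT COUNT(*) FROM sensor_readings_ext")
    print(cur.fetchone()[0])
```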
Experience in running relational databases on clustered storage
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, Ruben; Potocky, Miroslav
2015-12-01
For the past eight years, the CERN IT Database group has based its backend storage on a NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services, such as Database on Demand [1] or the CERN Oracle backup and recovery service. It also outlines a possible trend of evolution that storage for databases could follow.
Sequential data access with Oracle and Hadoop: a performance comparison
NASA Astrophysics Data System (ADS)
Baranowski, Zbigniew; Canali, Luca; Grancher, Eric
2014-06-01
The Hadoop framework has proven to be an effective and popular approach for dealing with "Big Data" and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these projects deliver in practice? Does migrating to Hadoop's "shared nothing" architecture really improve data access throughput? And, if so, at what cost? The authors answer these questions, addressing cost/performance as well as raw performance, based on a performance comparison between an Oracle-based relational database and Hadoop's distributed solutions such as MapReduce or HBase for sequential data access. A key feature of our approach is the use of an unbiased data model, as certain data models can significantly favour one of the technologies tested.
1989-03-01
…deficiencies of the present system. Specific objectives critical in developing an understanding of the system are: "Identify all knowledge workers who use…" …to be negligible. The limited training requirements are primarily due to the users' existing knowledge of the Oracle database system, the users…
2011-09-01
rate Python’s maturity as “High.” Python is nine years old and has been continuously been developed and enhanced since then. During fiscal year 2010...We rate Python’s developer toolkit availability/extensibility as “Yes.” Python runs on a SQL database and is 64 compatible with Oracle database...MODEL...........................................................................11 D. GOAL DEVELOPMENT
Waste Information Management System v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bustamante, David G.; Schade, A. Carl
WIMS is a functional interface to an Oracle database for managing the required regulatory information about the handling of Hazardous Waste. WIMS does not have a component to track Radiological Waste data, nor does it have the ability to manage sensitive information.
2004-03-01
…with MySQL. This choice was made because MySQL is open source. Any significant database engine such as Oracle or MS-SQL or even MS Access can be used… Figure 6. The DoD vs. commercial life cycle… necessarily be interested in SCADA network security. 13. MySQL (database server): this station represents a typical data server for a web page…
A Data Warehouse to Support Condition Based Maintenance (CBM)
2005-05-01
…Application (VBA) code sequence to import the original MAST-generated CSV and then create a single output table in DBASE IV format. The DBASE IV format… database architecture (Oracle, Sybase, MS-SQL, etc.). This design includes table definitions, comments, specification of table attributes, primary and foreign… built queries and applications. Needs the application developers to construct data views. No SQL programming experience. b. Power Database User: knows…
2016-03-01
…science; IT: information technology; JBOD: just a bunch of disks; JDBC: Java database connectivity; JPME: Joint Professional Military Education; JSO… Joint Service Officer; JVM: Java virtual machine; MPP: massively parallel processing; MPTE: Manpower, Personnel, Training, and Education; NAVMAC: Navy… external database, whether it is MySQL, Oracle, DB2, or SQL Server (Teller, 2015). Connectors optimize the data transfer by obtaining metadata…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsh, Amber; Harsch, Tim; Pitt, Julie
2007-08-31
The computer side of the IMAGE project consists of a collection of Perl scripts that perform a variety of tasks; scripts are available to insert, update and delete data from the underlying Oracle database, download data from NCBI's GenBank and other sources, and generate data files for download by interested parties. Web scripts make up the tracking interface, as well as various tools available on the project web site (image.llnl.gov) that provide a search interface to the database.
Review of Spatial-Database System Usability: Recommendations for the ADDNS Project
2007-12-01
…basic GIS background information, with a closer look at spatial databases. A GIS is also a computer-based system designed to capture, manage… foundation for deploying enterprise-wide spatial information systems. According to Oracle® [18], it enables accurate delivery of location-based services…
Integration and management of massive remote-sensing data based on GeoSOT subdivision model
NASA Astrophysics Data System (ADS)
Li, Shuang; Cheng, Chengqi; Chen, Bo; Meng, Li
2016-07-01
Owing to the rapid development of earth observation technology, the volume of spatial information is growing rapidly; therefore, improving query and retrieval speed over large, rich data sources is quite urgent for remote-sensing data management systems. A global subdivision model, the geographic coordinate subdivision grid with one-dimension integer coding on a 2ⁿ-tree (GeoSOT), which we propose as a solution, has been used in data management organizations. However, because a spatial object may cover several grids, ample data redundancy will occur when data are stored in relational databases. To solve this redundancy problem, we first combined the subdivision model with a spatial array database containing an inverted index. We propose an improved approach for integrating and managing massive remote-sensing data. By adding a spatial code column in an array format to a database, spatial information in remote-sensing metadata can be stored and logically subdivided. We implemented our method in a Kingbase Enterprise Server database system and compared the results with the Oracle platform by simulating worldwide image data. Experimental results showed that our approach performed better than Oracle in terms of data integration and time and space efficiency. Our approach also offers an efficient storage management system for existing storage centers and management systems.
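As a rough illustration of the inverted-index idea (not the authors' code), the sketch below maps grid-cell codes to the scenes that cover them, so a region query becomes a union of small per-cell lookups rather than a scan over per-scene code arrays; the class, code strings and scene identifiers are all invented.

```python
# Illustrative sketch: an inverted index from grid-cell codes to the
# remote-sensing scenes covering them. A scene spanning several cells
# appears once per cell, so querying a region is a union of lookups.
from collections import defaultdict

class GridIndex:
    def __init__(self):
        self._index = defaultdict(set)  # grid code -> set of scene ids

    def add_scene(self, scene_id, grid_codes):
        for code in grid_codes:
            self._index[code].add(scene_id)

    def query(self, grid_codes):
        """Return ids of all scenes covering any of the requested cells."""
        hits = set()
        for code in grid_codes:
            hits |= self._index.get(code, set())
        return hits

index = GridIndex()
index.add_scene("scene-001", ["G001023", "G001024"])  # hypothetical codes
index.add_scene("scene-002", ["G001024", "G001025"])
print(index.query(["G001024"]))  # -> {'scene-001', 'scene-002'}
```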
Secure quantum private information retrieval using phase-encoded queries
NASA Astrophysics Data System (ADS)
Olejnik, Lukasz
2011-08-01
We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that offers the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (inability for a querying user to steal its contents). Compared to classical solutions, the protocol offers substantial improvement in terms of communication complexity. In comparison with the recent quantum private queries [Phys. Rev. Lett. 100, 230502 (2008)] protocol, it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol and analyze its strengths and conclude that using this technique makes it challenging to obtain the unconditional (in the information-theoretic sense) privacy degree; nevertheless, in addition to being simple, the protocol still offers a privacy level. The oracle used in the protocol is inspired both by the classical computational PIR solutions as well as the Deutsch-Jozsa oracle.
Secure quantum private information retrieval using phase-encoded queries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olejnik, Lukasz
We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that offers the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (inability for a querying user to steal its contents). Compared to classical solutions, the protocol offers substantial improvement in terms of communication complexity. In comparison with the recent quantum private queries [Phys. Rev. Lett. 100, 230502 (2008)] protocol, it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol and analyze its strengths and conclude that using this technique makes it challenging to obtain the unconditional (in the information-theoretic sense) privacy degree; nevertheless, in addition to being simple, the protocol still offers a privacy level. The oracle used in the protocol is inspired both by the classical computational PIR solutions as well as the Deutsch-Jozsa oracle.
Selecting the Right Courseware for Your Online Learning Program.
ERIC Educational Resources Information Center
O'Mara, Heather
2000-01-01
Presents criteria for selecting courseware for online classes. Highlights include ease of use, including navigation; assessment tools; advantages of Java-enabled courseware; advantages of Oracle databases, including scalability; future possibilities for multimedia technology; and open architecture that will integrate with other systems. (LRW)
NASA Astrophysics Data System (ADS)
Ivankovic, D.; Dadic, V.
2009-04-01
Some oceanographic parameters have to be inserted into the database manually; others (for example, data from a CTD probe) are inserted from various files. All these parameters require visualization, validation and manipulation from the research vessel or the scientific institution, as well as public presentation. For these purposes a web-based system was developed, containing dynamic SQL procedures and Java applets. The technology background is an Oracle 10g relational database and Oracle Application Server. Web interfaces are developed using PL/SQL stored database procedures (mod PL/SQL). Additional parts for data visualization include Java applets and JavaScript. The mapping tool is the Google Maps API (JavaScript), with a Java applet as an alternative. The graph is realized as a dynamically generated web page containing a Java applet. The mapping tool and the graph are georeferenced: a click on a part of the graph automatically initiates a zoom or places a marker at the location where the parameter was measured. This feature is very useful for data validation. The code for data manipulation and visualization is partially realized with dynamic SQL, which allows us to separate the data definition from the code for data manipulation. Adding a new parameter to the system requires only a data definition and description, without programming an interface for this kind of data.
The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform
NASA Astrophysics Data System (ADS)
Xie, Qingyun
2016-06-01
This paper summarizes the general requirements and specific characteristics of both a geospatial raster database management system and a raster data processing platform, from a domain-specific perspective as well as from a computing point of view. It also discusses the need for tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global-scale, high-performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. As a database management system, GeoRaster defines an integrated raster data model and supports image compression, data manipulation, general and spatial indices, content- and context-based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high-performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.
Vector and Raster Data Storage Based on Morton Code
NASA Astrophysics Data System (ADS)
Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.
2018-05-01
Even though geomatics is well developed nowadays, the integration of spatial data in vector and raster formats is still a very tricky problem in geographic information system environments, and there is still no proper way to solve it. This article proposes a method for jointly interpreting vector and raster data. We saved the image data and building vector data of Guilin University of Technology to an Oracle database. We then used the ADO interface to connect the database to Visual C++ and, in the Visual C++ environment, converted the row and column numbers of the raster data and the X/Y coordinates of the vector data to Morton codes. This method stores vector and raster data in an Oracle database and uses Morton codes, instead of row/column numbers and X/Y coordinates, to mark the position information of the vector and raster data. Marking geographic information with Morton codes lets the stored data make full use of the storage space, makes simultaneous analysis of vector and raster data more efficient, and makes visualization of vector and raster data more intuitive. This method is very helpful in situations that require analysing or displaying vector and raster data at the same time.
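For reference, the Morton (Z-order) code the article relies on interleaves the bits of a row/column pair (or a quantized X/Y pair) into a single integer key, so cells that are close in 2-D space tend to receive nearby codes. A minimal sketch, not the authors' implementation:

```python
# Morton (Z-order) encoding: interleave the bits of (row, col) into one
# integer usable as a single database column value, and the inverse.
def morton_encode(row: int, col: int, bits: int = 16) -> int:
    code = 0
    for i in range(bits):
        code |= ((row >> i) & 1) << (2 * i + 1)  # odd bit positions: row
        code |= ((col >> i) & 1) << (2 * i)      # even bit positions: col
    return code

def morton_decode(code: int, bits: int = 16):
    row = col = 0
    for i in range(bits):
        row |= ((code >> (2 * i + 1)) & 1) << i
        col |= ((code >> (2 * i)) & 1) << i
    return row, col

assert morton_decode(morton_encode(5, 9)) == (5, 9)
print(morton_encode(5, 9))  # -> 99, a single key replacing the (row, col) pair
```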
2004-06-01
…remote databases, has seen little vendor acceptance. Each database (Oracle, DB2, MySQL, etc.) has its own client-server protocol. Therefore each… existing standards – SQL, X.500/LDAP, FTP, etc. • View information dissemination as selective replication – state-oriented vs. message-oriented… allowing the application to start. The resource management system would serve as a broker to the resources, making sure that resources are not…
Acoustic Metadata Management and Transparent Access to Networked Oceanographic Data Sets
2013-09-30
…connectivity (ODBC)-compliant data source for which drivers are available (e.g. MySQL, Oracle Database, Postgres) can now be imported. Implementation… the possibility of speeding data transmission through compression (implemented) or the potential to use alternative data formats such as JavaScript…
MitoNuc: a database of nuclear genes coding for mitochondrial proteins. Update 2002.
Attimonelli, Marcella; Catalano, Domenico; Gissi, Carmela; Grillo, Giorgio; Licciulli, Flavio; Liuni, Sabino; Santamaria, Monica; Pesole, Graziano; Saccone, Cecilia
2002-01-01
Mitochondria, besides their central role in energy metabolism, have recently been found to be involved in a number of basic processes of cell life and to contribute to the pathogenesis of many degenerative diseases. All functions of mitochondria depend on the interaction of the nuclear and organelle genomes. Mitochondrial genomes have been extensively sequenced and analysed, and the data have been collected in several specialised databases. In order to collect information on nuclear-coded mitochondrial proteins, we developed MitoNuc, a database containing detailed information on sequenced nuclear genes coding for mitochondrial proteins in Metazoa. The MitoNuc database can be retrieved through SRS and is available via the web site http://bighost.area.ba.cnr.it/mitochondriome, where other mitochondrial databases developed by our group, the complete list of the sequenced mitochondrial genomes, links to other mitochondrial sites, and related information are available. The MitoAln database, related to MitoNuc in the previous release and reporting the multiple alignments of the relevant homologous protein-coding regions, is no longer supported in the present release. In order to keep the links among MitoNuc entries from homologous proteins, a new field has been defined in the database: the cluster identifier, an alphanumeric code used to identify each cluster of homologous proteins. A comment field derived from the corresponding SWISS-PROT entry has been introduced; this reports clinical data related to dysfunction of the protein. The logical schema of the MitoNuc database has been implemented in the ORACLE DBMS. This will allow end-users to retrieve data through a friendly interface that will soon be implemented.
Marlow, Neil; Bower, Hannah; Jones, David; Brocklehurst, Peter; Kenyon, Sara; Pike, Katie; Taylor, David; Salt, Alison
2017-01-01
Background Antibiotics used for women in spontaneous preterm labour without overt infection, in contrast to those with preterm rupture of membranes, are associated with altered functional outcomes in their children. Methods From the National Pupil Database, we used Key Stage 2 scores, national test scores in school year 6 at 11 years of age, to explore the hypothesis that erythromycin and co-amoxiclav were associated with poorer educational outcomes within the ORACLE Children Study. Results Anonymised scores for 97% of surviving children born to mothers recruited to ORACLE and resident in England were analysed against treatment group adjusting for key available socio-demographic potential confounders. No association with crude or with adjusted scores for English, mathematics or science was observed by maternal antibiotic group in either women with preterm rupture of membranes or spontaneous preterm labour with intact membranes. While the proportion receiving special educational needs was similar in each group (range 31.6–34.4%), it was higher than the national rate of 19%. Conclusions Despite evidence that antibiotics are associated with increased functional impairment at 7 years, educational test scores and special needs at 11 years of age show no differences between trial groups. Trial registration number ISCRT Number 52995660 (original ORACLE trial number). PMID:27515985
The importance of having an appropriate relational data segmentation in ATLAS
NASA Astrophysics Data System (ADS)
Dimitrov, G.
2015-12-01
In this paper we describe specific technical solutions put in place in various database applications of the ATLAS experiment at the LHC, where we make use of several partitioning techniques available in Oracle 11g. On top of the broadly used range partitioning and its option of automatic interval partitioning, we add our own logic in PL/SQL procedures and scheduler jobs to sustain sliding data windows in order to enforce various data retention policies. We also make use of the new Oracle 11g reference partitioning in the Nightly Build System to achieve uniform data segmentation. However, the most challenging task was to segment the data of the new ATLAS Distributed Data Management system (Rucio), which resulted in tens of thousands of list-type partitions and sub-partitions. Partition and sub-partition management, index strategy, statistics gathering and query execution plan stability are important factors when choosing an appropriate physical model for the application data management. The knowledge accumulated so far, together with an analysis of the new Oracle 12c features that could be beneficial, will be shared with the audience.
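To make the interval-partitioning and sliding-window combination concrete, here is a hedged sketch expressed as Oracle DDL and dictionary queries issued from Python. Table, column and connection names are hypothetical, and the real retention logic lives in PL/SQL scheduler jobs rather than client code.

```python
# Hedged sketch: an interval-partitioned table (Oracle 11g) plus the skeleton
# of a retention sweep. All names are invented; not the ATLAS code.
import cx_Oracle

create_ddl = """
CREATE TABLE job_events (
    event_time TIMESTAMP NOT NULL,
    payload    VARCHAR2(4000)
)
PARTITION BY RANGE (event_time)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(PARTITION p0 VALUES LESS THAN (TIMESTAMP '2015-01-01 00:00:00'))
"""

def list_partitions(cur, table):
    """A sliding-window job would parse each partition's HIGH_VALUE, compare
    it to SYSDATE minus the retention window, and issue
    ALTER TABLE ... DROP PARTITION for partitions outside the window."""
    cur.execute(
        "SELECT partition_name FROM user_tab_partitions WHERE table_name = :t",
        t=table.upper(),
    )
    return [name for (name,) in cur.fetchall()]

with cx_Oracle.connect("scott/tiger@dbhost/orclpdb") as conn:
    cur = conn.cursor()
    cur.execute(create_ddl)  # Oracle then creates daily partitions on demand
    print(list_partitions(cur, "job_events"))
```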
Assembling proteomics data as a prerequisite for the analysis of large scale experiments
Schmidt, Frank; Schmid, Monika; Thiede, Bernd; Pleißner, Klaus-Peter; Böhme, Martina; Jungblut, Peter R
2009-01-01
Background Despite the complete determination of the genome sequence of a huge number of bacteria, their proteomes remain relatively poorly defined. Besides new methods to increase the number of identified proteins, new database applications are necessary to store and present the results of large-scale proteomics experiments. Results In the present study, a database concept has been developed to address these issues and to offer complete information via a web interface. In our concept, the Oracle-based data repository system SQL-LIMS plays the central role in the proteomics workflow and was applied to the proteomes of Mycobacterium tuberculosis, Helicobacter pylori, Salmonella typhimurium and protein complexes such as the 20S proteasome. Technical operations of our proteomics labs were used as the standard for SQL-LIMS template creation. By means of a Java-based data parser, post-processed data of different approaches, such as LC/ESI-MS, MALDI-MS and 2-D gel electrophoresis (2-DE), were stored in SQL-LIMS. A minimum set of the proteomics data was transferred to our public 2D-PAGE database using a Java-based interface (Data Transfer Tool) meeting the requirements of the PEDRo standardization. Furthermore, the stored proteomics data were extractable from SQL-LIMS via XML. Conclusion The Oracle-based data repository system SQL-LIMS played the central role in the proteomics workflow concept. Technical operations of our proteomics labs were used as standards for SQL-LIMS templates. Using a Java-based parser, post-processed data of different approaches such as LC/ESI-MS, MALDI-MS, 1-DE and 2-DE were stored in SQL-LIMS. Thus, the distinct data formats of different instruments were unified and stored in SQL-LIMS tables. Moreover, a unique submission identifier allowed fast access to all experimental data. This was the main advantage compared to multi-software solutions, especially where personnel fluctuation is high. Moreover, large-scale and high-throughput experiments must be managed in a comprehensive repository system such as SQL-LIMS, so that results can be queried in a systematic manner. On the other hand, such database systems are expensive and require at least one full-time administrator and a specialized lab manager. Moreover, the rapid technical evolution in proteomics may cause problems in adjusting to new data formats. To summarize, SQL-LIMS met the requirements of proteomics data handling, especially in skilled processes such as gel electrophoresis or mass spectrometry, and fulfilled the PSI standardization criteria. The data transfer into a public domain via DTT facilitated validation of the proteomics data. Additionally, evaluation of mass spectra by post-processing using MS-Screener improved the reliability of mass analysis and prevented the storage of data junk. PMID:19166578
1992-03-01
…compile time, ensuring that operations conducted are appropriate for the object type. Each implementation requires a database known as the program… Finnish bank being developed by Nokia • oil-drilling control system managed by Sedco-Forex • Vigile, an industrial installation supervisor project by… user interface and Oracle database backend control. The software is being developed in Ada under DOD-STD-2167 under OS/2.
Construction of a Linux based chemical and biological information system.
Molnár, László; Vágó, István; Fehér, András
2003-01-01
A chemical and biological information system with an easy-to-use Web-based interface and corresponding databases has been developed. The constructed system incorporates all chemical, numerical and textual data related to the chemical compounds, including numerical biological screening results. Users can search the database by traditional textual/numerical queries and/or substructure or similarity queries through the web interface. To build our chemical database management system, we utilized existing IT components such as ORACLE or Tripos SYBYL for database management and the Zope application server for the web interface. We chose Linux as the main platform; however, almost every component can be used under various operating systems.
An SQL query generator for CLIPS
NASA Technical Reports Server (NTRS)
Snyder, James; Chirica, Laurian
1990-01-01
As expert systems become more widely used, their access to large amounts of external information becomes increasingly important. This information exists in several forms, such as statistical or tabular data, knowledge gained by experts, and large databases of information maintained by companies. Because many expert systems, including CLIPS, do not provide access to this external information, much of the usefulness of expert systems is left untapped. The scope of this paper is to describe a database extension for the CLIPS expert system shell. The current industry-standard database language is SQL. Owing to SQL standardization, large amounts of information stored on various computers, potentially at different locations, become more easily accessible. Expert systems should be able to directly access these existing databases rather than requiring information to be re-entered into the expert system environment. The ORACLE relational database management system (RDBMS) was used to provide a database connection within the CLIPS environment. To facilitate relational database access, a query generation system was developed as a CLIPS user function. Queries are entered in a CLIPS-like syntax and passed to the query generator, which constructs an SQL query and submits it for execution to the ORACLE RDBMS. The query results are asserted as CLIPS facts. The query generator was developed primarily for use within the ICADS project (Intelligent Computer Aided Design System) currently being developed by the CAD Research Unit at California Polytechnic State University (Cal Poly). In ICADS, several parallel or distributed expert systems access a common knowledge base of facts. Each expert system has a narrow domain of interest and therefore needs only certain portions of the information. The query generator provides a common method of accessing this information and allows the expert system to specify what data is needed without specifying how to retrieve it.
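The core idea of such a generator, declarative constraints in, SQL out, can be illustrated in a few lines. This sketch is not the paper's C implementation; the function name, table and column names are invented, and the pattern syntax is simplified to a dictionary.

```python
# Illustrative sketch of a query generator: turn an attribute/value pattern
# (the role the CLIPS-like syntax plays) into an SQL SELECT plus bind values.
def build_query(table, constraints):
    """constraints: dict of column -> literal value, or None for 'any'."""
    conds = [f"{col} = :{col}" for col, val in constraints.items() if val is not None]
    where = (" WHERE " + " AND ".join(conds)) if conds else ""
    binds = {col: val for col, val in constraints.items() if val is not None}
    return f"SELECT * FROM {table}{where}", binds

sql, binds = build_query("rooms", {"floor": 2, "usage": "office", "area": None})
print(sql)    # SELECT * FROM rooms WHERE floor = :floor AND usage = :usage
print(binds)  # {'floor': 2, 'usage': 'office'}
# In the described system, each returned row would then be asserted
# back into the expert system as a CLIPS fact.
```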
NASA Technical Reports Server (NTRS)
Connell, Andrea M.
2011-01-01
The Deep Space Network (DSN) has three communication facilities which handle telemetry, commands, and other data relating to spacecraft missions. The network requires these three sites to share data with each other and with the Jet Propulsion Laboratory for processing and distribution. Many database management systems have replication capabilities built in, which means that data updates made at one location will be automatically propagated to other locations. This project examines multiple replication solutions, evaluating stability, automation, flexibility, performance, and cost. After comparing these features, Oracle Streams is chosen for closer analysis. Two Streams environments are configured: one with a Master/Slave architecture, in which a single server is the source for all data updates, and the second with a Multi-Master architecture, in which updates originating from any of the servers will be propagated to all of the others. These environments are tested for data type support, conflict resolution, performance, changes to the data structure, and behavior during and after network or server outages. Through this experimentation, it is determined which requirements of the DSN can be met by Oracle Streams and which cannot.
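Conflict resolution is the distinctive Multi-Master problem: two sites may commit different updates to the same row before either change propagates. Oracle Streams ships prebuilt update-conflict handlers such as MAXIMUM on a timestamp column, i.e. latest-commit-wins; the sketch below models only that policy in plain Python, with all names invented and no claim about the DSN configuration.

```python
# Conceptual model of latest-timestamp-wins conflict resolution in a
# multi-master topology. Invented names; not Oracle Streams code.
from dataclasses import dataclass

@dataclass
class Update:
    site: str
    key: int          # primary key of the conflicting row
    value: str
    commit_ts: float  # commit time at the originating site

def resolve(conflicting):
    """Keep the most recently committed update for the row."""
    return max(conflicting, key=lambda u: u.commit_ts)

a = Update("site-a", 42, "status=tracking", commit_ts=100.0)
b = Update("site-b", 42, "status=idle", commit_ts=101.5)
print(resolve([a, b]).value)  # -> status=idle
```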
NASA Astrophysics Data System (ADS)
Karpov, A. V.; Yumagulov, E. Z.
2003-05-01
We have restored and ordered the archive of meteor observations carried out with the meteor radar complex "KGU-M5" since 1986. A relational database has been formed under the control of the Oracle 8 database management system (DBMS). We also improved and tested a statistical method for studying the fine spatial structure of meteor streams, with allowance for the specific features of the DBMS application. Statistical analysis of the results of observations made it possible to obtain information about the substance distribution in the Quadrantid, Geminid, and Perseid meteor streams.
Efficiently Distributing Component-based Applications Across Wide-Area Environments
2002-01-01
…a variety of sophisticated network-accessible services such as e-mail, banking, on-line shopping, entertainment, and serving as a data exchange… product database; Customer serves as a façade to Order and Account; stateful session beans: ShoppingCart maintains the list of items to be bought by a customer… Pet Store tests; and JBoss 3.0.3 with Jetty 4.1.0, for the RUBiS tests) and a single database server (Oracle 8.1.7 Enterprise Edition), each running…
12-digit Hydrologic Units (HUCs) for EPA Region 2 and surrounding states (northeastern states, parts of the Great Lakes, Puerto Rico and the USVI), downloaded from the Natural Resources Conservation Service (NRCS) Geospatial Gateway and imported into the EPA Region 2 Oracle/SDE database. This layer reflects 2009 updates to the national Watershed Boundary Database (WBD) that included new boundary data for New York and New Jersey.
Design and implementation of a biomedical image database (BDIM).
Aubry, F; Badaoui, S; Kaplan, H; Di Paola, R
1988-01-01
We developed a biomedical image database (BDIM) that proposes a standardized representation of value arrays, such as images and curves, and of their associated parameters, independently of their acquisition mode, to make their transmission and processing easier. It includes three kinds of user-oriented interactions. The network concept was kept as a constraint so that the BDIM could be incorporated into a distributed structure, and we maintained compatibility with the ACR/NEMA communication protocol. The management of arrays and their associated parameters involves two distinct object bases, linked together via a gateway. The first one manages arrays according to their storage mode: long-term storage on optionally on-line mass storage devices and, for consultations, partial copies of long-term stored arrays on hard disk. The second one manages the associated parameters and the gateway by means of the relational DBMS ORACLE. Parameters are grouped into relations. Some of them are in agreement with groups defined by the ACR/NEMA. The other relations describe objects resulting from processing of the initial objects. These new objects are not described by the ACR/NEMA, but they can be inserted as shadow groups of the ACR/NEMA description. The relations describing the storage, and their pathnames, constitute the gateway. ORACLE distributed tools and the two-level storage technique will allow the integration of the BDIM into a distributed structure. The query and array retrieval module (for single arrays or sequences) has access to the relations via a layer that includes a dictionary managed by ORACLE. This dictionary translates ACR/NEMA objects into objects that can be handled by the DBMS. (ABSTRACT TRUNCATED AT 250 WORDS)
Marlow, Neil; Bower, Hannah; Jones, David; Brocklehurst, Peter; Kenyon, Sara; Pike, Katie; Taylor, David; Salt, Alison
2017-03-01
Antibiotics used for women in spontaneous preterm labour without overt infection, in contrast to those with preterm rupture of membranes, are associated with altered functional outcomes in their children. From the National Pupil Database, we used Key Stage 2 scores, national test scores in school year 6 at 11 years of age, to explore the hypothesis that erythromycin and co-amoxiclav were associated with poorer educational outcomes within the ORACLE Children Study. Anonymised scores for 97% of surviving children born to mothers recruited to ORACLE and resident in England were analysed against treatment group adjusting for key available socio-demographic potential confounders. No association with crude or with adjusted scores for English, mathematics or science was observed by maternal antibiotic group in either women with preterm rupture of membranes or spontaneous preterm labour with intact membranes. While the proportion receiving special educational needs was similar in each group (range 31.6-34.4%), it was higher than the national rate of 19%. Despite evidence that antibiotics are associated with increased functional impairment at 7 years, educational test scores and special needs at 11 years of age show no differences between trial groups. ISCRT Number 52995660 (original ORACLE trial number).
Incorporating Oracle on-line space management with long-term archival technology
NASA Technical Reports Server (NTRS)
Moran, Steven M.; Zak, Victor J.
1996-01-01
The storage requirements of today's organizations are exploding. As computers continue to escalate in processing power, applications grow in complexity and data files grow in size and in number. As a result, organizations are forced to procure more and more megabytes of storage space. This paper focuses on how to expand the storage capacity of a Very Large Database (VLDB) cost-effectively within an Oracle7 data warehouse system by integrating long-term archival storage sub-systems with traditional magnetic media. The Oracle architecture described in this paper was based on an actual proof of concept for a customer looking to store archived data on optical disks yet still have access to this data without user intervention. The customer had a requirement to maintain 10 years' worth of data on-line. Data less than a year old still had the potential to be updated and thus will reside on conventional magnetic disks. Data older than a year will be considered archived and will be placed on optical disks. The ability to archive data to optical disk and still have access to that data provides the system with a means to retain large amounts of readily accessible data while significantly reducing the cost of total system storage. Therefore, the cost benefits of archival storage devices can be incorporated into the Oracle storage medium and I/O subsystem without losing any of the functionality of transaction processing, while at the same time giving an organization access to all of its data.
Makkar, Steve R; Turner, Tari; Williamson, Anna; Louviere, Jordan; Redman, Sally; Haynes, Abby; Green, Sally; Brennan, Sue
2016-01-14
Evidence-informed policymaking is more likely if organisations have cultures that promote research use and invest in resources that facilitate staff engagement with research. Measures of organisations' research use culture and capacity are needed to assess current capacity, identify opportunities for improvement, and examine the impact of capacity-building interventions. The aim of the current study was to develop a comprehensive system to measure and score organisations' capacity to engage with and use research in policymaking, which we entitled ORACLe (Organisational Research Access, Culture, and Leadership). We used a multifaceted approach to develop ORACLe. Firstly, we reviewed the available literature to identify key domains of organisational tools and systems that may facilitate research use by staff. We interviewed senior health policymakers to verify the relevance and applicability of these domains. This information was used to generate an interview schedule that focused on seven key domains of organisational capacity. The interview was pilot-tested within four Australian policy agencies. A discrete choice experiment (DCE) was then undertaken using an expert sample to establish the relative importance of these domains. This data was used to produce a scoring system for ORACLe. The ORACLe interview was developed, comprised of 23 questions addressing seven domains of organisational capacity and tools that support research use, including (1) documented processes for policymaking; (2) leadership training; (3) staff training; (4) research resources (e.g. database access); and systems to (5) generate new research, (6) undertake evaluations, and (7) strengthen relationships with researchers. From the DCE data, a conditional logit model was estimated to calculate total scores that took into account the relative importance of the seven domains. The model indicated that our expert sample placed the greatest importance on domains (2), (3) and (4). We utilised qualitative and quantitative methods to develop a system to assess and score organisations' capacity to engage with and apply research to policy. Our measure assesses a broad range of capacity domains and identifies the relative importance of these capacities. ORACLe data can be used by organisations keen to increase their use of evidence to identify areas for further development.
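The scoring step can be made concrete with a small sketch: per-domain interview ratings are combined using relative-importance weights. In the study those weights come from the conditional logit model fitted to the discrete choice experiment; the weights and rating scale below are placeholders only, chosen to reflect the reported finding that domains (2), (3) and (4) mattered most.

```python
# Hedged sketch of weighted capacity scoring; the numeric weights are
# placeholders, not the published ORACLe model estimates.
DOMAIN_WEIGHTS = {
    "documented_processes":     0.10,
    "leadership_training":      0.20,  # domains (2), (3) and (4) carried
    "staff_training":           0.20,  # the greatest importance in the DCE
    "research_resources":       0.20,
    "generate_research":        0.10,
    "undertake_evaluations":    0.10,
    "researcher_relationships": 0.10,
}

def oracle_score(domain_ratings):
    """domain_ratings: dict of domain -> rating on a 1 (low) to 3 (high) scale."""
    return sum(DOMAIN_WEIGHTS[d] * r for d, r in domain_ratings.items())

ratings = {d: 2 for d in DOMAIN_WEIGHTS}   # an agency rated 'medium' throughout
print(round(oracle_score(ratings), 2))     # -> 2.0 on the weighted 1-3 scale
```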
Database usage and performance for the Fermilab Run II experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonham, D.; Box, D.; Gallas, E.
2004-12-01
The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has represented a major challenge to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product being used for these applications at Fermilab, and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open-source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring the operation and diagnosing problems are also described.
LHCb Conditions database operation assistance systems
NASA Astrophysics Data System (ADS)
Clemencic, M.; Shapoval, I.; Cattaneo, M.; Degaudenzi, H.; Santinelli, R.
2012-12-01
The Conditions Database (CondDB) of the LHCb experiment provides versioned, time-dependent geometry and conditions data for all LHCb data processing applications (simulation, high-level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state-tracking extension to the Oracle 3D Streams replication technology, to trap cases where the CondDB replication becomes corrupted. The second is an automated distribution system for the SQLite-based CondDB, also providing smart backup and checkout mechanisms for the CondDB managers and LHCb users, respectively. And the third is a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The first two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The third has been fully designed and is currently moving to the implementation stage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, Patrick R.; Gibson, Tara D.; Schuchardt, Karen L.
2008-03-01
Requirements for the provenance store and access API are developed. Existing RDF stores and APIs are evaluated against the requirements and performance benchmarks. The team's conclusion is to use MySQL as a database backend, with a possible move to Oracle in the near-term future. Both the Jena and Sesame APIs will be supported, but new code will use the Jena API.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beverly Seyler; John Grube
2004-12-10
Oil and gas have been commercially produced in Illinois for over 100 years. Existing commercial production is from more than fifty-two named pay horizons in Paleozoic rocks ranging in age from Middle Ordovician to Pennsylvanian. Over 3.2 billion barrels of oil have been produced. Recent calculations indicate that remaining mobile resources in the Illinois Basin may be on the order of several billion barrels. Thus, large quantities of oil, potentially recoverable using current technology, remain in Illinois oil fields despite a century of development. Many opportunities for increased production may have been missed due to complex development histories, multiple stacked pays, and commingled production, which makes thorough exploitation of pays and the application of secondary or improved/enhanced recovery strategies difficult. Access to data, and the techniques required to evaluate and manage large amounts of diverse data, are major barriers to increased production of critical reserves in the Illinois Basin. These constraints are being alleviated by the development of a database access system using a Geographic Information System (GIS) approach for evaluation and identification of underdeveloped pays. The Illinois State Geological Survey has developed a methodology that is being used by industry to identify underdeveloped areas (UDAs) in and around petroleum reservoirs in Illinois using a GIS approach. This project utilizes a statewide oil and gas Oracle® database to develop a series of Oil and Gas Base Maps with well location symbols that are color-coded by producing horizon. Producing horizons are displayed as layers and can be selected as separate or combined layers that can be turned on and off. Map views can be customized to serve individual needs, and page-size maps can be printed. A core analysis database with over 168,000 entries has been compiled and assimilated into the ISGS Enterprise Oracle database. Maps of wells with core data have been generated. Data from over 1,700 Illinois waterflood units and waterflood areas have been entered into an Access® database. The waterflood area data have also been assimilated into the ISGS Oracle database for mapping and dissemination on the ArcIMS website. Formation depths for the Beech Creek Limestone, Ste. Genevieve Limestone and New Albany Shale in all of the oil-producing region of Illinois have been calculated and entered into a digital database. Digital contoured structure maps have been constructed, edited and added to the ILoil website as map layers. This technology/methodology addresses the long-standing constraints related to information access and data management in Illinois by significantly simplifying the laborious process that industry presently must use to identify underdeveloped pay zones in Illinois.
A Web-Based Database for Nurse Led Outreach Teams (NLOT) in Toronto.
Li, Shirley; Kuo, Mu-Hsing; Ryan, David
2016-01-01
A web-based system can provide access to real-time data and information. Healthcare is moving towards digitizing patients' medical information and securely exchanging it through web-based systems. In one of Ontario's health regions, Nurse Led Outreach Teams (NLOT) provide emergency mobile nursing services to help reduce unnecessary transfers from long-term care homes to emergency departments. Currently the NLOT team uses a Microsoft Access database to keep track of the health information on the residents that they serve. The Access database lacks scalability, portability, and interoperability. The objective of this study is the development of a web-based database using Oracle Application Express that is easily accessible from mobile devices. The web-based database will allow NLOT nurses to enter and access resident information anytime and from anywhere.
2007-06-01
Support for other databases, such as MySQL, Oracle, and Derby, will be added to future versions of the program.
Call-Center Based Disease Management of Pediatric Asthmatics
2005-04-01
Planned tasks: identify study locations; purchase peak flow meters; prepare and reproduce patient education materials and informed consent work sheets; contract Oracle data… Progress to date: the study population has been identified via military and Foundation Health databases; electronic peak flow meters have been purchased; and patient education materials and informed consent documents have been reproduced; a web-based…
Software Process Automation: Experiences from the Trenches.
1996-07-01
Table fragments indicate sites using ProcessWeaver to integrate a problem database and tools such as WordPerfect, All-in-One, Oracle, FrameMaker, and CM systems to handle change requests and problem reports. Tools mentioned include Autoplan, a project management tool; FrameMaker, a document processing system; Worldview, a document…; Cadre Teamwork; a requirements traceability tool; a homegrown scheduling tool; and a homegrown tool integrator.
Clinical Databases for Chest Physicians.
Courtwright, Andrew M; Gabriel, Peter E
2018-04-01
A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health conditions or exposures. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
Heterogeneous distributed databases: A case study
NASA Technical Reports Server (NTRS)
Stewart, Tracy R.; Mukkamala, Ravi
1991-01-01
Alternatives are reviewed for accessing distributed heterogeneous databases, and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy, and each supports a different customer base. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.
ERIC Educational Resources Information Center
Scarth, John; And Others
1986-01-01
The analysis of teachers' questions carried out as part of the ORACLE (Observational Research and Classroom Learning Evaluation) project is examined in detail. Scarth and Hammersley argue that the rules ORACLE uses for identifying different types of questions involve levels of ambiguity and inference that threaten the reliability and validity of the study's…
Octree-based indexing for 3D pointclouds within an Oracle Spatial DBMS
NASA Astrophysics Data System (ADS)
Schön, Bianca; Mosa, Abu Saleh Mohammad; Laefer, Debra F.; Bertolotto, Michela
2013-02-01
A large proportion of today's digital datasets have a spatial component, the effective storage and management of which pose particular challenges, especially with light detection and ranging (LiDAR), where datasets of even small geographic areas may contain several hundred million points. While in the last decade 2.5-dimensional data were prevalent, true 3-dimensional data are increasingly commonplace via LiDAR. They have gained particular popularity for urban applications including generation of city-scale maps, baseline data for disaster management, and utility planning. Additionally, LiDAR is commonly used for floodplain identification, coastal-erosion tracking, and forest biomass mapping. Despite growing data availability, current spatial information systems do not provide full support for the data's true 3D nature. Consequently, one system is needed to store the data and another for its processing, thereby necessitating format transformations. The work presented herein aims at a more cost-effective way of managing 3D LiDAR data that allows for storage and manipulation within a single system, by enabling a new index within existing spatial database management technology. An implementation of an octree index for 3D LiDAR data atop Oracle Spatial 11g is presented, along with an evaluation showing up to an eight-fold improvement compared to the native Oracle R-tree index.
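To illustrate the indexing idea only (this is not the authors' Oracle Spatial implementation), a minimal point-octree insertion sketch; the capacity constant and bounds handling are arbitrary choices.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal point octree: leaves hold up to CAPACITY points, then split into 8 children. */
class Octree {
    static final int CAPACITY = 8;
    final double cx, cy, cz, half;           // cube center and half-width
    List<double[]> points = new ArrayList<>();
    Octree[] children;                       // null until the node splits

    Octree(double cx, double cy, double cz, double half) {
        this.cx = cx; this.cy = cy; this.cz = cz; this.half = half;
    }

    void insert(double[] p) {
        if (children != null) { child(p).insert(p); return; }
        points.add(p);
        if (points.size() > CAPACITY) split();
    }

    private void split() {
        children = new Octree[8];
        double h = half / 2;
        for (int i = 0; i < 8; i++) {
            double x = cx + ((i & 1) == 0 ? -h : h);
            double y = cy + ((i & 2) == 0 ? -h : h);
            double z = cz + ((i & 4) == 0 ? -h : h);
            children[i] = new Octree(x, y, z, h);
        }
        for (double[] p : points) child(p).insert(p);
        points = new ArrayList<>();          // interior nodes hold no points
    }

    private Octree child(double[] p) {
        int i = (p[0] >= cx ? 1 : 0) | (p[1] >= cy ? 2 : 0) | (p[2] >= cz ? 4 : 0);
        return children[i];
    }
}
```

One plausible database mapping, consistent with the spirit of the paper, is to tag each point row with the key of its enclosing octree cell so that an ordinary one-dimensional index can serve 3D range queries.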
Suggestions for Documenting SOA-Based Systems
2010-09-01
Produced under Contract Number FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. The report discusses understandability across an enterprise, technical reference models (e.g., Oracle database management systems), architectural patterns, and communicating key aspects of an architecture to the stakeholders maintaining the system.
ERIC Educational Resources Information Center
Kreie, Jennifer; Shannon, James; Mora-Monge, Carlo A.
2011-01-01
Enterprise systems provide companies with centralized data management, business process support, and integrated data flow between functional areas. Thanks to academic alliances offered by companies such as SAP, Oracle, Microsoft, and others, universities can also take advantage of the integrated features of enterprise systems to give business…
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of the relational model, utilizing SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to address the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)
2002-01-01
An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of the relational model, utilizing SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to address the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
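As a rough illustration of the decomposition idea (not NETMARK's actual schema), the sketch below flattens an XML document into (context, content) rows, where the context is the element path and the content its text; keyword search can then match either column. All names are invented.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
import org.xml.sax.InputSource;

public class XmlFlattener {
    public static void main(String[] args) throws Exception {
        String xml = "<report><title>Engine test</title><body>thrust nominal</body></report>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        walk(doc.getDocumentElement(), "");
    }

    // Emit one (context, content) row per element with text; in a real system
    // these rows would be bulk-inserted into a relational table and indexed.
    static void walk(Element e, String path) {
        String ctx = path + "/" + e.getTagName();
        StringBuilder text = new StringBuilder();
        NodeList kids = e.getChildNodes();
        for (int i = 0; i < kids.getLength(); i++) {
            Node k = kids.item(i);
            if (k.getNodeType() == Node.TEXT_NODE) text.append(k.getNodeValue().trim());
            else if (k.getNodeType() == Node.ELEMENT_NODE) walk((Element) k, ctx);
        }
        if (text.length() > 0) System.out.println(ctx + " | " + text);
    }
}
```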
2012-10-01
The technology stack includes Java v5, Apache Struts v2, Hibernate v2, C3P0, and a SQL*Net client/JDBC connection to an Oracle 10.0.2 database server, with Internet Explorer as the desktop client. For mobile smartphones, the design is a Java-based framework utilizing Apache Struts on the server, with a relational database to handle data storage requirements. The Java application provides the backend application software to drive the PHR-A; BEA Web…
NASA Electronic Library System (NELS) optimization
NASA Technical Reports Server (NTRS)
Pribyl, William L.
1993-01-01
This is a compilation of NELS (NASA Electronic Library System) Optimization progress/problem, interim, and final reports for all phases. The NELS database was examined, particularly its memory use, disk contention, and CPU load, to discover bottlenecks. Methods to increase the speed of the NELS code were investigated. The tasks included restructuring the existing code to interact with other components more effectively. Error-reporting code was added to help detect and remove bugs in NELS. Report-writing tools were recommended for integration with the ASV3 system. The Oracle database management system and tools were to be installed on a Sun workstation, intended for demonstration purposes.
Cloud Computing Trace Characterization and Synthetic Workload Generation
2013-03-01
Olio is primarily for learning Web 2.0 technologies and for evaluating its three implementations (PHP, Java EE, and Ruby on Rails). Olio is well documented, but assumes prerequisite knowledge of the setup and operation of Apache web servers and MySQL databases. Faban supports numerous servers such as Apache httpd, Sun Java System Web, Portal and Mail Servers, Oracle RDBMS, memcached, and others [18].
Evaluation of Marine Corps Manpower Computer Simulation Model
2016-12-01
…merit-based promotion selection in conjunction with the "up or out" manpower system. To ensure mission accomplishment within M&RA… the MSM pulls historical data from an online Oracle database. Two types of database pulls occur here: acquiring historical data of the manpower pyramid… The approach is based on the assumption that historical manpower progression is constant, and therefore controllable; this unfortunately does not marry…
A WebGIS system on the base of satellite data processing system for marine application
NASA Astrophysics Data System (ADS)
Gong, Fang; Wang, Difeng; Huang, Haiqing; Chen, Jianyu
2007-10-01
From 2002 to 2004, a satellite data processing system for marine applications was built at the State Key Laboratory of Satellite Ocean Environment Dynamics (Second Institute of Oceanography, State Oceanic Administration). The system receives satellite data from TERRA, AQUA, NOAA-12/15/16/17/18, and FY-1D and automatically generates Level 3 and Level 4 products (single-orbit products and merged multi-orbit products) from Level 0 data, under the control of an operational control subsystem. Currently, the products created by this system play an important role in marine environment monitoring, disaster monitoring, and research. A distribution platform has now been developed on this foundation: a WebGIS system for querying and browsing oceanic remote sensing data. The system is built on the Oracle database, using the ArcSDE spatial database engine and other middleware to perform database operations. The J2EE framework was adopted as the development model, with Oracle 9.2 as the database backend and server. Using a standard browser (such as IE 6.0), users can access the public services provided by the system, including browsing oceanic remote sensing data; zooming, panning, and refreshing map views; further data inquiry; attribute search; and data download. The system is still under test. Such a system will become an important distribution platform for Chinese satellite oceanic environment products by topic and category (including sea surface temperature, chlorophyll concentration, and so on), raising the utilization of satellite products and promoting data sharing and oceanic remote sensing research.
Adapting My Weather Impacts Decision Aid (MyWIDA) to Additional Web Application Server Technologies
2015-08-01
Some datatypes that are used with HSQLDB are not compatible with Oracle. Most notably, the VARBINARY datatype is not compatible with Oracle and was instead replaced with the Binary Large Object (BLOB) datatype. Oracle additionally has a different implementation of primary keys.
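A hedged sketch of the resulting DDL change issued over JDBC; the table and column names and the connection settings are hypothetical, and only the VARBINARY-to-BLOB substitution is taken from the report.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BlobMigration {
    public static void main(String[] args) throws Exception {
        // HSQLDB form (works in HSQLDB, rejected by Oracle):
        //   CREATE TABLE weather_rule (id INTEGER PRIMARY KEY, payload VARBINARY(4096))
        // Oracle-compatible form replaces VARBINARY with BLOB:
        String oracleDdl =
            "CREATE TABLE weather_rule (id NUMBER PRIMARY KEY, payload BLOB)";

        // Placeholder connection details for illustration only.
        try (Connection c = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
             Statement s = c.createStatement()) {
            s.execute(oracleDdl);
        }
    }
}
```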
Planning the data transition of a VLDB: a case study
NASA Astrophysics Data System (ADS)
Finken, Shirley J.
1997-02-01
This paper describes the technical and programmatic plans for moving and checking certain data from the IDentification Automated Services (IDAS) system to the new Interstate Identification Index/Federal Bureau of Investigation (III/FBI) Segment database--one of the three components of the Integrated Automated Fingerprint Identification System (IAFIS) being developed by the Federal Bureau of Investigation, Criminal Justice Information Services Division. Transitioning IDAS to III/FBI includes putting the data into an entirely new target database structure (i.e., from IBM VSAM files to ORACLE7 RDBMS tables). Only four IDAS files were transitioned (CCN, CCR, CCA, and CRS), but their total estimated size is 500 GB of data. Transitioning this very large database is planned as two processes.
Information integration for a sky survey by data warehousing
NASA Astrophysics Data System (ADS)
Luo, A.; Zhang, Y.; Zhao, Y.
The virtualization service of the data system for the LAMOST sky survey is very important for astronomers. The service needs to integrate information from data collections, catalogs, and references, and to support simple federation of a set of distributed files and associated metadata. Data warehousing has existed for several years and has demonstrated superiority over traditional relational database management systems by providing novel indexing schemes that support efficient on-line analytical processing (OLAP) of large databases. Relational database systems such as Oracle now support warehouse capabilities, including extensions to the SQL language to support OLAP operations, and a number of metadata management tools have been created. The information integration of LAMOST by applying data warehousing aims to effectively provide data and knowledge on-line.
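As a generic illustration of the SQL OLAP extensions mentioned (the table and columns are hypothetical, not the LAMOST schema), a GROUP BY ROLLUP aggregation issued over JDBC; ROLLUP adds subtotal and grand-total rows to the ordinary grouped counts.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OlapRollup {
    public static void main(String[] args) throws Exception {
        // Per-(survey_night, filter) counts plus subtotal and grand-total rows --
        // the kind of aggregation a warehouse engine answers from summaries.
        String sql = "SELECT survey_night, filter, COUNT(*) AS n_spectra "
                   + "FROM spectra GROUP BY ROLLUP (survey_night, filter)";

        // Placeholder connection settings for illustration only.
        try (Connection c = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
             Statement s = c.createStatement();
             ResultSet r = s.executeQuery(sql)) {
            while (r.next()) {
                System.out.printf("%s %s %d%n",
                    r.getString(1), r.getString(2), r.getLong(3));
            }
        }
    }
}
```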
NASA Technical Reports Server (NTRS)
Li, Chung-Sheng (Inventor); Smith, John R. (Inventor); Chang, Yuan-Chi (Inventor); Jhingran, Anant D. (Inventor); Padmanabhan, Sriram K. (Inventor); Hsiao, Hui-I (Inventor); Choy, David Mun-Hien (Inventor); Lin, Jy-Jine James (Inventor); Fuh, Gene Y. C. (Inventor); Williams, Robin (Inventor)
2004-01-01
Methods and apparatus for providing a multi-tier object-relational database architecture are disclosed. In one illustrative embodiment of the present invention, a multi-tier database architecture comprises an object-relational database engine as a top tier, one or more domain-specific extension modules as a bottom tier, and one or more universal extension modules as a middle tier. The individual extension modules of the bottom tier operationally connect with the one or more universal extension modules which, themselves, operationally connect with the database engine. The domain-specific extension modules preferably provide such functions as search, index, and retrieval services of images, video, audio, time series, web pages, text, XML, spatial data, etc. The domain-specific extension modules may include one or more IBM DB2 extenders, Oracle data cartridges and/or Informix datablades, although other domain-specific extension modules may be used.
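One way to picture this tiering is a middle-tier dispatcher that routes searches to pluggable domain-specific modules. The sketch below is illustrative only; all names are invented and nothing here is drawn from the patent text.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Bottom tier: a domain-specific extension (images, video, text, ...). */
interface DomainExtension {
    List<String> search(String query);   // e.g., content-based image search
}

/** Middle tier: a universal module the engine talks to; it dispatches by domain. */
class UniversalExtension {
    private final Map<String, DomainExtension> modules = new ConcurrentHashMap<>();

    void register(String domain, DomainExtension ext) { modules.put(domain, ext); }

    List<String> search(String domain, String query) {
        DomainExtension ext = modules.get(domain);
        if (ext == null) throw new IllegalArgumentException("no module for " + domain);
        return ext.search(query);        // the top-tier engine never sees domain details
    }
}
```

The design benefit claimed for such a middle tier is that the database engine integrates once, against the universal interface, while new domains plug in underneath without engine changes.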
Modernization of the USGS Hawaiian Volcano Observatory Seismic Processing Infrastructure
NASA Astrophysics Data System (ADS)
Antolik, L.; Shiro, B.; Friberg, P. A.
2016-12-01
The USGS Hawaiian Volcano Observatory (HVO) operates a Tier 1 Advanced National Seismic System (ANSS) seismic network to monitor, characterize, and report on volcanic and earthquake activity in the State of Hawaii. Upgrades at the observatory since 2009 have improved the digital telemetry network, computing resources, and seismic data processing with the adoption of the ANSS Quake Management System (AQMS). HVO aims to build on these efforts by further modernizing its seismic processing infrastructure and strengthening its ability to meet ANSS performance standards. Most notably, this will also allow HVO to support redundant systems, both onsite and offsite, in order to provide better continuity of operation during intermittent power and network outages. We are in the process of implementing a number of upgrades and improvements to HVO's seismic processing infrastructure, including: 1) Virtualization of AQMS physical servers; 2) Migration of server operating systems from Solaris to Linux; 3) Consolidation of AQMS real-time and post-processing services to a single server; 4) Upgrading the database from Oracle 10 to Oracle 12; and 5) Upgrading to the latest Earthworm and AQMS software. These improvements will make server administration more efficient, minimize the hardware resources required by AQMS, simplify the Oracle replication setup, and provide better integration with HVO's existing state-of-health monitoring tools and backup system. Ultimately, they will provide HVO with the latest and most secure software available while making the software easier to deploy and support.
Integrating Oracle Human Resources with Other Modules
NASA Technical Reports Server (NTRS)
Sparks, Karl; Shope, Shawn
1998-01-01
One of the most challenging aspects of implementing an enterprise-wide business system is achieving integration of the different modules to the satisfaction of diverse customers. The Jet Propulsion Laboratory's (JPL) implementation of the Oracle application suite demonstrates the need to coordinate Oracle Human Resources Management System (HRMS) decisions across the Oracle modules.
Quantum partial search for uneven distribution of multiple target items
NASA Astrophysics Data System (ADS)
Zhang, Kun; Korepin, Vladimir
2018-06-01
The quantum partial search algorithm is an approximate search: it aims to find a target block (the block containing the target items) and runs a little faster than a full Grover search. In this paper, we consider the quantum partial search algorithm for multiple target items unevenly distributed in a database (target blocks contain different numbers of target items). The algorithm we describe can locate one of the target blocks. Efficiency of the algorithm is measured by the number of queries to the oracle. We optimize the algorithm in order to improve efficiency. Using a perturbation method, we find that the algorithm runs fastest when target items are evenly distributed in the database.
LETTER TO THE EDITOR: Optimization of partial search
NASA Astrophysics Data System (ADS)
Korepin, Vladimir E.
2005-11-01
A quantum Grover search algorithm can find a target item in a database faster than any classical algorithm. One can trade accuracy for speed and find a part of the database (a block) containing the target item even faster; this is partial search. A partial search algorithm was recently suggested by Grover and Radhakrishnan. Here we optimize it. Efficiency of the search algorithm is measured by the number of queries to the oracle. The author suggests a new version of the Grover-Radhakrishnan algorithm which uses a minimal number of such queries. The algorithm can run on the same hardware that is used for the usual Grover algorithm.
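For orientation, the textbook Grover analysis that these query counts refer to (a standard derivation, not the partial-search bound proved in the letter): with $t$ target items in a database of size $N$, the amplitude of the target subspace after $j$ oracle queries is $\sin((2j+1)\theta)$, giving

```latex
\begin{aligned}
  \sin\theta &= \sqrt{t/N},\\
  |\psi_j\rangle &= \sin\!\bigl((2j+1)\theta\bigr)\,|\mathrm{target}\rangle
                  + \cos\!\bigl((2j+1)\theta\bigr)\,|\mathrm{rest}\rangle,\\
  j_{\mathrm{opt}} &\approx \frac{\pi}{4}\sqrt{\frac{N}{t}}
     \qquad (t \ll N).
\end{aligned}
```

Partial search improves on this by stopping the global iteration early and localizing the target only to a block, which is why its query count can dip below the full-search value.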
IAC-1.5 - INTEGRATED ANALYSIS CAPABILITY
NASA Technical Reports Server (NTRS)
Vos, R. G.
1994-01-01
The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and a database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a database, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automating data transfer among analysis programs. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation modules are supplied for building and viewing models. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user-specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. 3) System dynamics - A DISCOS interface allows full use of this simulation program for either nonlinear time domain analysis or linear frequency domain analysis. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. 5) Graphics - The graphics packages PLOT and MOSAIC are included in IAC. PLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc., while MOSAIC generates color raster displays of either tabular or array type data. Either DI3000 or PLOT-10 graphics software is required for full graphics capability. IAC is available by license for a period of 10 years to approved licensees. The licensed program product includes one complete set of supporting documentation. Additional copies of the documentation may be purchased separately.
IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The basic central memory requirement is approximately 750KB. IAC includes the executive system, graphics modules, a database, general utilities, and the interfaces to all analysis and controls programs described above. Source code is provided for the control programs ORACLS, SAMSAN, NBOD2, and DISCOS. The following programs are also available from COSMIC as separate packages: NASTRAN, SINDA/SINFLO, TRASYS II, DISCOS, ORACLS, SAMSAN, NBOD2, and INCA. IAC was developed in 1985.
Exploring the Cost and Functionality of MEDCOM Web Services
2005-10-24
Survey item: "What backend database software supports your intranet/Internet content? (check all that apply): Oracle; Microsoft SQL Server; …" The report discusses Department of Defense (DoD) service branches, which funded and deployed an Internet portal, TRICARE Online, to serve as an information conduit between the… Unlike a public website, the information contained on an intranet is traditionally limited to the members of the hosting command, and the local information serves as…
1992-12-01
Instructions are defined as lines of code or card images; thus, a line containing two or more source statements counts as one instruction. To understand the productivity paradox, recall the concept of virtual machines: a higher level machine groups together many instructions of a lower level…
NASA Interactive Forms Type Interface - NIFTI
NASA Technical Reports Server (NTRS)
Jain, Bobby; Morris, Bill
2005-01-01
A flexible database query, update, modify, and delete tool was developed that provides an easy interface to Oracle forms. This tool - the NASA interactive forms type interface, or NIFTI - features on-the-fly forms creation, forms sharing among users, the capability to query the database from user-entered criteria on forms, traversal of query results, the ability to generate tab-delimited reports, viewing and downloading of reports to the user's workstation, and a hypertext-based help system. NIFTI is a very powerful ad hoc query tool that was developed using C++ and X-Windows with a Motif application framework. A unique tool, NIFTI's capabilities appear in no other known commercial-off-the-shelf (COTS) tool, because NIFTI, which can be launched from the user's desktop, is a simple yet very powerful tool with a highly intuitive, easy-to-use graphical user interface (GUI) that will expedite the creation of database query/update forms. NIFTI, therefore, can be used in NASA's International Space Station (ISS) program as well as within government and industry - indeed by all users of the widely disseminated Oracle base. It will provide significant cost savings in the areas of user training and scalability while advancing the art over current COTS browsers. No COTS browser performs all the functions NIFTI does, and NIFTI is easier to use. NIFTI's cost savings are very significant considering the very large database with which it is used and the large user community with varying data requirements it will support. Its ease of use means that personnel unfamiliar with databases (e.g., managers, supervisors, clerks, and others) can develop their own personal reports. For NASA, a tool such as NIFTI was needed to query, update, modify, and make deletions within the ISS vehicle master database (VMDB), a repository of engineering data that includes an indentured parts list and associated resource data (power, thermal, volume, weight, and the like). Since the VMDB is used both as a collection point for data and as a common repository for engineering, integration, and operations teams, a tool such as NIFTI had to be designed that could expedite the creation of database query/update forms which could then be shared among users.
Stationary states in quantum walk search
NASA Astrophysics Data System (ADS)
Prūsis, Krišjānis; Vihrovs, Jevgēnijs; Wong, Thomas G.
2016-09-01
When classically searching a database, having additional correct answers makes the search easier. For a discrete-time quantum walk searching a graph for a marked vertex, however, additional marked vertices can make the search harder by causing the system to approximately begin in a stationary state, so the system fails to evolve. In this paper, we completely characterize the stationary states, or 1-eigenvectors, of the quantum walk search operator for general graphs and configurations of marked vertices by decomposing their amplitudes into uniform and flip states. This infinitely expands the number of known stationary states and gives an optimization procedure to find the stationary state closest to the initial uniform state of the walk. We further prove theorems on the existence of stationary states, with them conditionally existing if the marked vertices form a bipartite connected component and always existing if nonbipartite. These results utilize the standard oracle in Grover's algorithm, but we show that a different type of oracle prevents stationary states from interfering with the search algorithm.
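As a hedged sketch of the objects involved (generic coined-walk notation, not necessarily the paper's exact conventions): the standard Grover oracle flips the phase on marked vertices, and a stationary state is a 1-eigenvector of the full step operator.

```latex
\begin{aligned}
  Q\,|v,c\rangle &= (-1)^{m(v)}\,|v,c\rangle,
     \qquad m(v)=1 \text{ iff } v \text{ is marked},\\
  U &= S\,C\,Q \qquad \text{(oracle, then coin, then shift)},\\
  U\,|\psi\rangle &= |\psi\rangle
     \iff |\psi\rangle \text{ is stationary under the search walk.}
\end{aligned}
```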
ADAMS: AIRLAB data management system user's guide
NASA Technical Reports Server (NTRS)
Conrad, C. L.; Ingogly, W. F.; Lauterbach, L. A.
1986-01-01
The AIRLAB Data Management System (ADAMS) is an online environment that supports research at NASA's AIRLAB. ADAMS provides an easy to use interactive interface that eases the task of documenting and managing information about experiments and improves communication among project members. Data managed by ADAMS includes information about experiments, data sets produced, software and hardware available in AIRLAB as well as that used in a particular experiment, and an on-line engineer's notebook. The User's Guide provides an overview of the ADAMS system as well as details of the operations available within ADAMS. A tutorial section takes the user step-by-step through a typical ADAMS session. ADAMS runs under the VAX/VMS operating system and uses the ORACLE database management system and DEC/FMS (the Forms Management System). ADAMS can be run from any VAX connected via DECnet to the ORACLE host VAX. The ADAMS system is designed for simplicity, so interactions within the underlying data management system and communications network are hidden from the user.
CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valassi, A.; Bartoldus, R.
The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle-tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in the summer of 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status, and its usage in ATLAS.
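The proxy's caching-and-multiplexing role can be pictured with a toy read-through cache. This is purely illustrative: CORAL itself is C++ and its protocol is far more involved, and all names below are invented.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Toy read-through cache: many clients share one answer per distinct query. */
class QueryProxy {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> upstream;   // forwards misses toward the server

    QueryProxy(Function<String, String> upstream) { this.upstream = upstream; }

    String query(String sql) {
        // computeIfAbsent ensures a single upstream fetch per distinct key,
        // even when many clients ask concurrently -- the multiplexing effect.
        return cache.computeIfAbsent(sql, upstream);
    }
}
```

Chaining such proxies into a tree, with the root talking to the real server, gives the fan-in that lets thousands of trigger processes configure themselves without thousands of database sessions.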
An alternative database approach for management of SNOMED CT and improved patient data queries.
Campbell, W Scott; Pedersen, Jay; McClay, James C; Rao, Praveen; Bastola, Dhundy; Campbell, James R
2015-10-01
SNOMED CT is the international lingua franca of terminologies for human health. Based in Description Logics (DL), the terminology enables data queries that incorporate inferences between data elements as well as those relationships that are explicitly stated. However, the ontologic and polyhierarchical nature of the SNOMED CT concept model makes it difficult to implement in its entirety within electronic health record systems that largely employ object-oriented or relational database architectures. The result is a reduction of data richness, limitations on query capability, and increased systems overhead. The hypothesis of this research was that a graph database (graph DB) architecture using SNOMED CT as the basis for the data model, and subsequently modeling patient data upon the semantic core of SNOMED CT, could exploit the full value of the terminology to enrich and support advanced querying capability over patient data sets. The hypothesis was tested by instantiating a graph DB with the fully classified SNOMED CT concept model. The graph DB instance was tested for integrity by calculating the transitive closure table for the SNOMED CT hierarchy and comparing the results with transitive closure tables created using current, validated methods. The graph DB was then populated with 461,171 anonymized patient record fragments and over 2.1 million associated SNOMED CT clinical findings. Queries, including concept negation and disjunction, were then run against the graph database and against an enterprise Oracle relational database (RDBMS) holding the same patient data sets. The graph DB was then populated with laboratory data encoded using LOINC as well as medication data encoded with RxNorm, and complex queries were performed using LOINC, RxNorm, and SNOMED CT to identify uniquely described patient populations. A graph database instance was successfully created for two international releases of SNOMED CT and two US SNOMED CT editions. Transitive closure tables and descriptive statistics generated using the graph database were identical to those produced using validated methods. Patient queries produced patient counts identical to the Oracle RDBMS in comparable times. Database queries involving defining attributes of SNOMED CT concepts were possible with the graph DB. The same queries could not be directly performed with the Oracle RDBMS representation of the patient data and required the creation and use of external terminology services. Further, queries of undefined depth were successful in identifying unknown relationships between patient cohorts. The results of this study supported the hypothesis that a patient database built upon and around the semantic model of SNOMED CT is possible. The model supported queries that leveraged all aspects of the SNOMED CT logical model to produce clinically relevant query results. Logical disjunction and negation queries were possible using the data model, as were queries that extended beyond the structural IS_A hierarchy of SNOMED CT to include defining attribute-values of SNOMED CT concepts as search parameters. As medical terminologies such as SNOMED CT continue to expand, they will become more complex, and model consistency will be more difficult to assure. Simultaneously, consumers of data will increasingly demand improvements to query functionality to accommodate additional granularity of clinical concepts without sacrificing speed.
This new line of research provides an alternative approach to instantiating and querying patient data represented using advanced computable clinical terminologies. Copyright © 2015 Elsevier Inc. All rights reserved.
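A small sketch of the transitive-closure check described above, reduced to a toy IS_A edge list (the concept names are invented; real SNOMED CT releases hold hundreds of thousands of concepts):

```java
import java.util.*;

public class TransitiveClosure {
    public static void main(String[] args) {
        // Toy IS_A edges: child -> parents (names invented for illustration).
        Map<String, List<String>> isA = Map.of(
            "viral pneumonia", List.of("pneumonia"),
            "pneumonia", List.of("lung disease"),
            "lung disease", List.of("disease"));

        // BFS from each concept yields every ancestor, i.e. one closure row per pair.
        for (String concept : isA.keySet()) {
            Deque<String> frontier = new ArrayDeque<>(isA.get(concept));
            Set<String> ancestors = new LinkedHashSet<>();
            while (!frontier.isEmpty()) {
                String a = frontier.poll();
                if (ancestors.add(a))
                    frontier.addAll(isA.getOrDefault(a, List.of()));
            }
            System.out.println(concept + " IS_A* " + ancestors);
        }
    }
}
```

Comparing the pairs emitted this way against a closure table built by an independent, validated method is the kind of integrity check the study describes.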
76 FR 354 - Combined Notice of Filings #1
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-04
January 11, 2011. Docket Numbers: ER11-2436-000. Applicants: Oracle Energy Services, LLC. Description: Oracle Energy Services, LLC submits tariff filing per 35.12: Oracle Energy Services, LLC Initial Market...
Data Processing on Database Management Systems with Fuzzy Query
NASA Astrophysics Data System (ADS)
Şimşek, Irfan; Topuz, Vedat
In this study, a fuzzy query tool (SQLf) for non-fuzzy database management systems was developed, and sample fuzzy queries were run on real data with the tool. The performance of SQLf was tested with data about Marmara University students' food grants. The food grant data were collected in a MySQL database via a web form in which students described their social and economic conditions for the food grant request; the form consists of questions with fuzzy and crisp answers. The main purpose of the fuzzy query is to determine the students who deserve the grant. SQLf easily found the eligible students through predefined fuzzy values. The fuzzy query tool could be used just as easily with other database systems such as Oracle and SQL Server.
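A sketch of how such a fuzzy predicate can be evaluated over crisp rows; the trapezoidal membership function and the 0.5 acceptance threshold are invented for illustration and are not SQLf's internals.

```java
import java.util.List;

public class FuzzyFilter {
    // Membership for "low family income": full membership below 200, falling
    // linearly to zero at 600 (units and thresholds are invented).
    static double lowIncome(double income) {
        if (income <= 200) return 1.0;
        if (income >= 600) return 0.0;
        return (600 - income) / 400.0;
    }

    public static void main(String[] args) {
        List<double[]> students = List.of(      // {id, income}
            new double[]{1, 150}, new double[]{2, 420}, new double[]{3, 800});
        double alpha = 0.5;                     // acceptance threshold (alpha-cut)
        for (double[] s : students) {
            double mu = lowIncome(s[1]);
            if (mu >= alpha)
                System.out.printf("student %.0f qualifies (mu=%.2f)%n", s[0], mu);
        }
    }
}
```

A fuzzy query layer does essentially this on top of an ordinary SQL result set: compute a membership degree per row, then keep rows above the cut, optionally ranking by degree.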
Software Assurance Measurement -- State of the Practice
2013-11-01
Coverage claims for one tool include quality and productivity analysis for 30+ languages (C/C++, Java, .NET, Oracle, PeopleSoft, SAP, Siebel, Spring, Struts, Hibernate) and all major databases; ChecKing is among the tools surveyed. Languages listed include .NET, ActionScript, Ada, C/C++, Java, JavaScript, Objective-C, Opa, Perl, PHP, and Python. Another suite, covering Ada, C, C++, C#, and Java code, comprises analyses such as architecture checking, interface analyses, and clone detection.
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-06-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
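For concreteness, a standard statement of the penalized problem and of the local linear approximation step discussed above (the SCAD derivative is shown as one common choice of folded concave penalty):

```latex
\begin{aligned}
  \hat\beta &= \arg\min_{\beta}\;
     \frac{1}{2n}\,\lVert y - X\beta \rVert_2^2
     + \sum_{j=1}^{p} p_\lambda\!\left(|\beta_j|\right),\\
  \beta^{(k+1)} &= \arg\min_{\beta}\;
     \frac{1}{2n}\,\lVert y - X\beta \rVert_2^2
     + \sum_{j=1}^{p} p'_\lambda\!\left(|\beta_j^{(k)}|\right)|\beta_j|
     \quad \text{(LLA: a weighted lasso)},\\
  p'_\lambda(t) &= \lambda\left\{ \mathbf{1}(t \le \lambda)
     + \frac{(a\lambda - t)_+}{(a-1)\lambda}\,\mathbf{1}(t > \lambda) \right\}
     \quad \text{(SCAD, } a > 2\text{)}.
\end{aligned}
```

Each LLA iterate is a convex weighted-lasso problem, which is why the one-step result is computationally attractive: a single convex solve, started from a reasonable initial estimator, already delivers the oracle solution under the paper's conditions.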
PylotDB - A Database Management, Graphing, and Analysis Tool Written in Python
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnette, Daniel W.
2012-01-04
PylotDB, written completely in Python, provides a user interface (UI) with which to interact with, analyze, graph data from, and manage open source databases such as MySQL. The UI spares the user from needing in-depth knowledge of the database application programming interface (API). PylotDB allows the user to generate various kinds of plots from user-selected data; generate statistical information on text as well as numerical fields; back up and restore databases; compare database tables across different databases as well as across different servers; extract information from any field to create new fields; generate, edit, and delete databases, tables, and fields; generate CSV data or read it into a table; and perform similar operations. Since much of the database information is brought under control of the Python computer language, PylotDB is not intended for huge databases, for which MySQL and Oracle, for example, are better suited. PylotDB is better suited for the smaller databases typically needed in a small research group. PylotDB can also be used as a learning tool for database applications in general.
CheD: chemical database compilation tool, Internet server, and client for SQL servers.
Trepalin, S V; Yarkov, A V
2001-01-01
An efficient program, which runs on a personal computer, for the storage, retrieval, and processing of chemical information is presented. The program can work as a stand-alone application, in conjunction with a specifically written Web server application, or with standard SQL servers, e.g., Oracle, Interbase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features easy and user-friendly installation on Windows NT or 95 platforms.
Multimedia explorer: image database, image proxy-server and search-engine.
Frankewitsch, T.; Prokosch, U.
1999-01-01
Multimedia plays a major role in medicine. Databases containing images, movies, or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. Moreover, the tools for searching databases are usually not adapted to the properties of images, and HTML pages do not allow complex searches. Establishing more comfortable retrieval therefore involves a higher-level programming language such as Java. With this platform-independent language it is possible to create extensions to commonly used web browsers, and these applets offer a graphical user interface for high-level navigation. We implemented a database using Java objects as the primary storage container, which are then stored by a Java-controlled ORACLE8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach, multimedia objects can be encapsulated within a logical module for quick data retrieval. PMID:10566463
Efficiently Distributing Component-Based Applications Across Wide-Area Environments
2002-01-01
Experiments used Oracle 8.1.7 Enterprise Edition, each instance running on a dedicated 1 GHz dual-processor Pentium III workstation; for the RUBiS tests, MySQL 4.0.12 was used. The work targets sophisticated network-accessible services such as e-mail, banking, on-line shopping, entertainment, and serving as a data exchange. Component examples include a Catalog bean that handles read-only queries to the product database, a Customer bean that serves as a façade to Order and Account, and stateful session beans such as ShoppingCart.
ORACLS- OPTIMAL REGULATOR ALGORITHMS FOR THE CONTROL OF LINEAR SYSTEMS (CDC VERSION)
NASA Technical Reports Server (NTRS)
Armstrong, E. S.
1994-01-01
This control theory design package, called Optimal Regulator Algorithms for the Control of Linear Systems (ORACLS), was developed to aid in the design of controllers and optimal filters for systems which can be modeled by linear, time-invariant differential and difference equations. Optimal linear quadratic regulator theory, currently referred to as the Linear-Quadratic-Gaussian (LQG) problem, has become the most widely accepted method of determining optimal control policy. Within this theory, the infinite duration time-invariant problems, which lead to constant gain feedback control laws and constant Kalman-Bucy filter gains for reconstruction of the system state, exhibit high tractability and potential ease of implementation. A variety of new and efficient methods in the field of numerical linear algebra have been combined into the ORACLS program, which provides for the solution to time-invariant continuous or discrete LQG problems. The ORACLS package is particularly attractive to the control system designer because it provides a rigorous tool for dealing with multi-input and multi-output dynamic systems in both continuous and discrete form. The ORACLS programming system is a collection of subroutines which can be used to formulate, manipulate, and solve various LQG design problems. The ORACLS program is constructed in a manner which permits the user to maintain considerable flexibility at each operational state. This flexibility is accomplished by providing primary operations, analysis of linear time-invariant systems, and control synthesis based on LQG methodology. The input-output routines handle the reading and writing of numerical matrices, printing heading information, and accumulating output information. The basic vector-matrix operations include addition, subtraction, multiplication, equation, norm construction, tracing, transposition, scaling, juxtaposition, and construction of null and identity matrices. The analysis routines provide for the following computations: the eigenvalues and eigenvectors of real matrices; the relative stability of a given matrix; matrix factorization; the solution of linear constant coefficient vector-matrix algebraic equations; the controllability properties of a linear time-invariant system; the steady-state covariance matrix of an open-loop stable system forced by white noise; and the transient response of continuous linear time-invariant systems. The control law design routines of ORACLS implement some of the more common techniques of time-invariant LQG methodology. For the finite-duration optimal linear regulator problem with noise-free measurements, continuous dynamics, and integral performance index, a routine is provided which implements the negative exponential method for finding both the transient and steady-state solutions to the matrix Riccati equation. For the discrete version of this problem, the method of backwards differencing is applied to find the solutions to the discrete Riccati equation. A routine is also included to solve the steady-state Riccati equation by the Newton algorithms described by Klein, for continuous problems, and by Hewer, for discrete problems. Another routine calculates the prefilter gain to eliminate control state cross-product terms in the quadratic performance index and the weighting matrices for the sampled data optimal linear regulator problem. 
For cases with measurement noise, duality theory and optimal regulator algorithms are used to calculate solutions to the continuous and discrete Kalman-Bucy filter problems. Finally, routines are included to implement the continuous and discrete forms of the explicit (model-in-the-system) and implicit (model-in-the-performance-index) model following theory. These routines generate linear control laws which cause the output of a dynamic time-invariant system to track the output of a prescribed model. In order to apply ORACLS, the user must write an executive (driver) program which inputs the problem coefficients, formulates and selects the routines to be used to solve the problem, and specifies the desired output. There are three versions of ORACLS source code available for implementation: CDC, IBM, and DEC. The CDC version has been implemented on a CDC 6000 series computer with a central memory of approximately 13K (octal) of 60 bit words. The CDC version is written in FORTRAN IV, was developed in 1978, and last updated in 1989. The IBM version has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The IBM version is written in FORTRAN IV and was generated in 1981. The DEC version has been implemented on a VAX series computer operating under VMS. The VAX version is written in FORTRAN 77 and was generated in 1986.
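For reference, the steady-state continuous-time computation at the heart of these regulator routines, in generic notation (a standard LQR formulation, not ORACLS-specific conventions):

```latex
\begin{aligned}
  &\text{minimize } J = \int_0^{\infty} \left( x^{\mathsf T} Q x + u^{\mathsf T} R u \right)\,dt
   \quad \text{subject to } \dot x = Ax + Bu,\\
  &A^{\mathsf T} P + P A - P B R^{-1} B^{\mathsf T} P + Q = 0
   \qquad \text{(algebraic Riccati equation)},\\
  &u = -Kx, \qquad K = R^{-1} B^{\mathsf T} P .
\end{aligned}
```

The Kalman-Bucy filter gains mentioned above follow from the dual Riccati equation with the roles of $(A, B)$ and the noise covariances interchanged, which is why one set of Riccati solvers serves both the regulator and the filter problems.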
ORACLS- OPTIMAL REGULATOR ALGORITHMS FOR THE CONTROL OF LINEAR SYSTEMS (DEC VAX VERSION)
NASA Technical Reports Server (NTRS)
Frisch, H.
1994-01-01
This control theory design package, called Optimal Regulator Algorithms for the Control of Linear Systems (ORACLS), was developed to aid in the design of controllers and optimal filters for systems which can be modeled by linear, time-invariant differential and difference equations. Optimal linear quadratic regulator theory, currently referred to as the Linear-Quadratic-Gaussian (LQG) problem, has become the most widely accepted method of determining optimal control policy. Within this theory, the infinite duration time-invariant problems, which lead to constant gain feedback control laws and constant Kalman-Bucy filter gains for reconstruction of the system state, exhibit high tractability and potential ease of implementation. A variety of new and efficient methods in the field of numerical linear algebra have been combined into the ORACLS program, which provides for the solution to time-invariant continuous or discrete LQG problems. The ORACLS package is particularly attractive to the control system designer because it provides a rigorous tool for dealing with multi-input and multi-output dynamic systems in both continuous and discrete form. The ORACLS programming system is a collection of subroutines which can be used to formulate, manipulate, and solve various LQG design problems. The ORACLS program is constructed in a manner which permits the user to maintain considerable flexibility at each operational state. This flexibility is accomplished by providing primary operations, analysis of linear time-invariant systems, and control synthesis based on LQG methodology. The input-output routines handle the reading and writing of numerical matrices, printing heading information, and accumulating output information. The basic vector-matrix operations include addition, subtraction, multiplication, equation, norm construction, tracing, transposition, scaling, juxtaposition, and construction of null and identity matrices. The analysis routines provide for the following computations: the eigenvalues and eigenvectors of real matrices; the relative stability of a given matrix; matrix factorization; the solution of linear constant coefficient vector-matrix algebraic equations; the controllability properties of a linear time-invariant system; the steady-state covariance matrix of an open-loop stable system forced by white noise; and the transient response of continuous linear time-invariant systems. The control law design routines of ORACLS implement some of the more common techniques of time-invariant LQG methodology. For the finite-duration optimal linear regulator problem with noise-free measurements, continuous dynamics, and integral performance index, a routine is provided which implements the negative exponential method for finding both the transient and steady-state solutions to the matrix Riccati equation. For the discrete version of this problem, the method of backwards differencing is applied to find the solutions to the discrete Riccati equation. A routine is also included to solve the steady-state Riccati equation by the Newton algorithms described by Klein, for continuous problems, and by Hewer, for discrete problems. Another routine calculates the prefilter gain to eliminate control state cross-product terms in the quadratic performance index and the weighting matrices for the sampled data optimal linear regulator problem. 
For cases with measurement noise, duality theory and optimal regulator algorithms are used to calculate solutions to the continuous and discrete Kalman-Bucy filter problems. Finally, routines are included to implement the continuous and discrete forms of the explicit (model-in-the-system) and implicit (model-in-the-performance-index) model following theory. These routines generate linear control laws which cause the output of a dynamic time-invariant system to track the output of a prescribed model. In order to apply ORACLS, the user must write an executive (driver) program which inputs the problem coefficients, formulates and selects the routines to be used to solve the problem, and specifies the desired output. There are three versions of ORACLS source code available for implementation: CDC, IBM, and DEC. The CDC version has been implemented on a CDC 6000 series computer with a central memory of approximately 13K (octal) of 60 bit words. The CDC version is written in FORTRAN IV, was developed in 1978, and last updated in 1986. The IBM version has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The IBM version is written in FORTRAN IV and was generated in 1981. The DEC version has been implemented on a VAX series computer operating under VMS. The VAX version is written in FORTRAN 77 and was generated in 1986.
Ground-water conditions between Oracle and Oracle Junction, Pinal County, Arizona
Heindl, L.A.
1955-01-01
The development of the San Manuel copper prospect has greatly increased traffic along State Highway 77. Considerable interest in commercial possibilities along that road has resulted in a request by the Arizona State Land Department for information about the ground-water conditions between Oracle and Oracle Junction. This request came too late for the information to be included in a recently completed memorandum report on the occurrence of ground water in the vicinity of Oracle, released in February 1955. These data are presented as a supplement to that report to minimize duplication of statements about the general geologic and hydrologic conditions. The necessary well data and sample descriptions that were not included in the Oracle report are shown in tables 3 and 4. The area discussed in this supplement comprises parts of Tps. 9 and 10 S., Rs. 13, 14, and 15 E., and includes about 90 square miles (fig. 3). The eastern portion overlaps part of the area covered by the earlier report.
Development, deployment and operations of ATLAS databases
NASA Astrophysics Data System (ADS)
Vaniachine, A. V.; Schmitt, J. G. v. d.
2008-07-01
In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.
NASA Astrophysics Data System (ADS)
Zhu, Z.; Bi, J.; Wang, X.; Zhu, W.
2014-02-01
As an important sub-topic of the construction of a public information platform for carbon emission data from natural processes, a WebGIS system for carbon emissions from coalfield spontaneous combustion has become an important object of study. Given the characteristics of coalfield spontaneous combustion carbon emission data (a wide range of rich, complex data) and its geospatial nature, the data are divided into attribute data and spatial data. Based on a full analysis of the data, we completed a detailed design of an Oracle database and stored the data in it. Using Silverlight rich-client technology and WCF service extensions, we implemented dynamic web query, retrieval, statistics and analysis functions for the attribute data. For the spatial data, we used ArcGIS Server and the Silverlight-based API to invoke map services, geoprocessing (GP) services, image services and other services published by the GIS server, implementing display, analysis and thematic mapping of remote sensing imagery and web map data for coalfield spontaneous combustion. The study found that Silverlight rich-client technology, combined with an object-oriented WCF service framework, can be used to construct a WebGIS system efficiently, and that combining it with the ArcGIS Silverlight API for interactive queries of attribute and spatial data on coalfield spontaneous combustion greatly improves system performance. The system also provides a strong foundation for the construction of a public information platform for China's carbon emission data.
Tactical Operations Analysis Support Facility.
1981-05-01
[Report excerpt; hardware listing and contents fragments:] peripherals include a punch/reader, DMC-11AR DDCMP microprocessors (2), DMC-11DA network link line units (2), DL-11E asynchronous serial line interfaces (4), and Intel IN-1670 448K-word MOS memory; section headings include 5.3 Virtual Processors - VAX-11/750 and 5.4 A Relational Data Management System - ORACLE; the Central Processing Unit (CPU) is a 16-bit processor for high-speed, real-time applications, and for large multi-user, multi-task, time-shared...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-30
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER11-2436-000] Oracle Energy Services, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... proceeding of Oracle Energy Services, LLC's application for market-based rate authority, with an accompanying...
Oracle, a novel PDZ-LIM domain protein expressed in heart and skeletal muscle.
Passier, R; Richardson, J A; Olson, E N
2000-04-01
In order to identify novel genes enriched in adult heart, we performed a subtractive hybridization for genes expressed in mouse heart but not in skeletal muscle. We identified two alternative splicing variants of a novel PDZ-LIM domain protein, which we named Oracle. Both variants contain a PDZ domain at the amino-terminus and three LIM domains at the carboxy-terminus. Highest homology of Oracle was found with the human and rat enigma proteins in the PDZ domain (62 and 61%, respectively) and in the LIM domains (60 and 69%, respectively). By Northern hybridization analysis, we showed that expression is highest in adult mouse heart, low in skeletal muscle and undetectable in other adult mouse tissues. In situ hybridization in mouse embryos confirmed and extended these data by showing high expression of Oracle mRNA in atrial and ventricular myocardial cells from E8.5. From E9.5 low expression of Oracle mRNA was detectable in myotomes. These data suggest a role for Oracle in the early development and function of heart and skeletal muscle.
NYC Reservoirs Watershed Areas (HUC 12)
This NYC Reservoirs Watershed Areas (HUC 12) GIS layer was derived from the 12-Digit National Watershed Boundary Database (WBD) at 1:24,000 for EPA Region 2 and Surrounding States. HUC 12 polygons were selected from the source based on interactively comparing these HUC 12s in our GIS with images of the New York City's Water Supply System Map found at http://www.nyc.gov/html/dep/html/drinking_water/wsmaps_wide.shtml. The 12 digit Hydrologic Units (HUCs) for EPA Region 2 and surrounding states (Northeastern states, parts of the Great Lakes, Puerto Rico and the USVI) are a subset of the National Watershed Boundary Database (WBD), downloaded from the Natural Resources Conservation Service (NRCS) Geospatial Gateway and imported into the EPA Region 2 Oracle/SDE database. This layer reflects 2009 updates to the WBD that included new boundary data for New York and New Jersey.
LHCb experience with LFC replication
NASA Astrophysics Data System (ADS)
Bonifazi, F.; Carbone, A.; Perez, E. D.; D'Apice, A.; dell'Agnello, L.; Duellmann, D.; Girone, M.; Re, G. L.; Martelli, B.; Peco, G.; Ricci, P. P.; Sapunenko, V.; Vagnoni, V.; Vitlacil, D.
2008-07-01
Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements.
Providing R-Tree Support for Mongodb
NASA Astrophysics Data System (ADS)
Xiang, Longgang; Shao, Xiaotian; Wang, Dehao
2016-06-01
Supporting large amounts of spatial data is a significant characteristic of modern databases. However, unlike mature relational databases such as Oracle and PostgreSQL, most of the current burgeoning NoSQL databases are not well designed for storing geospatial data, which is becoming increasingly important in various fields. In this paper, we propose a novel method to provide an R-tree index, as well as corresponding spatial range query and nearest neighbour query functions, for MongoDB, one of the most prevalent NoSQL databases. First, after an in-depth analysis of MongoDB's features, we devise an efficient tabular document structure which flattens the R-tree index into MongoDB collections. We then work out the corresponding mechanisms for R-tree operations and discuss in detail how to integrate the R-tree into MongoDB. Finally, we present experimental results which show that our proposed method outperforms the built-in spatial index of MongoDB. Our research will greatly facilitate big data management issues with MongoDB in a variety of geospatial information applications.
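The paper's exact document layout is not given in the abstract, so the following pymongo sketch only illustrates the general flattening idea: each R-tree node becomes one MongoDB document holding its bounding box and child references, and a range query walks the tree one document fetch at a time. All field names and the toy geometry ids are assumptions, and a local MongoDB server is assumed to be running.

```python
# Hypothetical sketch of flattening R-tree nodes into a MongoDB collection.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
nodes = client["gis"]["rtree_nodes"]

# Each document is one R-tree node: an MBR [xmin, ymin, xmax, ymax] plus
# child node ids (inner node) or geometry ids (leaf node).
nodes.insert_many([
    {"_id": 1, "leaf": False, "mbr": [0, 0, 10, 10], "children": [2, 3]},
    {"_id": 2, "leaf": True,  "mbr": [0, 0, 5, 5],   "entries": ["geomA"]},
    {"_id": 3, "leaf": True,  "mbr": [5, 5, 10, 10], "entries": ["geomB"]},
])

def intersects(a, b):
    # Axis-aligned rectangle overlap test.
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def range_query(node_id, window):
    """Descend the flattened tree, fetching one node document per step."""
    node = nodes.find_one({"_id": node_id})
    if not intersects(node["mbr"], window):
        return []
    if node["leaf"]:
        # Candidates only: a real index would store per-entry MBRs and
        # refine against the actual geometries.
        return list(node["entries"])
    hits = []
    for child in node["children"]:
        hits.extend(range_query(child, window))
    return hits

print(range_query(1, [4, 4, 6, 6]))  # -> ['geomA', 'geomB']
```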
Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Management
NASA Technical Reports Server (NTRS)
Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri
2006-01-01
NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.
Development and validation of the ORACLE score to predict risk of osteoporosis.
Richy, Florent; Deceulaer, Fréderic; Ethgen, Olivier; Bruyère, Olivier; Reginster, Jean-Yves
2004-11-01
To develop and validate a composite index, the Osteoporosis Risk Assessment by Composite Linear Estimate (ORACLE), that includes risk factors and ultrasonometric outcomes to screen for osteoporosis. Two cohorts of postmenopausal women aged 45 years and older participated in the development (n = 407) and the validation (n = 202) of ORACLE. Their bone mineral density was determined by dual energy x-ray absorptiometry and quantitative ultrasonometry (QUS), and their historical and clinical risk factors were assessed (January to June 2003). Logistic regression analysis was used to select significant predictors of bone mineral density, whereas receiver operating characteristic (ROC) analysis was used to assess the discriminatory performance of ORACLE. The final logistic regression model retained 4 biometric or historical variables and 1 ultrasonometric outcome. The ROC areas under the curves (AUCs) for ORACLE were 84% for the prediction of osteoporosis and 78% for low bone mass. A sensitivity of 90% corresponded to a specificity of 50% for identification of women at risk of developing osteoporosis. The corresponding positive and negative predictive values were 86% and 54%, respectively, in the development cohort. In the validation cohort, the AUCs for identification of osteoporosis and low bone mass were 81% and 76% for ORACLE, 69% and 64% for QUS T score, 71% and 68% for QUS ultrasonometric bone profile index, and 76% and 75% for Osteoporosis Self-assessment Tool, respectively. ORACLE had the best discriminatory performance in identifying osteoporosis compared with the other approaches (P < .05). ORACLE exhibited the highest discriminatory properties compared with ultrasonography alone or other previously validated risk indices. It may be helpful to enhance the predictive value of QUS.
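As a purely illustrative sketch of the ORACLE construction recipe (not the published model or its coefficients), the following Python code fits a logistic regression over a few synthetic clinical variables plus a QUS-like measure and evaluates the resulting composite index by ROC AUC; all data below are simulated.

```python
# Illustrative only: a composite risk index built in the ORACLE style --
# logistic regression over clinical variables plus one ultrasound measure,
# assessed by ROC AUC. Synthetic data, not the ORACLE cohorts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
age = rng.normal(65, 8, n)
weight = rng.normal(65, 10, n)
qus_t = rng.normal(-1.0, 1.2, n)               # QUS T-score stand-in
# Synthetic "osteoporosis" label loosely driven by the predictors.
logit = 0.06 * (age - 65) - 0.04 * (weight - 65) - 0.9 * qus_t - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, weight, qus_t])
model = LogisticRegression().fit(X, y)
score = model.predict_proba(X)[:, 1]           # the composite index
print("in-sample AUC:", roc_auc_score(y, score))
```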
Sloane, Elliot; Rosow, Eric; Adam, Joe; Shine, Dave
2005-01-01
The Clinical Engineering (a.k.a. Biomedical Engineering) Department has heretofore lagged in the adoption of some of the leading-edge information system tools used in other industries. The present application is part of a DOD-funded SBIR grant to improve the overall management of medical technology, and this paper describes the capabilities that Strategic Graphical Dashboards (SGDs) can afford. The SGD is built on top of an Oracle database, and uses custom-written graphic objects such as gauges, fuel tanks, and Geographic Information System (GIS) maps to improve and accelerate decision making.
Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES
NASA Technical Reports Server (NTRS)
Hoerger, J.
1984-01-01
Users of ADABAS, a relational-like data base management system with its data base programming language NATURAL, are acquiring microcomputers with hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" data base on "their own" microcomputer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these microcomputers must be integrated with the centralized DBMS. An easy-to-use and flexible means for transferring logical data base files between the central data base machine and microcomputers must be provided. Some of the problems encountered in an effort to accomplish this integration, and possible solutions, are discussed.
ATLAS EventIndex general dataflow and monitoring infrastructure
NASA Astrophysics Data System (ADS)
Fernández Casaní, Á.; Barberis, D.; Favareto, A.; García Montoro, C.; González de la Hoz, S.; Hřivnáč, J.; Prokoshin, F.; Salt, J.; Sánchez, J.; Többicke, R.; Yuan, R.; ATLAS Collaboration
2017-10-01
The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing them in a central Hadoop infrastructure at CERN. A subset of this information is copied to an Oracle relational database for fast dataset discovery, event-picking, crosschecks with other ATLAS systems and checks for event duplication. The system design and its optimization is serving event picking from requests of a few events up to scales of tens of thousand of events, and in addition, data consistency checks are performed for large production campaigns. Detecting duplicate events with a scope of physics collections has recently arisen as an important use case. This paper describes the general architecture of the project and the data flow and operation issues, which are addressed by recent developments to improve the throughput of the overall system. In this direction, the data collection system is reducing the usage of the messaging infrastructure to overcome the performance shortcomings detected during production peaks; an object storage approach is instead used to convey the event index information, and messages to signal their location and status. Recent changes in the Producer/Consumer architecture are also presented in detail, as well as the monitoring infrastructure.
MitBASE : a comprehensive and integrated mitochondrial DNA database. The present status
Attimonelli, M.; Altamura, N.; Benne, R.; Brennicke, A.; Cooper, J. M.; D’Elia, D.; Montalvo, A. de; Pinto, B. de; De Robertis, M.; Golik, P.; Knoop, V.; Lanave, C.; Lazowska, J.; Licciulli, F.; Malladi, B. S.; Memeo, F.; Monnerot, M.; Pasimeni, R.; Pilbout, S.; Schapira, A. H. V.; Sloof, P.; Saccone, C.
2000-01-01
MitBASE is an integrated and comprehensive database of mitochondrial DNA data which collects, under a single interface, databases for Plant, Vertebrate, Invertebrate, Human, Protist and Fungal mtDNA and a pilot database on nuclear genes involved in mitochondrial biogenesis in Saccharomyces cerevisiae. MitBASE reports all available information from different organisms and from intraspecies variants and mutants. Data have been drawn from the primary databases and from the literature; value-adding information has been structured, e.g., editing information on protist mtDNA genomes, pathological information for human mtDNA variants, etc. The different databases, some of which are structured using commercial packages (Microsoft Access, FileMaker Pro) while others use a flat-file format, have been integrated under ORACLE. Ad hoc retrieval systems have been devised for some of the above-listed databases, taking into account their peculiarities. The database is resident at the EBI and is available at the following site: http://www3.ebi.ac.uk/Research/Mitbase/mitbase.pl . The project is intended to have an impact on both basic and applied research. The study of mitochondrial genetic diseases and mitochondrial DNA intraspecies diversity are key topics in several biotechnological fields. The database has been funded within the EU Biotechnology programme. PMID:10592207
Tomcat, Oracle & XML Web Archive
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cothren, D. C.
2008-01-01
The TOX (Tomcat Oracle & XML) web archive is a foundation for development of HTTP-based applications using Tomcat (or some other servlet container) and an Oracle RDBMS. Use of TOX requires coding primarily in PL/SQL, JavaScript, and XSLT, but also in HTML, CSS and potentially Java. Coded in Java and PL/SQL itself, TOX provides the foundation for more complex applications to be built.
The mass remote sensing image data management based on Oracle InterMedia
NASA Astrophysics Data System (ADS)
Zhao, Xi'an; Shi, Shaowei
2013-07-01
With the development of remote sensing technology, ever more image data are being acquired, and how to use and manage these mass image data safely and efficiently has become an urgent problem. Considering the methods and characteristics of mass remote sensing image data management and application, this paper puts forward a new method that uses the Oracle Call Interface and Oracle InterMedia to store the image data, and then builds the system's functional modules on this component. Finally, it successfully uses VC (Visual C++) together with the Oracle InterMedia component to implement image data storage and management.
Validation of a fully automated HER2 staining kit in breast cancer.
Moelans, Cathy B; Kibbelaar, Robby E; van den Heuvel, Marius C; Castigliego, Domenico; de Weger, Roel A; van Diest, Paul J
2010-01-01
Testing for HER2 amplification and/or overexpression is currently routine practice to guide Herceptin therapy in invasive breast cancer. At present, HER2 status is most commonly assessed by immunohistochemistry (IHC). Standardization of HER2 IHC assays is of utmost clinical and economical importance. At present, HER2 IHC is most commonly performed with the HercepTest, which contains a polyclonal antibody and applies a manual staining procedure. Analytical variability in HER2 IHC testing could be diminished by a fully automatic staining system with a monoclonal antibody. 219 invasive breast cancers were stained fully automatically with the monoclonal antibody-based Oracle HER2 Bond IHC kit and manually with the HercepTest. All cases were tested for amplification with chromogenic in situ hybridization (CISH). HercepTest yielded an overall sharper membrane staining, with less cytoplasmic and stromal background than Oracle, in 17% of cases. Overall concordance between the two IHC techniques was 89% (195/219) with a kappa value of 0.776 (95% CI 0.698-0.854), indicating substantial agreement. Most (22/24) discrepancies between HercepTest and Oracle showed a weaker staining for Oracle. Thirteen of the 24 discrepant cases were high-level HER2 amplified by CISH, and in 12 of these HercepTest IHC better reflected gene amplification status. All 13 HER2-amplified discrepant cases were at least 2+ by HercepTest, while 10/13 were at least 2+ for Oracle. Considering CISH as the gold standard, the sensitivity of HercepTest and Oracle was 91% and 83%, and specificity was 94% and 98%, respectively. Positive and negative predictive values were 90% and 95% for HercepTest, and 96% and 91% for Oracle, respectively. Fully automated HER2 staining with the monoclonal antibody in the Oracle kit shows a high level of agreement with manual staining by the polyclonal antibody in the HercepTest. Although Oracle shows in general somewhat more cytoplasmic staining and may be slightly less sensitive in picking up HER2-amplified cases, it shows a higher specificity and may be considered an alternative method to evaluate HER2 expression in breast cancer with potentially less analytical variability.
A comparison of database systems for XML-type data.
Risse, Judith E; Leunissen, Jack A M
2010-01-01
In the field of bioinformatics, interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium-sized datasets; however, when the full Medline dataset is queried, a large variation in query times is observed. There is not one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's LingPipe is a more lightweight, customizable and sufficiently fast solution. It does, however, require more initial configuration steps. For data with a changing XML structure, Sedna and BaseX as native XML database systems, or MySQL with an XML-type column, are suitable.
Evolution of the use of relational and NoSQL databases in the ATLAS experiment
NASA Astrophysics Data System (ADS)
Barberis, D.
2016-09-01
The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, and run and event metadata. The rapid development of "NoSQL" databases (structured storage services) in the last five years allowed an extended and complementary usage of traditional relational databases and new structured storage tools, in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by modern storage systems. The trend is towards using the best tool for each kind of data: separating, for example, the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of the data storage infrastructure. This paper describes this technology evolution in the ATLAS database infrastructure and presents a few examples of large database applications that benefit from it.
Efficient data management in a large-scale epidemiology research project.
Meyer, Jens; Ostrzinski, Stefan; Fredrich, Daniel; Havemann, Christoph; Krafczyk, Janina; Hoffmann, Wolfgang
2012-09-01
This article describes the concept of a "Central Data Management" (CDM) and its implementation within the large-scale population-based medical research project "Personalized Medicine". The CDM can be summarized as a conjunction of data capturing, data integration, data storage, data refinement, and data transfer. A wide spectrum of reliable "Extract Transform Load" (ETL) software for automatic integration of data, as well as "electronic Case Report Forms" (eCRFs), was developed in order to integrate decentralized and heterogeneously captured data. Due to the high sensitivity of the captured data, high system resource availability, data privacy, data security and quality assurance are of utmost importance. A complex data model was developed and implemented using an Oracle database in high-availability cluster mode in order to integrate different types of participant-related data. Intelligent data capturing and storage mechanisms improve the quality of the data. Data privacy is ensured by a multi-layered role/right system for access control and by de-identification of identifying data. A well-defined backup process prevents data loss. Over a period of one and a half years, the CDM has captured a wide variety of data in the magnitude of approximately 5 terabytes without experiencing any critical incidents of system breakdown or loss of data. The aim of this article is to demonstrate one possible way of establishing a Central Data Management in large-scale medical and epidemiological studies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
An Oracle-based event index for ATLAS
NASA Astrophysics Data System (ADS)
Gallas, E. J.; Dimitrov, G.; Vasileva, P.; Baranowski, Z.; Canali, L.; Dumitru, A.; Formica, A.; ATLAS Collaboration
2017-10-01
The ATLAS EventIndex system has amassed a set of key quantities for a large number of ATLAS events into a Hadoop-based infrastructure for the purpose of providing the experiment with a number of event-wise services. Collecting this data in one place provides the opportunity to investigate various storage formats and technologies, assess which best serve the various use cases, and consider what other benefits alternative storage systems provide. In this presentation we describe how the data are imported into an Oracle RDBMS (relational database management system), the services we have built based on this architecture, and our experience with it. We have indexed about 26 billion real data events thus far and have designed the system to accommodate future data with expected rates of 5 and 20 billion events per year. We have found this system offers outstanding performance for some fundamental use cases. In addition, profiting from the co-location of this data with other complementary metadata in ATLAS, the system has been easily extended to perform essential assessments of data integrity and completeness and to identify event duplication, including at what step in processing the duplication occurred.
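A minimal sketch of what an event-picking lookup against such an Oracle-based index might look like from Python is shown below. The table name, column names, bind values, and connect string are all invented for illustration (the abstract does not give the actual schema), and the cx_Oracle driver plus Oracle client libraries are assumed to be installed.

```python
# Hypothetical event-picking lookup against an Oracle-based event index.
import cx_Oracle  # requires Oracle client libraries

conn = cx_Oracle.connect("user/password@db-host/service")  # placeholder DSN
cur = conn.cursor()
cur.execute(
    """SELECT guid, file_offset
         FROM event_index
        WHERE run_number = :run AND event_number = :evt""",  # invented schema
    run=358031, evt=1234567,
)
for guid, offset in cur:
    print(guid, offset)
```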
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paff, S. W; Doody, S.
2003-02-25
This paper discusses the challenges associated with creating a data management system for waste tracking at the Advanced Mixed Waste Treatment Plant (AMWTP) at the Idaho National Engineering and Environmental Laboratory (INEEL). The waste tracking system combines data from plant automation systems and decision points. The primary purpose of the system is to provide information to enable the plant operators and engineers to assess the risks associated with each container and determine the best method of treating it. It is also used to track the transuranic (TRU) waste containers as they move throughout the various processes at the plant. Finally, the goal of the system is to support paperless shipments of the waste to the Waste Isolation Pilot Plant (WIPP). This paper describes the approach, methodologies, the underlying design of the database, and the challenges of creating the Data Management System (DMS) prior to completion of design and construction of a major plant. The system was built utilizing an Oracle database platform and Oracle Forms 6i in client-server mode. The underlying data architecture is container-centric, with separate tables and objects for each type of analysis used to characterize the waste, including real-time radiography (RTR), non-destructive assay (NDA), head-space gas sampling and analysis (HSGS), visual examination (VE) and coring. The use of separate tables facilitated the construction of automatic interfaces with the analysis instruments that enabled direct data capture. Movements are tracked using a location system describing each waste container's current location and a history table tracking the container's movement history. The movement system is designed to interface both with radio-frequency bar-code devices and with the plant's integrated control system (ICS). Collections of containers or information, such as batches, were created across the various types of analyses, which enabled a single, cohesive approach to be developed for verification and validation activities. The DMS includes general system functions, including task lists, electronic signature, non-conformance reports and message systems, that cut vertically across the remaining subsystems. Oracle's security features were utilized to ensure that only authorized users are allowed to log in, and to restrict access to system functionality according to user role.
Schema for the LANL infrasound analysis tool, infrapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dannemann, Fransiska Kate; Marcillo, Omar Eduardo
2017-04-14
The purpose of this document is to define the schema used for the operation of the infrasound analysis tool, infrapy. The tables described by this document extend the CSS3.0 or KB core schema to include information required for the operation of infrapy. This document is divided into three sections, the first being this introduction. Section two defines eight new, infrasonic data processing-specific database tables. Both internal (ORACLE) and external formats for the attributes are defined, along with a short description of each attribute. Section three of the document shows the relationships between the different tables by using entity-relationship diagrams.
A cognitive network for oracle bone characters related to animals
NASA Astrophysics Data System (ADS)
Dress, Andreas; Grünewald, Stefan; Zeng, Zhenbing
2016-01-01
In this paper, we present an analysis of oracle bone characters for animals from a "cognitive" point of view. After some general remarks on oracle-bone characters presented in Sec. 1 and a short outline of the paper in Sec. 2, we collect various oracle-bone characters for animals from published resources in Sec. 3. In the next section, we begin analyzing a group of 60 ancient animal characters from www.zdic.net, a highly acclaimed internet dictionary of Chinese characters that is strictly based on historical sources, and introduce five categories of specific features regarding their (graphical) structure that will be used in Sec. 5 to associate corresponding feature vectors to these characters. In Sec. 6, these feature vectors will be used to investigate their dissimilarity in terms of a family of parameterized distance measures. And in the last section, we apply the SplitsTree method as encoded in the NeighborNet algorithms to construct a corresponding family of dissimilarity-based networks, with the intention of elucidating how the ancient Chinese might have perceived the "animal world" in the late bronze age and to demonstrate that these pictographs reflect an intuitive understanding of this world and its inherent structure that predates its classification in the oldest surviving Chinese encyclopedia from approximately the third century BC, the Er Ya, as well as similar classification systems in the West, by one to two millennia. We also present an English dictionary of 70 oracle bone characters for animals in Appendix A. In Appendix B, we list various variants of animal characters that were published in the Jia Gu Wen Bian (cf. 甲骨文编, A Complete Collection of Oracle Bone Characters, edited by the Institute of Archaeology of the Chinese Academy of Social Sciences, published by the Zhonghua Book Company in 1965). We recall the frequencies of the 521 most frequent oracle bone characters in Appendix C, as reported in [T. Chen, Yin-Shang Jiaguwen Zixing Xitong Zai Yanjiu (The Structural System of Oracle Inscriptions) (Shanghai Renmin Chubanshe, Shanghai, 2010); Jiaguwen Shiwen Yongzi Pinlü Biao (A Frequency List of Oracle Characters), Center for the Study and Application of Chinese Characters (East China Normal University, Shanghai, 2010), http://www.wenzi.cn/en/default.aspx]. And in Appendix D, we list the animals registered in the last five chapters of the Er Ya.
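The following sketch illustrates, with invented values, how feature vectors over five structural categories can induce a family of parameterized dissimilarity matrices of the kind fed to NeighborNet. The characters, feature scores, and weights below are placeholders, not the paper's data; the weight vector is the parameter that selects one member of the distance family.

```python
# Illustrative only: feature vectors and a weighted-L1 dissimilarity family.
import numpy as np

characters = ["horse", "tiger", "deer", "bird", "fish"]
# Rows: hypothetical scores for 5 structural feature categories
# (e.g. head emphasis, legs shown, tail shown, orientation, patterning).
F = np.array([
    [1, 4, 1, 1, 0],   # horse
    [1, 4, 1, 1, 2],   # tiger
    [2, 4, 1, 1, 0],   # deer
    [1, 2, 1, 0, 1],   # bird
    [0, 0, 1, 0, 2],   # fish
], dtype=float)

def dissimilarity_matrix(F, w):
    """Weighted L1 distances; the weights w parameterize the family."""
    diffs = np.abs(F[:, None, :] - F[None, :, :])
    return (diffs * w).sum(axis=2)

w = np.array([1.0, 1.0, 0.5, 2.0, 1.0])   # one member of the family
D = dissimilarity_matrix(F, w)            # matrix for SplitsTree/NeighborNet
for name, row in zip(characters, D):
    print(name, np.round(row, 1))
```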
Design and development of a web-based application for diabetes patient data management.
Deo, S S; Deobagkar, D N; Deobagkar, Deepti D
2005-01-01
A web-based database management system developed for collecting, managing and analysing information of diabetes patients is described here. It is a searchable, client-server, relational database application, developed on the Windows platform using Oracle, Active Server Pages (ASP), Visual Basic Script (VB Script) and Java Script. The software is menu-driven and allows authorized healthcare providers to access, enter, update and analyse patient information. Graphical representation of data can be generated by the system using bar charts and pie charts. An interactive web interface allows users to query the database and generate reports. Alpha- and beta-testing of the system was carried out and the system at present holds records of 500 diabetes patients and is found useful in diagnosis and treatment. In addition to providing patient data on a continuous basis in a simple format, the system is used in population and comparative analysis. It has proved to be of significant advantage to the healthcare provider as compared to the paper-based system.
Conwell, J L; Creek, K L; Pozzi, A R; Whyte, H M
2001-02-01
The Industrial Hygiene and Safety Group at Los Alamos National Laboratory (LANL) developed a database application known as IH DataView, which manages industrial hygiene monitoring data. IH DataView replaces a LANL legacy system, IHSD, which restricted users to a single point of data entry, needed enhancements to support new operational requirements, and was not Year 2000 (Y2K) compliant. IH DataView features a comprehensive suite of data collection and tracking capabilities. Through the use of Oracle database management and application development tools, the system is Y2K compliant and Web-enabled for easy deployment and user access via the Internet. System accessibility is particularly important because LANL operations are spread over 43 square miles, and industrial hygienists (IHs) located across the laboratory will use the system. IH DataView shows promise of being useful in the future because it eliminates these problems. It has a flexible architecture and a sophisticated capability to collect, track, and analyze data in easy-to-use form.
Global Combat Support System-Marine Corps Proof-of-Concept for Dashboard Analytics
2014-12-01
The core is modern, commercial-off-the-shelf enterprise resource planning (ERP) software (Oracle 11i e-Business Suite). GCSS-MC's design is focused...factor in the decision to implement this new software. GCSS-MC is the technology centerpiece of the Logistics Modernization (LogMod) Program...GCSS-MC is based on the implementation of Oracle e-Business Suite 11i as the core software package. This is the same infrastructure that Oracle
CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.
Wang, Lan; Kim, Yongdai; Li, Runze
2013-10-01
We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.
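As a schematic illustration of the CCCP/LLA idea (not the paper's calibrated algorithm or its tuning rule), the sketch below linearizes the concave part of the SCAD penalty at the current iterate and solves each resulting weighted lasso by proximal gradient; all data are simulated and the tuning constants are illustrative assumptions.

```python
# Schematic CCCP-style sketch for SCAD-penalized least squares: each outer
# step replaces the concave penalty by its linear majorant (a weighted
# lasso), solved here by proximal gradient (ISTA).
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """SCAD penalty derivative, used as per-coefficient lasso weights."""
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

def weighted_lasso_ista(X, y, w, beta0, n_iter=500):
    """Minimize (1/2n)||y - X b||^2 + sum_j w_j |b_j| by ISTA."""
    n = len(y)
    L = np.linalg.eigvalsh(X.T @ X / n).max()    # Lipschitz constant
    beta = beta0.copy()
    for _ in range(n_iter):
        z = beta - (X.T @ (X @ beta - y) / n) / L
        beta = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)
    return beta

def scad_cccp(X, y, lam, outer=5):
    beta = np.zeros(X.shape[1])                  # first step = plain lasso
    for _ in range(outer):
        w = scad_deriv(beta, lam)                # linearize concave part
        beta = weighted_lasso_ista(X, y, w, beta)
    return beta

rng = np.random.default_rng(1)
n, p, s = 100, 200, 5                            # p >> n, sparse truth
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:s] = 2.0
y = X @ beta_true + 0.5 * rng.normal(size=n)
beta_hat = scad_cccp(X, y, lam=np.sqrt(2 * np.log(p) / n))
print("recovered support:", np.flatnonzero(np.abs(beta_hat) > 1e-3))
```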
Morris, Chris; Pajon, Anne; Griffiths, Susanne L.; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M.; Wilter da Silva, Alan; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S.; Stuart, David I.; Henrick, Kim; Esnouf, Robert M.
2011-01-01
The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service. PMID:21460443
State of the art techniques for preservation and reuse of hard copy electrocardiograms.
Lobodzinski, Suave M; Teppner, Ulrich; Laks, Michael
2003-01-01
Baseline examinations and periodic reexaminations in longitudinal population studies, together with ongoing surveillance for morbidity and mortality, provide unique opportunities for seeking ways to enhance the value of electrocardiography (ECG) as an inexpensive and noninvasive tool for prognosis and diagnosis. We used a newly developed optical ECG waveform recognition (OEWR) technique capable of extracting raw waveform data from legacy hard copy ECG recordings. Hard copy ECG recordings were scanned and processed by the OEWR algorithm. The extracted ECG datasets were formatted into a newly proposed, vendor-neutral ECG XML data format. An Oracle database was used as a repository for the ECG records in XML format. The proposed technique for XML encapsulation of OEWR-processed hard copy records resulted in an efficient method for inclusion of paper ECG records into research databases, thus providing for their preservation, reuse and accession.
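A toy sketch of the final encapsulation step is shown below: wrapping extracted waveform samples in a simple XML envelope before database storage. The element and attribute names are invented; the vendor-neutral format actually proposed in the paper is not reproduced here.

```python
# Hypothetical XML envelope for extracted ECG samples (names invented).
import xml.etree.ElementTree as ET

def ecg_to_xml(patient_id, lead, fs_hz, samples_mv):
    root = ET.Element("ecg", attrib={"patient": patient_id})
    ch = ET.SubElement(root, "lead", attrib={"name": lead, "fs": str(fs_hz)})
    # Samples stored as a whitespace-separated list of millivolt values.
    ch.text = " ".join(f"{v:.3f}" for v in samples_mv)
    return ET.tostring(root, encoding="unicode")

print(ecg_to_xml("anon-001", "II", 500, [0.00, 0.05, 0.12, 0.40, 0.10]))
```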
Over 20 years of reaction access systems from MDL: a novel reaction substructure search algorithm.
Chen, Lingran; Nourse, James G; Christie, Bradley D; Leland, Burton A; Grier, David L
2002-01-01
From REACCS, to MDL ISIS/Host Reaction Gateway, and most recently to MDL Relational Chemistry Server, a new product based on Oracle data cartridge technology, MDL's reaction database management and retrieval systems have undergone great changes. The evolution of the system architecture is briefly discussed. The evolution of MDL reaction substructure search (RSS) algorithms is detailed. This article mainly describes a novel RSS algorithm. This algorithm is based on a depth-first search approach and is able to fully and prospectively use reaction specific information, such as reacting center and atom-atom mapping (AAM) information. The new algorithm has been used in the recently released MDL Relational Chemistry Server and allows the user to precisely find reaction instances in databases while minimizing unrelated hits. Finally, the existing and new RSS algorithms are compared with several examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theodore Larrieu, Christopher Slominski, Michele Joyce
2011-03-01
With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on-the-fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from the original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
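The "introspective" schema idea, in which new element types and properties arrive as rows rather than as table alterations, can be sketched with a generic entity-attribute-value layout. The snippet below uses SQLite and an invented magnet name purely for illustration; it is not the CED schema and omits the Oracle Workspace Manager versioning the CED relies on.

```python
# Generic entity-attribute-value sketch of an introspective schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE element  (id INTEGER PRIMARY KEY, name TEXT, type TEXT);
CREATE TABLE property (element_id INTEGER, prop TEXT, value TEXT);
""")
db.execute("INSERT INTO element VALUES (1, 'MQA1M01', 'Quadrupole')")
# Adding a brand-new property needs no ALTER TABLE -- just another row.
db.execute("INSERT INTO property VALUES (1, 'Length', '0.3')")
db.execute("INSERT INTO property VALUES (1, 'EPICSName', 'MQA1M01')")
for row in db.execute(
        "SELECT e.name, p.prop, p.value FROM element e "
        "JOIN property p ON p.element_id = e.id"):
    print(row)
```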
AirMSPI ORACLES Cloud Droplet Data V001
Atmospheric Science Data Center
2018-05-05
AirMSPI_ORACLES_Cloud_Droplet_Size_and_Cloud_Optical_Depth L2 Derived Geophysical Parameters. Available via Earthdata Search. Parameters: Cloud Optical Depth, Cloud Droplet Effective Radius, Cloud Droplet ...
Web application for detailed real-time database transaction monitoring for CMS condition data
NASA Astrophysics Data System (ADS)
de Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio
2012-12-01
In the upcoming LHC era, databases have become an essential part of the experiments collecting data from the LHC, in order to safely store, and consistently retrieve, the large amount of data produced by different sources. In the CMS experiment at CERN, all this information is stored in ORACLE databases hosted on several servers, both inside and outside the CERN network. In this scenario, the task of monitoring different databases is a crucial database administration issue, since different information may be required depending on different users' tasks such as data transfer, inspection, planning and security issues. We present here a web application based on a Python web framework and Python modules for data mining. To customize the GUI we record traces of user interactions that are used to build use-case models. In addition, the application detects errors in database transactions (for example, identifying mistakes made by users, application failures, unexpected network shutdowns or Structured Query Language (SQL) statement errors) and provides warning messages from the different users' perspectives. Finally, in order to fulfill the requirements of the CMS experiment community, and to keep up with new developments in Web client tools, our application was further developed and new features were deployed.
WebDB Component Builder - Lessons Learned
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macedo, C.
2000-02-15
Oracle WebDB is the easiest way to produce web-enabled, lightweight, enterprise-centric applications. This concept from Oracle has tantalized our taste for simple web development by using a purely web-based tool that lives nowhere else but in the database. The online wizards, templates, and query builders, which produce PL/SQL behind the curtains, can be used straight "out of the box" by both novice and seasoned developers. This presentation will introduce lessons learned by developing and deploying applications built using the WebDB Component Builder in conjunction with custom PL/SQL code to empower a hybrid application. There are two kinds of WebDB components: those that display data to end users via reporting, and those that let end users update data in the database via entry forms. The presentation will also discuss various methods within the Component Builder to enhance the applications pushed to the desktop. The demonstrated example is an application entitled HOME (Helping Others More Effectively) that was built to manage a yearly United Way Campaign effort. Our task was to build an end-to-end application which could manage approximately 900 non-profit agencies, an average of 4,100 individual contributions, and $1.2 million. Using WebDB, the shell of the application was put together in a matter of a few weeks. However, we did encounter some hurdles that WebDB, in its stage of infancy (v2.0), could not solve for us directly. Together with custom PL/SQL, WebDB's Component Builder became a powerful tool that enabled us to produce a very flexible hybrid application.
O'Grady, Anthony; Allen, David; Happerfield, Lisa; Johnson, Nicola; Provenzano, Elena; Pinder, Sarah E; Tee, Lilian; Gu, Mai; Kay, Elaine W
2010-12-01
Immunohistochemistry (IHC) is used as the frontline assay to determine HER2 status in invasive breast cancer patients. The aim of the study was to compare the performance of the Leica Oracle HER2 Bond IHC System (Oracle) with the currently most widely accepted Dako HercepTest (HercepTest), using both commercially validated and modified ASCO/CAP and UK HER2 IHC scoring guidelines. A total of 445 breast cancer samples from 3 international clinical HER2 referral centers were stained with the 2 test systems and scored in a blinded fashion by experienced pathologists. The overall agreement between the 2 tests in a 3×3 (negative, equivocal and positive) analysis shows a concordance of 86.7% and 86.3%, respectively, when analyzed using commercially validated and modified ASCO/CAP and UK HER2 IHC scoring guidelines. There is good concordance between the Oracle and the HercepTest. The advantages of a complete, fully automated test such as the Oracle include standardization of key analytical factors and improved turnaround time. The implementation of the modified ASCO/CAP and UK HER2 IHC scoring guidelines has minimal effect on the interpretation of either assay, showing that Oracle can be used as a methodology for accurately determining HER2 IHC status in formalin-fixed, paraffin-embedded breast cancer tissue.
Neural substrates of embodied natural beauty and social endowed beauty: An fMRI study.
Zhang, Wei; He, Xianyou; Lai, Siyan; Wan, Juan; Lai, Shuxian; Zhao, Xueru; Li, Darong
2017-08-02
What are the neural mechanisms underlying beauty based on objective parameters and beauty based on subjective social construction? This study scanned participants with fMRI while they performed aesthetic judgments on concrete pictographs and abstract oracle bone scripts. Behavioral results showed that both pictographs and oracle bone scripts were judged to be more beautiful when they referred to beautiful objects and positive social meanings, respectively. Imaging results revealed that regions associated with perceptual, cognitive, emotional and reward processing were commonly activated in beautiful judgments of both pictographs and oracle bone scripts. Moreover, stronger activations of the orbitofrontal cortex (OFC) and motor-related areas were found in beautiful judgments of pictographs, whereas beautiful judgments of oracle bone scripts were associated with putamen activity, implying that stronger aesthetic experience and embodied approaching for beauty were elicited by the pictographs. In contrast, only visual processing areas were activated in the judgments of ugly pictographs and negative oracle bone scripts. The results provide evidence that the sense of beauty is triggered by two processes: one based on the objective parameters of stimuli (embodied natural beauty) and the other based on subjective social construction (social endowed beauty).
A Completely Blind Video Integrity Oracle.
Mittal, Anish; Saad, Michele A; Bovik, Alan C
2016-01-01
Considerable progress has been made toward developing still-picture perceptual quality analyzers that do not require any reference picture and that are not trained on human opinion scores of distorted images. However, there do not yet exist any such completely blind video quality assessment (VQA) models. Here, we attempt to bridge this gap by developing a new VQA model called the video intrinsic integrity and distortion evaluation oracle (VIIDEO). The new model does not require the use of any additional information other than the video being quality evaluated. VIIDEO embodies models of intrinsic statistical regularities that are observed in natural videos, which are used to quantify disturbances introduced due to distortions. An algorithm derived from the VIIDEO model is thereby able to predict the quality of distorted videos without any external knowledge about the pristine source, anticipated distortions, or human judgments of video quality. Even with such a paucity of information, we are able to show that the VIIDEO algorithm performs much better than the legacy full-reference quality measure MSE on the LIVE VQA database and delivers performance comparable with a leading blind VQA model trained on human judgments. We believe that the VIIDEO algorithm is a significant step toward making real-time monitoring of completely blind video quality possible.
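As a toy illustration of the kind of intrinsic statistical regularity such models exploit (not the VIIDEO algorithm itself), the sketch below computes mean-subtracted, contrast-normalized coefficients of a frame difference; for natural videos such coefficients tend to follow regular, roughly Gaussian-like distributions whose disturbance signals distortion. The frames and the normalization constants here are invented.

```python
# Toy sketch: mean-subtracted, contrast-normalized (MSCN) frame differences.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(frame_diff, sigma=7/6):
    mu = gaussian_filter(frame_diff, sigma)                # local mean
    var = gaussian_filter((frame_diff - mu) ** 2, sigma)   # local variance
    return (frame_diff - mu) / np.sqrt(var + 1.0)          # normalize

rng = np.random.default_rng(0)
f0 = rng.random((64, 64))
f1 = np.clip(f0 + 0.01 * rng.normal(size=f0.shape), 0, 1)  # "next" frame
coeffs = mscn(f1 - f0)
# Shape statistics of the MSCN coefficients summarize the regularity.
kurt = ((coeffs - coeffs.mean()) ** 4).mean() / coeffs.var() ** 2
print("MSCN kurtosis:", kurt)
```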
Kenyon, Sara; Brocklehurst, Peter; Jones, David; Marlow, Neil; Salt, Alison; Taylor, David
2008-04-24
The Medical Research Council (MRC) ORACLE trial evaluated the use of co-amoxiclav 375 mg and/or erythromycin 250 mg in women presenting with preterm rupture of membranes (PROM; ORACLE I) or in spontaneous preterm labour (SPL; ORACLE II), using a factorial design. The results showed that for women with a singleton baby with PROM, the prescription of erythromycin is associated with improvements in short-term neonatal outcomes; although co-amoxiclav is associated with prolongation of pregnancy, a significantly higher rate of neonatal necrotising enterocolitis was found in these babies. Prescription of erythromycin is now established practice for women with PROM. For women with SPL, antibiotics demonstrated no improvements in short-term neonatal outcomes and are not recommended treatment. There is evidence that both these conditions are associated with subclinical infection, so perinatal antibiotic administration may reduce the risk of later disabilities, including cerebral palsy, although the risk may be increased through exposure to inflammatory cytokines; assessment of longer-term functional and educational outcomes is therefore appropriate. The MRC ORACLE Children's Study will follow up, at age 7 years, the UK children born to the 4809 women with PROM and the 4266 women with SPL enrolled in the earlier ORACLE trials. We will use a parental questionnaire including validated tools to assess disability and behaviour. We will collect the frequency of specific medical conditions: cerebral palsy, epilepsy, respiratory illness including asthma, diabetes, admission to hospital in the last year, and other diseases, as reported by parents. National standard test results will be collected to assess educational attainment at Key Stage 1 for children in England. This study is designed to investigate whether or not peripartum antibiotics improve health and disability for children at 7 years of age. The ORACLE Trial and Children Study is registered in the Current Controlled Trials registry: ISRCTN 52995660.
ESTminer: a Web interface for mining EST contig and cluster databases.
Huang, Yecheng; Pumphrey, Janie; Gingle, Alan R
2005-03-01
ESTminer is a Web application and database schema for interactive mining of expressed sequence tag (EST) contig and cluster datasets. The Web interface contains a query frame that allows the selection of contigs/clusters with specific cDNA library makeup or a threshold number of members. The results are displayed as color-coded tree nodes, where the color indicates the fractional size of each cDNA library component. The nodes are expandable, revealing library statistics as well as EST or contig members, with links to sequence data, GenBank records or user configurable links. Also, the interface allows 'queries within queries' where the result set of a query is further filtered by the subsequent query. ESTminer is implemented in Java/JSP and the package, including MySQL and Oracle schema creation scripts, is available from http://cggc.agtec.uga.edu/Data/download.asp (contact: agingle@uga.edu).
National launch strategy vehicle data management system
NASA Technical Reports Server (NTRS)
Cordes, David
1990-01-01
The national launch strategy vehicle data management system (NLS/VDMS) was developed as part of the 1990 NASA Summer Faculty Fellowship Program. The system was developed under the guidance of the Engineering Systems Branch of the Information Systems Office, and is intended for use within the Program Development Branch PD34. The NLS/VDMS is an on-line database system that permits the tracking of various launch vehicle configurations within the program development office. The system is designed to permit the definition of new launch vehicles, as well as the ability to display and edit existing launch vehicles. Vehicles can be grouped in logical architectures within the system. Reports generated from this package include vehicle data sheets, architecture data sheets, and vehicle flight rate reports. The topics covered include: (1) system overview; (2) initial system development; (3) supercard hypermedia authoring system; (4) the ORACLE database; and (5) system evaluation.
XRootD popularity on hadoop clusters
NASA Astrophysics Data System (ADS)
Meoni, Marco; Boccali, Tommaso; Magini, Nicolò; Menichetti, Luca; Giordano, Domenico
2017-10-01
Performance data and metadata of the computing operations at the CMS experiment are collected through a distributed monitoring infrastructure, currently relying on a traditional Oracle database system. This paper shows how to harness Big Data architectures in order to improve the throughput and the efficiency of such monitoring. A large set of operational data - user activities, job submissions, resources, file transfers, site efficiencies, software releases, network traffic, machine logs - is being injected into a readily available Hadoop cluster, via several data streamers. The collected metadata is further organized to support fast arbitrary queries; this offers the ability to test several Map&Reduce-based frameworks and measure the system speed-up when compared to the original database infrastructure. By leveraging a quality Hadoop data store and enabling an analytics framework on top, it is possible to design a mining platform to predict dataset popularity and discover patterns and correlations.
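A minimal sketch of the popularity-mining idea in map/reduce style: aggregate access counts and read volumes per dataset from monitoring records. The field names below are illustrative, not the actual CMS monitoring schema.

```python
from collections import Counter

# Map phase: emit (dataset, bytes read) pairs from each monitoring record.
def map_phase(records):
    for rec in records:
        yield rec["dataset"], rec["read_bytes"]

# Reduce phase: fold the pairs into per-dataset access and volume totals.
def reduce_phase(pairs):
    accesses, volume = Counter(), Counter()
    for dataset, nbytes in pairs:
        accesses[dataset] += 1
        volume[dataset] += nbytes
    return accesses, volume

records = [
    {"dataset": "/Prod/RECO/v1", "read_bytes": 2_000_000},
    {"dataset": "/Prod/RECO/v1", "read_bytes": 500_000},
    {"dataset": "/Test/AOD/v2",  "read_bytes": 100_000},
]
accesses, volume = reduce_phase(map_phase(records))
print(accesses.most_common(1))  # most popular dataset by access count
```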
Tsou, Ann-Ping; Sun, Yi-Ming; Liu, Chia-Lin; Huang, Hsien-Da; Horng, Jorng-Tzong; Tsai, Meng-Feng; Liu, Baw-Juine
2006-07-01
Identification of transcriptional regulatory sites plays an important role in the investigation of gene regulation. For this purpose, we designed and implemented a data warehouse to integrate multiple heterogeneous biological data sources with data types such as text file, XML, image, MySQL database model, and Oracle database model. The utility of the biological data warehouse in predicting transcriptional regulatory sites of coregulated genes was explored using a synexpression group derived from a microarray study. Both binding sites of known transcription factors and predicted over-represented (OR) oligonucleotides were identified for the gene group. The potential biological roles of both the known sites and one OR oligonucleotide were demonstrated using bioassays. The results from the wet-lab experiments therefore reinforce the power and utility of the data warehouse as an approach to the genome-wide search for important transcription regulatory elements that are the key to many complex biological systems.
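The over-represented (OR) oligonucleotide search can be illustrated with a simple frequency-ratio sketch; the warehouse's actual statistics may differ, and the sequences and cutoff below are illustrative only.

```python
from collections import Counter

# Count all k-mers across a set of sequences.
def kmer_counts(seqs, k):
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts

# A k-mer is "over-represented" when its promoter frequency exceeds
# ratio times its background frequency (pseudo-count of 1 for unseen k-mers).
def over_represented(promoters, background, k=6, ratio=1.5):
    fg, bg = kmer_counts(promoters, k), kmer_counts(background, k)
    fg_total, bg_total = sum(fg.values()), sum(bg.values())
    return {m for m, c in fg.items()
            if (c / fg_total) > ratio * (bg.get(m, 1) / bg_total)}

print(over_represented(["ACGTACGTGGGCGG"], ["ATATATATATATAT"], k=4))
```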
Asynchronous data change notification between database server and accelerator controls system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, W.; Morris, J.; Nemesure, S.
2011-10-10
Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMS's which support DCN (such as Oracle and MS SQL server), some server side and/or client side programming may be required to make the DCN system work. This makes the setup of DCN between database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. Asynchronous data change notification (ADCN) between database server and clients can be realized by combining the use of a database trigger mechanism, which is supported by major DBMS systems, with server processes that use client/server software architectures that are familiar in the accelerator controls community (such as EPICS, CDEV or ADO). This approach makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
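A sketch of the trigger-plus-reflection-server pattern the paper describes. The table and trigger names (magnet_settings, dcn_queue) are hypothetical, the cx_Oracle driver is assumed, and the publish callback stands in for the site's SET/GET API (EPICS/CDEV/ADO).

```python
import time
import cx_Oracle  # assumed driver; any DB-API connection works the same way

# Hypothetical trigger: queue every change into a notification table.
DDL = """
CREATE OR REPLACE TRIGGER settings_dcn
AFTER INSERT OR UPDATE ON magnet_settings
FOR EACH ROW
BEGIN
  INSERT INTO dcn_queue (tab, row_id, changed_at)
  VALUES ('magnet_settings', :NEW.id, SYSTIMESTAMP);
END;"""

def serve(conn, publish, period_s=1.0):
    cur = conn.cursor()
    cur.execute(DDL)                       # install the trigger once
    while True:
        cur.execute("SELECT tab, row_id FROM dcn_queue ORDER BY changed_at")
        for tab, row_id in cur.fetchall():
            publish(tab, row_id)           # push change to subscribed clients
        cur.execute("DELETE FROM dcn_queue")
        conn.commit()
        time.sleep(period_s)               # asynchronous w.r.t. DB writers
```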
Building Hierarchical Representations for Oracle Character and Sketch Recognition.
Jun Guo; Changhu Wang; Roman-Rangel, Edgar; Hongyang Chao; Yong Rui
2016-01-01
In this paper, we study oracle character recognition and general sketch recognition. First, a data set of oracle characters, which are the oldest hieroglyphs in China yet remain a part of modern Chinese characters, is collected for analysis. Second, typical visual representations in shape- and sketch-related works are evaluated. We analyze the problems suffered when addressing these representations and determine several representation design criteria. Based on the analysis, we propose a novel hierarchical representation that combines a Gabor-related low-level representation and a sparse-encoder-related mid-level representation. Extensive experiments show the effectiveness of the proposed representation in both oracle character recognition and general sketch recognition. The proposed representation is also complementary to convolutional neural network (CNN)-based models. We introduce a solution to combine the proposed representation with CNN-based models, and achieve better performance than either approach alone. This solution has beaten humans at recognizing general sketches.
Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.
Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan
2011-11-01
When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
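The reported instability is easy to reproduce in a sketch with the (non-oracle) Lasso, which the authors note behaves similarly: a sparse truth with weak signals, and the selected-variable count re-measured over several runs of shuffled 10-fold cross-validation.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

# Sparse regression with weak signals, in the regime the paper discusses.
rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:10] = 0.25                      # sparse truth, weak signals
y = X @ beta + rng.standard_normal(n)

# Repeat 10-fold CV with different fold shuffles: the number of selected
# variables varies noticeably from run to run.
for seed in range(5):
    cv = KFold(n_splits=10, shuffle=True, random_state=seed)
    fit = LassoCV(cv=cv, n_alphas=30).fit(X, y)
    print(seed, int(np.sum(fit.coef_ != 0)))
```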
Use of Graph Database for the Integration of Heterogeneous Biological Data.
Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young
2017-03-01
Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
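The contrast in query styles can be sketched as follows; the table, label, and relationship names are illustrative, not the schema used in the paper.

```python
# Relational style: a path through entities becomes a chain of joins.
SQL_MULTI_JOIN = """
SELECT g.symbol
FROM gene g
JOIN drug_target dt  ON dt.gene_id = g.id
JOIN drug d          ON d.id = dt.drug_id
JOIN drug_disease dd ON dd.drug_id = d.id
JOIN disease z       ON z.id = dd.disease_id
WHERE z.name = 'asthma';
"""

# Graph style (Cypher): the same path is a single pattern match,
# traversed natively without join bookkeeping.
CYPHER_PATTERN = """
MATCH (g:Gene)<-[:TARGETS]-(d:Drug)-[:TREATS]->(z:Disease {name: 'asthma'})
RETURN g.symbol;
"""
```

The Cypher pattern reads the relationship structure directly, which is one reason graph stores avoid the latency the authors observed for deep multi-join SQL queries.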
Use of Graph Database for the Integration of Heterogeneous Biological Data
Yoon, Byoung-Ha; Kim, Seon-Kyu
2017-01-01
Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data. PMID:28416946
MAGIC database and interfaces: an integrated package for gene discovery and expression.
Cordonnier-Pratt, Marie-Michèle; Liang, Chun; Wang, Haiming; Kolychev, Dmitri S; Sun, Feng; Freeman, Robert; Sullivan, Robert; Pratt, Lee H
2004-01-01
The rapidly increasing rate at which biological data is being produced requires a corresponding growth in relational databases and associated tools that can help laboratories contend with that data. With this need in mind, we describe here a Modular Approach to a Genomic, Integrated and Comprehensive (MAGIC) Database. This Oracle 9i database derives from an initial focus in our laboratory on gene discovery via production and analysis of expressed sequence tags (ESTs), and subsequently on gene expression as assessed by both EST clustering and microarrays. The MAGIC Gene Discovery portion of the database focuses on information derived from DNA sequences and on its biological relevance. In addition to MAGIC SEQ-LIMS, which is designed to support activities in the laboratory, it contains several additional subschemas. The latter include MAGIC Admin for database administration, MAGIC Sequence for sequence processing as well as sequence and clone attributes, MAGIC Cluster for the results of EST clustering, MAGIC Polymorphism in support of microsatellite and single-nucleotide-polymorphism discovery, and MAGIC Annotation for electronic annotation by BLAST and BLAT. The MAGIC Microarray portion is a MIAME-compliant database with two components at present. These are MAGIC Array-LIMS, which makes possible remote entry of all information into the database, and MAGIC Array Analysis, which provides data mining and visualization. Because all aspects of interaction with the MAGIC Database are via a web browser, it is ideally suited not only for individual research laboratories but also for core facilities that serve clients at any distance.
Managing Content in a Matter of Minutes
NASA Technical Reports Server (NTRS)
2004-01-01
NASA software created to help scientists expeditiously search and organize their research documents is now aiding compliance personnel, law enforcement investigators, and the general public in their efforts to search, store, manage, and retrieve documents more efficiently. Developed at Ames Research Center, NETMARK software was designed to manipulate vast amounts of unstructured and semi-structured NASA documents. NETMARK is both a relational and object-oriented technology built on an Oracle enterprise-wide database. To ensure easy user access, Ames constructed NETMARK as a Web-enabled platform utilizing the latest in Internet technology. One of the significant benefits of the program was its ability to store and manage mission-critical data.
Rule-based statistical data mining agents for an e-commerce application
NASA Astrophysics Data System (ADS)
Qin, Yi; Zhang, Yan-Qing; King, K. N.; Sunderraman, Rajshekhar
2003-03-01
Intelligent data mining techniques have useful e-Business applications. Because an e-Commerce application is related to multiple domains such as statistical analysis, market competition, price comparison, profit improvement and personal preferences, this paper presents a hybrid knowledge-based e-Commerce system fusing intelligent techniques, statistical data mining, and personal information to enhance QoS (Quality of Service) of e-Commerce. A Web-based e-Commerce application software system, eDVD Web Shopping Center, is successfully implemented using Java servlets and an Oracle8i database server. Simulation results have shown that the hybrid intelligent e-Commerce system is able to make smart decisions for different customers.
NASA Astrophysics Data System (ADS)
Shao, Hongbing
Software testing of scientific software systems often suffers from the test oracle problem, i.e., a lack of test oracles. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that can be used to simulate light scattering by scatterers of various types, and its testing suffers from the test oracle problem. In this thesis work, I established a testing framework for scientific software systems and evaluated it using ADDA as a case study. To test ADDA, I first used the CMMIE code as a pseudo oracle to test ADDA in simulating light scattering by a homogeneous sphere scatterer. Comparable results were obtained between ADDA and the CMMIE code, validating ADDA for use with homogeneous sphere scatterers. Then I used an experimental result for light scattering by a homogeneous sphere to further validate ADDA for sphere scatterers; ADDA produced light scattering simulations comparable to the experimentally measured result. Then I used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and homogeneity or non-homogeneity. ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework, including the use of pseudo oracles, experimental results, and metamorphic testing techniques, to test scientific software systems that suffer from test oracle problems. Each of these techniques is necessary and contributes to the testing of the software under test.
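A metamorphic test in the spirit described above might assert that, for a homogeneous sphere, the simulated scattering is invariant under rotation of the scatterer. Here simulate_scattering is a hypothetical wrapper around an ADDA run, stubbed so the sketch is self-contained.

```python
import numpy as np

# Stand-in for invoking ADDA; a real wrapper would launch the binary
# with the given scatterer parameters and parse its output.
def simulate_scattering(radius, refractive_index, orientation_deg):
    return np.ones(180) * radius * refractive_index.real

# Metamorphic relation: a sphere has no preferred orientation, so the
# scattered intensity must be unchanged when the scatterer is rotated.
def test_sphere_rotation_invariance():
    base = simulate_scattering(0.5, 1.5 + 0.01j, orientation_deg=0.0)
    rotated = simulate_scattering(0.5, 1.5 + 0.01j, orientation_deg=37.0)
    assert np.allclose(base, rotated, rtol=1e-6)

test_sphere_rotation_invariance()
```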
GEOS-5 During ORACLES: Status Update
NASA Technical Reports Server (NTRS)
da Silva, Arlindo; Longo, Karla
2017-01-01
In this talk we summarize the GEOS-5 capabilities to be deployed during the ORACLES 2016 Campaign. We describe model configuration, data products and web services available. We also discuss the measurement and flight requirements for the GEOS-5 Team.
Baran, Michael C; Moseley, Hunter N B; Sahota, Gurmukh; Montelione, Gaetano T
2002-10-01
Modern protein NMR spectroscopy laboratories have a rapidly growing need for an easily queried local archival system of raw experimental NMR datasets. SPINS (Standardized ProteIn Nmr Storage) is an object-oriented relational database that provides facilities for high-volume NMR data archival, organization of analyses, and dissemination of results to the public domain by automatic preparation of the header files required for submission of data to the BioMagResBank (BMRB). The current version of SPINS coordinates the process from data collection to BMRB deposition of raw NMR data by standardizing and integrating the storage and retrieval of these data in a local laboratory file system. Additional facilities include a data mining query tool, graphical database administration tools, and a NMRStar v2.1.1 file generator. SPINS also includes a user-friendly internet-based graphical user interface, which is optionally integrated with Varian VNMR NMR data collection software. This paper provides an overview of the data model underlying the SPINS database system, a description of its implementation in Oracle, and an outline of future plans for the SPINS project.
Fabri-Ruiz, Salomé; Saucède, Thomas; Danis, Bruno; David, Bruno
2017-01-01
This database includes over 7,100 georeferenced occurrence records of sea urchins (Echinodermata: Echinoidea) obtained from samples collected in the Southern Ocean (+180°W/+180°E; -35°/-78°S) during oceanographic cruises conducted over nearly 150 years, from 1872 to 2015. Echinoids are common organisms of Southern Ocean benthic communities. A total of 201 species is recorded, which display contrasting depth ranges and distribution patterns across austral provinces and bioregions. Echinoid species show various ecological traits including different nutrition and reproductive strategies. Information on taxonomy, sampling sites, and sampling sources are also made available. Environmental descriptors that are relevant to echinoid ecology are also made available for the study area (-180°W/+180°E; -45°/-78°S) and for the following decades: 1955–1964, 1965–1974, 1975–1984, 1985–1994 and 1995–2012. They were compiled from different sources and transformed to the same grid cell resolution of 0.1° per pixel. We also provide future projections for environmental descriptors established based on the Bio-Oracle database (Tyberghein et al. 2012). PMID:29134013
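Collapsing occurrence records onto a 0.1°-per-pixel grid, as used for the environmental descriptors, can be sketched with simple floor binning; the published layers may follow a different cell convention.

```python
from collections import Counter

# Map a coordinate to the lower-left corner of its 0.1-degree grid cell.
def grid_cell(lat, lon, res=0.1):
    return (round(lat // res * res, 1), round(lon // res * res, 1))

# Two nearby records fall into the same cell; the third is elsewhere.
records = [(-62.34, 110.27), (-62.31, 110.22), (-70.05, -45.90)]
print(Counter(grid_cell(lat, lon) for lat, lon in records))
```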
Ahmadi, Farshid Farnood; Ebadi, Hamid
2009-01-01
3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economical data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric systems and SDBMSs can save the time and cost of producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach; this management approach is one of the main obstacles in GISs to using the map products of photogrammetric workstations. By means of these integrated systems, it also becomes possible to provide structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, at the time of feature digitizing. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated, and different levels of integration are described. Finally, the design, implementation and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.
FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research.
Mader, Malte; Simon, Ronald; Kurtz, Stefan
2014-03-31
A comprehensive view on all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization for such data. High quality image export enables life scientists to easily communicate their results. A comprehensive data administration interface allows users to keep track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that had not been published before. The interactive nature of FISH Oracle 2 and the possibility to store, select and visualize large amounts of downstream processed data support life scientists in generating hypotheses. The export of high quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software are available at http://www.zbh.uni-hamburg.de/fishoracle.
Demonstration of quantum advantage in machine learning
NASA Astrophysics Data System (ADS)
Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.
2017-04-01
The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.
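The classical side of such an experiment can be sketched as follows: a noisy parity oracle over k bits and a learner that majority-votes repeated queries of the basis strings, with the query count growing as the noise rate rises.

```python
import random

# Oracle: returns the parity of the hidden subset of bits in x,
# flipped with probability 'noise'.
def make_oracle(hidden, noise):
    def oracle(x):
        parity = bin(hidden & x).count("1") % 2
        return parity ^ (random.random() < noise)
    return oracle

# Classical learner: query each basis string repeatedly and take a
# majority vote to decide whether that bit belongs to the hidden subset.
def learn(oracle, k, repeats):
    hidden = 0
    for bit in range(k):
        votes = sum(oracle(1 << bit) for _ in range(repeats))
        if votes * 2 > repeats:
            hidden |= 1 << bit
    return hidden, k * repeats           # recovered string, queries used

random.seed(1)
oracle = make_oracle(hidden=0b10110, noise=0.1)
print(learn(oracle, k=5, repeats=51))
```

Raising the noise rate forces a larger repeats value for the same success probability, which is the classical query-count growth the quantum learner sidesteps.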
BioWarehouse: a bioinformatics database warehouse toolkit
Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David WJ; Tenenbaum, Jessica D; Karp, Peter D
2006-01-01
Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the database integration problem for bioinformatics. PMID:16556315
BioWarehouse: a bioinformatics database warehouse toolkit.
Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D
2006-03-23
This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
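The kind of cross-database query described, enzyme activities with no linked sequence, can be sketched against a toy schema. SQLite stands in for MySQL/Oracle here, and the table and column names are illustrative, not the real BioWarehouse schema.

```python
import sqlite3  # stand-in engine; BioWarehouse itself targets MySQL/Oracle

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE enzyme (ec TEXT PRIMARY KEY);
CREATE TABLE protein (id INTEGER PRIMARY KEY, ec TEXT, seq TEXT);
INSERT INTO enzyme VALUES ('1.1.1.1'), ('9.9.9.9');
INSERT INTO protein VALUES (1, '1.1.1.1', 'MKT...');
""")

# Enzyme activities with no sequenced protein, cf. the 36% figure above.
orphans = conn.execute("""
SELECT e.ec FROM enzyme e
LEFT JOIN protein p ON p.ec = e.ec AND p.seq IS NOT NULL
WHERE p.id IS NULL
""").fetchall()
print(orphans)
```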
PROTICdb: a web-based application to store, track, query, and compare plant proteome data.
Ferry-Dumazet, Hélène; Houel, Gwenn; Montalent, Pierre; Moreau, Luc; Langella, Olivier; Negroni, Luc; Vincent, Delphine; Lalanne, Céline; de Daruvar, Antoine; Plomion, Christophe; Zivy, Michel; Joets, Johann
2005-05-01
PROTICdb is a web-based application, mainly designed to store and analyze plant proteome data obtained by two-dimensional polyacrylamide gel electrophoresis (2-D PAGE) and mass spectrometry (MS). The purposes of PROTICdb are (i) to store, track, and query information related to proteomic experiments, i.e., from tissue sampling to protein identification and quantitative measurements, and (ii) to integrate information from the user's own expertise and other sources into a knowledge base, used to support data interpretation (e.g., for the determination of allelic variants or products of post-translational modifications). Data insertion into the relational database of PROTICdb is achieved either by uploading outputs of image analysis and MS identification software, or by filling web forms. 2-D PAGE annotated maps can be displayed, queried, and compared through a graphical interface. Links to external databases are also available. Quantitative data can be easily exported in a tabulated format for statistical analyses. PROTICdb is based on the Oracle or the PostgreSQL Database Management System and is freely available upon request at the following URL: http://moulon.inra.fr/bioinfo/PROTICdb.
NASA Technical Reports Server (NTRS)
Mills, J.; Shope, S.
1998-01-01
Implementing Oracle Human Resources Management System (HRMS) using a customization and integration approach provides the organization with an enormous opportunity to create a win-win situation for customers, the HR department, and the enterprise.
INTERFACING SAS TO ORACLE IN THE UNIX ENVIRONMENT
SAS is an EPA standard data and statistical analysis software package, while ORACLE is EPA's standard database management system software package. ORACLE has the advantage over SAS in data retrieval and storage capabilities but has limited data and statistical analysis capability...
Advanced technologies for scalable ATLAS conditions database access on the grid
NASA Astrophysics Data System (ADS)
Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.
2010-04-01
During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflow, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions DB data access is limited by the disk I/O throughput. An unacceptable side effect of disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions DB data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library sends to the database server a pilot query first.
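The pilot-query idea can be sketched as a client-side guard: probe the server with a cheap query and back off while it looks overloaded. Here run_query is a hypothetical executor and the thresholds are illustrative.

```python
import time

def guarded_fetch(run_query, payload_sql, pilot_sql="SELECT 1 FROM DUAL",
                  threshold_s=0.5, max_wait_s=600):
    """Send a cheap pilot query first; run the real query only when the
    server answers promptly, otherwise back off and retry."""
    waited = 0.0
    while waited < max_wait_s:
        t0 = time.monotonic()
        run_query(pilot_sql)                 # cheap probe of server load
        if time.monotonic() - t0 < threshold_s:
            return run_query(payload_sql)    # server healthy: real query
        backoff = min(30.0, 2.0 + waited)    # grow the delay under load
        time.sleep(backoff)
        waited += backoff
    raise RuntimeError("conditions DB still overloaded, giving up")
```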
Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context
Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan
2012-01-01
When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720
Demonstration of quantum superiority in learning parity with noise with superconducting qubits
NASA Astrophysics Data System (ADS)
Ristè, Diego; da Silva, Marcus; Ryan, Colm; Cross, Andrew; Smolin, John; Gambetta, Jay; Chow, Jerry; Johnson, Blake
A problem in machine learning is to identify the function programmed in an unknown device, or oracle, having only access to its output. In particular, a parity function computes the parity of a subset of a bit register. We implement an oracle executing parity functions in a five-qubit superconducting processor and compare the performance of a classical and a quantum learner. The classical learner reads the output of multiple oracle calls and uses the results to infer the hidden function. In addition to querying the oracle, the quantum learner can apply coherent rotations on the output register before the readout. We show that, given a target success probability, the quantum approach outperforms the classical one in the number of queries needed. Moreover, this gap increases with readout noise and with the size of the qubit register. This result shows that quantum advantage can already emerge in current systems with a few, noisy qubits. We acknowledge support from IARPA under Contract W911NF-10-1-0324.
FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research
2014-01-01
Background A comprehensive view on all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. Results We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization for such data. High quality image export enables life scientists to easily communicate their results. A comprehensive data administration interface allows users to keep track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that had not been published before. Conclusions The interactive nature of FISH Oracle 2 and the possibility to store, select and visualize large amounts of downstream processed data support life scientists in generating hypotheses. The export of high quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software are available at http://www.zbh.uni-hamburg.de/fishoracle. PMID:24684958
Patel, Ashokkumar A; Gilbertson, John R; Parwani, Anil V; Dhir, Rajiv; Datta, Milton W; Gupta, Rajnish; Berman, Jules J; Melamed, Jonathan; Kajdacsy-Balla, Andre; Orenstein, Jan; Becich, Michael J
2006-05-05
Advances in molecular biology and growing requirements from biomarker validation studies have generated a need for tissue banks to provide quality-controlled tissue samples with standardized clinical annotation. The NCI Cooperative Prostate Cancer Tissue Resource (CPCTR) is a distributed tissue bank that comprises four academic centers and provides thousands of clinically annotated prostate cancer specimens to researchers. Here we describe the CPCTR information management system architecture, common data element (CDE) development, query interfaces, data curation, and quality control. Data managers review the medical records to collect and continuously update information for the 145 clinical, pathological and inventorial CDEs that the Resource maintains for each case. An Access-based data entry tool provides de-identification and a standard communication mechanism between each group and a central CPCTR database. Standardized automated quality control audits have been implemented. Centrally, an Oracle database has web interfaces allowing multiple user types, including the general public, to mine de-identified information from all of the sites with three levels of specificity and granularity, as well as to request tissues through a formal letter of intent. Since July 2003, CPCTR has offered over 6,000 cases (38,000 blocks) of highly characterized prostate cancer biospecimens, including several tissue microarrays (TMA). The Resource developed a website with interfaces for the general public as well as researchers and internal members. These user groups have utilized the web tools for public query of summary data on the available cases, to prepare requests, and to receive tissues. As of December 2005, the Resource had received over 130 tissue requests, of which 45 had been reviewed, approved and filled. Additionally, the Resource implemented the TMA Data Exchange Specification in its TMA program and created a computer program for calculating PSA recurrence. Building a biorepository infrastructure that meets today's research needs involves time and input from many individuals from diverse disciplines. The CPCTR can provide large volumes of carefully annotated prostate tissue for research initiatives such as Specialized Programs of Research Excellence (SPOREs) and for biomarker validation studies, and its experience can help the development of collaborative, large-scale, virtual tissue banks in other organ systems.
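As an illustration of the PSA-recurrence calculation mentioned above, here is a sketch using the common post-prostatectomy rule of two consecutive PSA values at or above 0.2 ng/mL; the CPCTR program's exact rule is not specified in the abstract.

```python
# Flag biochemical recurrence from a chronological series of PSA values
# (ng/mL), using the widely cited two-consecutive-values rule. The real
# CPCTR program may implement a different definition.
def psa_recurrence(psa_series, threshold=0.2):
    for a, b in zip(psa_series, psa_series[1:]):
        if a >= threshold and b >= threshold:
            return True
    return False

print(psa_recurrence([0.05, 0.1, 0.3, 0.4]))  # True
```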
Digital conversion of INEL archeological data using ARC/INFO and Oracle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, R.D.; Brizzee, J.; White, L.
1993-11-04
This report documents the procedures used to convert archaeological data for the INEL to digital format, lists the equipment used, and explains the verification and validation steps taken to check data entry. It also details the production of an engineered interface between ARC/INFO and Oracle.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-18
...-backed paperboard and to laminate plastic film (the laminating activity is not "production" activity...--Piedmont Triad Area, North Carolina; Notification of Proposed Production Activity; Oracle Flexible..., grantee of FTZ 230, submitted a notification of proposed production activity on behalf of Oracle Flexible...
Test oracle automation for V&V of an autonomous spacecraft's planner
NASA Technical Reports Server (NTRS)
Feather, M. S.; Smith, B.
2001-01-01
We built automation to assist the software testing efforts associated with the Remote Agent experiment. In particular, our focus was upon introducing test oracles into the testing of the planning and scheduling system component. This summary is intended to provide an overview of the work.
The MSG Central Facility - A Mission Control System for Windows NT
NASA Astrophysics Data System (ADS)
Thompson, R.
The MSG Central Facility, being developed by Science Systems for EUMETSAT, represents the first of a new generation of satellite mission control systems based on the Windows NT operating system. The system makes use of a range of new technologies to provide an integrated environment for the planning, scheduling, control and monitoring of the entire Meteosat Second Generation mission. It supports packetised TM/TC and uses Science Systems' Space UNiT product to provide automated operations support at both Schedule (Timeline) and Procedure levels. Flexible access to historical data is provided through an operations archive based on ORACLE Enterprise Server, hosted on a large RAID array and an off-line tape jukebox. Event-driven real-time data distribution is based on the CORBA standard. Operations preparation and configuration control tools form a fully integrated element of the system.
How to Take HRMS Process Management to the Next Level with Workflow Business Event System
NASA Technical Reports Server (NTRS)
Rajeshuni, Sarala; Yagubian, Aram; Kunamaneni, Krishna
2006-01-01
Oracle Workflow with the Business Event System offers a complete process management solution for enterprises to manage business processes cost-effectively. Using Workflow event messaging, event subscriptions, AQ Servlet and advanced queuing technologies, this presentation will demonstrate the step-by-step design and implementation of system solutions to integrate two dissimilar systems and establish communication remotely. As a case study, the presentation walks you through the process of propagating organization name changes that originated in the HRMS module to other applications without changing application code. The solution can be applied to your particular business cases for streamlining or modifying business processes across Oracle and non-Oracle applications.
Introducing ORACLE: Library Processing in a Multi-User Environment.
ERIC Educational Resources Information Center
Queensland Library Board, Brisbane (Australia).
Currently being developed by the State Library of Queensland, Australia, ORACLE (On-Line Retrieval of Acquisitions, Cataloguing, and Circulation Details for Library Enquiries) is a computerized library system designed to provide rapid processing of library materials in a multi-user environment. It is based on the Australian MARC format and fully…
An ORACLE Chronicle: A Decade of Classroom Research.
ERIC Educational Resources Information Center
Galton, Maurice
1987-01-01
This article describes Project ORACLE which was research carried out at the University of Leicester begun in 1975 concerning (1) a longitudinal process-product study of teaching and learning in elementary schools; and (2) a study which concentrated on collaborative group work in the same classrooms. Results and implications are discussed.…
AirMSPI Level 2 V001 New Data for NASA's ORACLES Campaign
Atmospheric Science Data Center
2018-05-07
Friday, February 2, 2018. The NASA Langley Atmospheric Sciences Data Center (ASDC) and Jet Propulsion ... ) flight campaign. AirMSPI flies in the nose of NASA's high-altitude ER-2 aircraft. The instrument was built by JPL and the ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-26
... DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [B-31-2013] Foreign-Trade Zone 230--Piedmont Triad Area, North Carolina, Authorization of Production Activity, Oracle Flexible Packaging, Inc., (Foil... (FTZ) Board on behalf of Oracle Flexible Packaging, Inc., within Site 28, in Winston-Salem, North...
Development of the updated system of city underground pipelines based on Visual Studio
NASA Astrophysics Data System (ADS)
Zhang, Jianxiong; Zhu, Yun; Li, Xiangdong
2009-10-01
Our city operates an integrated pipeline network management system built on ArcGIS Engine 9.1 as the underlying development platform, with Oracle9i as the base database for data storage. In this system, ArcGIS SDE 9.1 is used as the spatial data engine, and the system is a comprehensive management application developed with Visual Studio visualization development tools. Because the system's pipeline update function suffered from slow updates and occasional data loss, and to ensure that underground pipeline data can be updated conveniently, frequently, and in real time while preserving the currency and integrity of the data, we added a new update module of our own design to the system. The module provides powerful data update functions, including data import and export and rapid bulk updates. The new module is developed with Visual Studio visualization development tools and uses Access as the base database for data storage. Graphics can be edited in AutoCAD, and the database is updated through a link between the graphics and the system. Practice shows that the update module has good compatibility with the original system and that database updates are reliable and efficient.
Java RMI Software Technology for the Payload Planning System of the International Space Station
NASA Technical Reports Server (NTRS)
Bryant, Barrett R.
1999-01-01
The Payload Planning System is for experiment planning on the International Space Station. The planning process has a number of different aspects which need to be stored in a database which is then used to generate reports on the planning process in a variety of formats. This process is currently structured as a 3-tier client/server software architecture comprised of a Java applet at the front end, a Java server in the middle, and an Oracle database in the third tier. This system presently uses CGI, the Common Gateway Interface, to communicate between the user-interface and server tiers and Active Data Objects (ADO) to communicate between the server and database tiers. This project investigated other methods and tools for performing the communications between the three tiers of the current system so that both the system performance and software development time could be improved. We specifically found that for the hardware and software platforms that PPS is required to run on, the best solution is to use Java Remote Method Invocation (RMI) for communication between the client and server and SQLJ (Structured Query Language for Java) for server interaction with the database. Prototype implementations showed that RMI combined with SQLJ significantly improved performance and also greatly facilitated construction of the communication software.
REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY
Fan, Jianqing; Jiang, Jiancheng
2011-01-01
High throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We investigate under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to significant reduction of the “irrepresentable condition” needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171
REGULARIZATION FOR COX'S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY.
Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng
2011-01-01
High throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We investigate under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples.
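For reference, the estimator studied in these two papers takes the standard penalized partial-likelihood form (notation follows the usual Cox-model literature; the papers' exact notation may differ):

```latex
\hat{\beta} = \arg\max_{\beta \in \mathbb{R}^{p}}
  \Big\{ \ell_n(\beta) - n \sum_{j=1}^{p} p_{\lambda}(|\beta_j|) \Big\},
\qquad
\ell_n(\beta) = \sum_{i=1}^{n} \delta_i \Big[ x_i^{\top}\beta
  - \log \sum_{k \in R(t_i)} \exp\big(x_k^{\top}\beta\big) \Big]
```

Here \ell_n is the Cox log partial likelihood, \delta_i the event indicator, R(t_i) the risk set at time t_i, and p_\lambda a folded-concave penalty such as SCAD; the LASSO corresponds to p_\lambda(t) = \lambda t.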
The CMS dataset bookkeeping service
NASA Astrophysics Data System (ADS)
Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.
2008-07-01
The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.
Design of Instant Messaging System of Multi-language E-commerce Platform
NASA Astrophysics Data System (ADS)
Yang, Heng; Chen, Xinyi; Li, Jiajia; Cao, Yaru
2017-09-01
This paper studies the message subsystem of an instant messaging system for a multi-language e-commerce platform, with the goal of designing instant messaging for a multi-language environment, presenting information with national characteristics, and applying national languages to e-commerce. To provide an attractive, user-friendly interface for the front end of the message system while reducing development cost, the mature jQuery framework is adopted. At the back end, the high-performance Tomcat server processes user requests, a MySQL database persistently stores user data, and an Oracle database serves as the message buffer for system optimization. In addition, AJAX is used so that the client actively pulls the newest data from the server at specified intervals. In practical use, the system exhibits strong reliability, good extensibility, short response times, high throughput, and high user concurrency.
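The pull model described can be sketched server-side as a minimal polling endpoint. Flask stands in here for the Tomcat back end, and an in-memory list stands in for the MySQL store and Oracle buffer; the route and field names are illustrative.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in message store; the described system persists to MySQL and
# buffers through Oracle.
MESSAGES = [{"id": 1, "lang": "zh", "text": "你好"},
            {"id": 2, "lang": "en", "text": "hello"}]

# The jQuery client calls GET /messages?since=<last seen id> on a timer
# and receives only messages newer than that id.
@app.route("/messages")
def messages():
    since = int(request.args.get("since", 0))
    return jsonify([m for m in MESSAGES if m["id"] > since])

if __name__ == "__main__":
    app.run(port=8080)
```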
Amin, Waqas; Singh, Harpreet; Pople, Andre K.; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V.; Becich, Michael J.
2010-01-01
Context: Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Design: Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. Result: The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide thousands of well annotated biospecimens for the researchers that are searchable through query interfaces available via the Internet. Conclusion: These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems. PMID:20922029
Amin, Waqas; Singh, Harpreet; Pople, Andre K; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V; Becich, Michael J
2010-08-10
Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide thousands of well annotated biospecimens for the researchers that are searchable through query interfaces available via the Internet. These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems.
IAC-1.5 - INTEGRATED ANALYSIS CAPABILITY
NASA Technical Reports Server (NTRS)
Vos, R. G.
1994-01-01
The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and a database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a database, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automating data transfer among analysis programs. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation modules are supplied for building and viewing models. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. 3) System dynamics - A DISCOS interface allows full use of this simulation program for either nonlinear time domain analysis or linear frequency domain analysis. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. 5) Graphics - The graphics packages PLOT and MOSAIC are included in IAC. PLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc., while MOSAIC generates color raster displays of either tabular or array type data. Either DI3000 or PLOT-10 graphics software is required for full graphics capability. IAC is available by license for a period of 10 years to approved licensees. The licensed program product includes one complete set of supporting documentation. Additional copies of the documentation may be purchased separately.
IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The basic central memory requirement is approximately 750KB. IAC includes the executive system, graphics modules, a database, general utilities, and the interfaces to all analysis and controls programs described above. Source code is provided for the control programs ORACLS, SAMSAN, NBOD2, and DISCOS. The following programs are also available from COSMIC a
MATHEMATICS PANEL PROGRESS REPORT FOR PERIOD MARCH 1, 1957 TO AUGUST 31, 1958
DOE Office of Scientific and Technical Information (OSTI.GOV)
Householder, A.S.
1959-03-24
ORACLE operation and programming are summarized, and progress is indicated on various current problems. Work is reviewed on numerical analysis, programming, basic mathematics, biometrics and statistics, ORACLE operations and special codes, and training. Publications and lectures for the report period are listed. (For preceding period see ORNL-2283.) (W.D.M.)
Augmenting Oracle Text with the UMLS for enhanced searching of free-text medical reports.
Ding, Jing; Erdal, Selnur; Dhaval, Rakesh; Kamal, Jyoti
2007-10-11
The intrinsic complexity of free-text medical reports imposes great challenges for information retrieval systems. We have developed a prototype search engine for retrieving clinical reports that leverages the powerful indexing and querying capabilities of Oracle Text, and the rich biomedical domain knowledge and semantic structures that are captured in the UMLS Metathesaurus.
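A minimal sketch of how such UMLS-driven synonym expansion can feed an Oracle Text CONTAINS query (an illustration only, not the authors' implementation: the table, columns, connect string and synonym map are all hypothetical):

import cx_Oracle  # any Oracle client with bind-variable support would do

# Toy stand-in for a UMLS Metathesaurus synonym lookup.
UMLS_SYNONYMS = {
    "heart attack": ["myocardial infarction", "MI"],
}

def expand_query(term):
    """OR together a term and its UMLS synonyms; braces make Oracle Text
    treat each variant as a literal phrase."""
    variants = [term] + UMLS_SYNONYMS.get(term, [])
    return " OR ".join("{%s}" % v for v in variants)

conn = cx_Oracle.connect("user/password@host/service")  # hypothetical DSN
cur = conn.cursor()
cur.execute(
    """SELECT report_id, SCORE(1)
         FROM clinical_reports
        WHERE CONTAINS(report_text, :q, 1) > 0
        ORDER BY SCORE(1) DESC""",
    q=expand_query("heart attack"),
)
for report_id, score in cur:
    print(report_id, score)

Here report_text would carry an Oracle Text CONTEXT index, so relevance ranking comes for free through SCORE(1).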
An eCK-Secure Authenticated Key Exchange Protocol without Random Oracles
NASA Astrophysics Data System (ADS)
Moriyama, Daisuke; Okamoto, Tatsuaki
This paper presents a (PKI-based) two-pass authenticated key exchange (AKE) protocol that is secure in the extended Canetti-Krawczyk (eCK) security model. The security of the proposed protocol is proven without random oracles (under three assumptions), and it relies on no implementation tricks such as that of LaMacchia, Lauter and Mityagin (the so-called NAXOS trick). Since an AKE protocol that is eCK-secure only under a NAXOS-like implementation trick is no longer eCK-secure if realistic information leakage occurs through side-channel attacks, it has been an important open problem to realize an eCK-secure AKE protocol without using the NAXOS trick (and without random oracles).
Continuous-time quantum search on balanced trees
NASA Astrophysics Data System (ADS)
Philipp, Pascal; Tarrataca, Luís; Boettcher, Stefan
2016-03-01
We examine the effect of network heterogeneity on the performance of quantum search algorithms. To this end, we study quantum search on a tree for the oracle Hamiltonian formulation employed by continuous-time quantum walks. We use analytical and numerical arguments to show that the exponent of the asymptotic running time ∼ N^β changes uniformly from β = 0.5 to β = 1 as the searched-for site is moved from the root of the tree towards the leaves. These results imply that the time complexity of the quantum search algorithm on a balanced tree is closely correlated with certain path-based centrality measures of the searched-for site.
Fuel conditioning facility zone-to-zone transfer administrative controls.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, C. L.
2000-06-21
The administrative controls associated with transferring containers from one criticality hazard control zone to another in the Argonne National Laboratory (ANL) Fuel Conditioning Facility (FCF) are described. FCF, located at the ANL-West site near Idaho Falls, Idaho, is used to remotely process spent sodium bonded metallic fuel for disposition. The process involves nearly forty widely varying material forms and types, over fifty specific use container types, and over thirty distinct zones where work activities occur. During 1999, over five thousand transfers from one zone to another were conducted. Limits are placed on mass, material form and type, and container types for each zone. All material and containers are tracked using the Mass Tracking System (MTG). The MTG uses an Oracle database and numerous applications to manage the database. The database stores information specific to the process, including material composition and mass, container identification number and mass, transfer history, and the operators involved in each transfer. The process is controlled using written procedures which specify the zone, containers, and material involved in a task. Transferring a container from one zone to another is called a zone-to-zone transfer (ZZT). ZZTs consist of four distinct phases: select, request, identify, and completion.
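To make the flavor of such administrative limit checks concrete, here is a minimal sketch of a pre-transfer check (invented zone names, limits and container fields; not the actual FCF/MTG logic):

# Illustrative per-zone criticality limits; real limits come from the
# facility's safety basis, not from a toy table like this one.
ZONE_LIMITS = {
    "zone_12": {"max_fissile_g": 4000.0, "allowed_forms": {"metal", "oxide"}},
    "zone_30": {"max_fissile_g": 2500.0, "allowed_forms": {"metal"}},
}

def approve_zzt(container, dest_zone, dest_inventory_g):
    """Return (ok, reason) for a proposed zone-to-zone transfer (ZZT)."""
    limits = ZONE_LIMITS[dest_zone]
    if container["form"] not in limits["allowed_forms"]:
        return False, "material form not permitted in destination zone"
    if dest_inventory_g + container["fissile_g"] > limits["max_fissile_g"]:
        return False, "destination zone fissile mass limit would be exceeded"
    return True, "transfer permitted"

container = {"id": "C-0417", "form": "metal", "fissile_g": 180.0}
print(approve_zzt(container, "zone_30", dest_inventory_g=2400.0))
# -> (False, 'destination zone fissile mass limit would be exceeded')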
The PROTICdb database for 2-DE proteomics.
Langella, Olivier; Zivy, Michel; Joets, Johann
2007-01-01
PROTICdb is a web-based database mainly designed to store and analyze plant proteome data obtained by 2D polyacrylamide gel electrophoresis (2D PAGE) and mass spectrometry (MS). The goals of PROTICdb are (1) to store, track, and query information related to proteomic experiments, i.e., from tissue sampling to protein identification and quantitative measurements; and (2) to integrate information from the user's own expertise and other sources into a knowledge base, used to support data interpretation (e.g., for the determination of allelic variants or products of posttranslational modifications). Data insertion into the relational database of PROTICdb is achieved either by uploading outputs from Mélanie, PDQuest, IM2d, ImageMaster(tm) 2D Platinum v5.0, Progenesis, Sequest, MS-Fit, and Mascot software, or by filling in web forms (experimental design and methods). 2D PAGE-annotated maps can be displayed, queried, and compared through the GelBrowser. Quantitative data can be easily exported in a tabulated format for statistical analyses with any third-party software. PROTICdb is based on the Oracle or the PostgreSQL DataBase Management System (DBMS) and is freely available upon request at http://cms.moulon.inra.fr/content/view/14/44/.
Children's early reading vocabulary: description and word frequency lists.
Stuart, Morag; Dixon, Maureen; Masterson, Jackie; Gray, Bob
2003-12-01
When constructing stimuli for experimental investigations of cognitive processes in early reading development, researchers have to rely on adult or American children's word frequency counts, as no such counts exist for English children. The present paper introduces a database of children's early reading vocabulary, for use by researchers and teachers. Texts from 685 books from reading schemes and story books read by 5-7 year-old children were used in the construction of the database. All words from the 685 books were typed or scanned into an Oracle database. The resulting up-to-date word frequency list of early print exposure in the UK is available in two forms from a website address given in this paper. This allows access to one list of the words ordered alphabetically and one list of the words ordered by frequency. We also briefly address some fundamental issues underlying early reading vocabulary (e.g., that it is heavily skewed towards low frequencies). Other characteristics of the vocabulary are then discussed. We hope the word frequency lists will be of use to researchers seeking to control word frequency, and to teachers interested in the vocabulary to which young children are exposed in their reading material.
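The construction described (typing or scanning book texts into a database, then deriving one alphabetic and one frequency-ordered list) amounts to token counting; a minimal sketch, with hypothetical file names and a naive tokenizer standing in for the project's editorial decisions:

import re
from collections import Counter

counts = Counter()
for path in ["book_001.txt", "book_002.txt"]:  # 685 books in the real corpus
    with open(path, encoding="utf-8") as fh:
        counts.update(re.findall(r"[a-z']+", fh.read().lower()))

alphabetical = sorted(counts.items())   # the alphabetically ordered list
by_frequency = counts.most_common()     # the frequency ordered list
print(by_frequency[:10])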
User's Manual for the Naval Interactive Data Analysis System-Climatologies (NIDAS-C), Version 2.0
NASA Technical Reports Server (NTRS)
Abbott, Clifton
1996-01-01
This technical note provides the user's manual for the NIDAS-C system developed for the naval oceanographic office. NIDAS-C operates using numerous oceanographic data categories stored in an installed version of the Naval Environmental Operational Nowcast System (NEONS), a relational database management system (rdbms) which employs the ORACLE proprietary rdbms engine. Data management, configuration, and control functions for the supporting rdbms are performed externally. NIDAS-C stores and retrieves data to/from the rdbms but exercises no direct internal control over the rdbms or its configuration. Data is also ingested into the rdbms, for use by NIDAS-C, by external data acquisition processes. The data categories employed by NIDAS-C are as follows: Bathymetry - ocean depth at
Ozone Research with Advanced Cooperative Lidar Experiment (ORACLE) Implementation Study
NASA Technical Reports Server (NTRS)
Stadler, John H.; Browell, Edward V.; Ismail, Syed; Dudelzak, Alexander E.; Ball, Donald J.
1998-01-01
New technological advances have made possible new active remote sensing capabilities from space. Utilizing these technologies, the Ozone Research with Advanced Cooperative Lidar Experiment (ORACLE) will provide high spatial resolution measurements of ozone, clouds and aerosols in the stratosphere and lower troposphere. Simultaneous measurements of ozone, clouds and aerosols will assist in the understanding of global change, atmospheric chemistry and meteorology.
ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL
Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui
2013-01-01
We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can be also proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities. PMID:24086091
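For reference, the estimator under study can be written in a standard form (our notation, shown with time-fixed covariates for brevity, whereas the paper allows time-dependent x_i(t)): the lasso-penalized negative log partial likelihood is minimized,

\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \Big\{ -\frac{1}{n} \sum_{i=1}^{n} \delta_i \Big[ x_i^{\top}\beta - \log \sum_{j \in R(t_i)} e^{x_j^{\top}\beta} \Big] + \lambda \lVert \beta \rVert_1 \Big\},

where \delta_i is the event indicator, R(t_i) is the risk set at time t_i, and \lambda > 0 is the penalty level; the oracle inequalities bound the estimation error of \hat{\beta} in terms of the compatibility and cone invertibility factors of the Hessian.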
The Delphic oracle: a multidisciplinary defense of the gaseous vent theory.
Spiller, Henry A; Hale, John R; De Boer, Jelle Z
2002-01-01
Ancient historical references consistently describe an intoxicating gas, produced by a cavern in the ground, as the source of the power at the oracle of Delphi. These ancient writings are supported by a series of associated geological findings. Chemical analysis of the spring waters and travertine deposits at the site show these gases to be the light hydrocarbon gases methane, ethane, and ethylene. The effects of inhaling ethylene, a major anesthetic gas in the mid-20th century, are similar to those described in the ancient writings. We believe the probable cause of the trancelike state of the Priestess (the Pythia) at the oracle of Delphi during her mantic sessions was produced by inhaling ethylene gas or a mixture of ethylene and ethane from a naturally occurring vent of geological origin.
Design and Implementation of the CEBAF Element Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theodore Larrieu, Christopher Slominski, Michele Joyce
2011-10-01
With inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access to not only present, but also future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on-the-fly without changing table structure. When used in conjunction with the Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with the exact same tools as they use for querying the present configuration. Users can also check-out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from original C++ into native libraries for script languages such as perl, php, and TCL, making access to the CED easy and ubiquitous. Notice: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
NASA Astrophysics Data System (ADS)
Cheng, T.; Zhou, X.; Jia, Y.; Yang, G.; Bai, J.
2018-04-01
In the project of China's First National Geographic Conditions Census, millions of sample data records were collected across the country for interpreting land cover from remote sensing images; the number of data files exceeds 12,000,000 and has continued to grow in the follow-on project of National Geographic Conditions Monitoring. A database such as Oracle is an effective way to store data at this scale, but a well-designed method is even more important for managing and applying the sample data. This paper studies a database construction method based on a relational database combined with a distributed file system, in which the vector data and the file data are stored in different physical locations; the key issues and their solutions are discussed. On this basis, it studies how the sample data can be applied and analyzes several use cases, laying the foundation for their application. In particular, sample data located in Shaanxi province are selected to verify the method. Taking as an example the 10 first-level classes defined in the land cover classification system, the spatial distribution and density characteristics of all kinds of sample data are analyzed. The results verify that the database construction method based on a relational database with a distributed file system is useful and applicable for searching, analyzing and applying the sample data. Furthermore, the sample data collected in the project of China's First National Geographic Conditions Census could be useful for Earth observation and land cover quality assessment.
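A minimal sketch of the storage split the paper describes, with attribute records in a relational database and the bulky sample files in a distributed file system linked by a path column (all names are illustrative, and sqlite3 stands in for Oracle):

import sqlite3  # stand-in for Oracle; the SQL here is deliberately generic

db = sqlite3.connect("samples.db")
db.execute("""CREATE TABLE IF NOT EXISTS sample (
    sample_id   TEXT PRIMARY KEY,
    province    TEXT,
    class_code  INTEGER,   -- one of the 10 first-level land cover classes
    lon REAL, lat REAL,
    dfs_path    TEXT       -- file location inside the distributed file system
)""")

def register_sample(meta, payload, dfs_root="/dfs/census_samples"):
    """Write the sample file to the mounted distributed FS, index it in the DB."""
    path = f"{dfs_root}/{meta['sample_id']}.jpg"
    with open(path, "wb") as fh:   # in production, an HDFS or POSIX gateway
        fh.write(payload)
    db.execute("INSERT OR REPLACE INTO sample VALUES (?,?,?,?,?,?)",
               (meta["sample_id"], meta["province"], meta["class_code"],
                meta["lon"], meta["lat"], path))
    db.commit()

Queries for spatial distribution or per-class density then run against the relational side only, touching the file system just when a sample file itself is needed.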
NASA Technical Reports Server (NTRS)
Redemann, Jens; Wood, R.; Zuidema, P.
2018-01-01
Seasonal biomass burning (BB) in Southern Africa during the Southern hemisphere spring produces almost a third of the Earth's BB aerosol particles. These particles are lofted into the mid-troposphere and transported westward over the South-East (SE) Atlantic, where they interact with one of the three semi-permanent subtropical stratocumulus (Sc) cloud decks in the world. These interactions include adjustments to aerosol-induced solar heating and microphysical effects. The representation of these interactions in climate models remains highly uncertain, because of the scarcity of observational constraints on both the aerosol and cloud properties and the governing physical processes. The first deployment of the NASA P-3 and ER-2 aircraft in the ORACLES (ObseRvations of Aerosols Above Clouds and Their IntEractionS) project in August/September of 2016 has started to fill this observational gap by providing an unprecedented look at the SE Atlantic cloud-aerosol system. We provide an overview of the first deployment, highlighting aerosol absorptive and cloud-nucleating properties, their vertical distribution relative to clouds, the locations and degree of aerosol mixing into clouds, cloud changes in response to such mixing, and cloud top stability relationships to the aerosol. We also expect to describe preliminary results of the second ORACLES deployment from Sao Tome and Principe in August 2017. We will make an initial assessment of the differences and similarities of the BB plume and cloud properties as observed from a deployment site near the plume's northern edge. We will conclude with an outlook for the third ORACLES deployment in October 2018.
Eberhardt, Marian V; Kobira, Kanta; Keck, Anna-Sigrid; Juvik, John A; Jeffery, Elizabeth H
2005-09-21
Chemical measures of antioxidant activity within the plant, such as the oxygen radical absorbance capacity (ORAC) assay, have been reported for many plant-based foods. However, the extent to which chemical measures relate to cellular measures of oxidative stress is unclear. The natural variation in the phytochemical content of 22 broccoli genotypes was used to determine correlations among chemical composition (carotenoids, tocopherols and polyphenolics), chemical antioxidant activity (ORAC), and measures of cellular antioxidation [prevention of DNA oxidative damage and of oxidation of the biomarker dichlorofluorescein (DCFH) in HepG2 cells] using hydrophilic and lipophilic extracts of broccoli. For lipophilic extracts, ORAC (ORAC-L) correlated with inhibition of cellular oxidation of DCFH (DCFH-L, r = 0.596, p = 0.006). Also, DNA damage in the presence of the lipophilic extract was negatively correlated with both chemical and cellular measures of antioxidant activity as measured by ORAC-L (r = -0.705, p = 0.015) and DCFH-L (r = -0.671, p = 0.048), respectively. However, no correlations were observed for hydrophilic (-H) extracts, except between polyphenol content and ORAC (ORAC-H; r = 0.778, p < 0.001). Inhibition of cellular oxidation by hydrophilic extracts (DCFH-H) and ORAC-H were approximately 8- and 4-fold greater than DCFH-L and ORAC-L, respectively. Whether ORAC-H has more biological relevance than ORAC-L because of its magnitude or whether ORAC-L bears more biological relevance because it relates to cellular estimates of antioxidant activity remains to be determined. Chemical estimates of antioxidant capacity within the plant may not accurately reflect the complex nature of the full antioxidant activity of broccoli extracts within cells.
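For readers who want to reproduce this style of analysis, a short sketch of a genotype-level correlation test (the arrays below are synthetic stand-ins, not the paper's 22 genotype measurements):

import numpy as np
from scipy import stats

# Hypothetical per-genotype means of lipophilic ORAC and cellular DCFH-L
# inhibition; replace with real assay values.
orac_l = np.array([12.1, 9.8, 15.3, 11.0, 13.7, 10.2, 14.9, 12.8, 9.5, 13.1,
                   11.8, 14.2, 10.9, 12.4, 15.0, 11.5, 13.3, 10.6, 14.5,
                   12.0, 11.2, 13.9])
dcfh_l = 0.5 * orac_l + np.random.default_rng(1).normal(0, 1.2, orac_l.size)

r, p = stats.pearsonr(orac_l, dcfh_l)
print(f"r = {r:.3f}, p = {p:.4f}")  # the paper reports r = 0.596, p = 0.006 here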
A strategy for quantum algorithm design assisted by machine learning
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung
2014-07-01
We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is being taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable for designing quantum oracle-based algorithms. We chose, as a case study, an oracle decision problem, called a Deutsch-Jozsa problem. We showed by using Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine learning-based method.
Quantum computation with classical light: Implementation of the Deutsch-Jozsa algorithm
NASA Astrophysics Data System (ADS)
Perez-Garcia, Benjamin; McLaren, Melanie; Goyal, Sandeep K.; Hernandez-Aranda, Raul I.; Forbes, Andrew; Konrad, Thomas
2016-05-01
We propose an optical implementation of the Deutsch-Jozsa Algorithm using classical light in a binary decision-tree scheme. Our approach uses a ring cavity and linear optical devices in order to efficiently query the oracle functional values. In addition, we take advantage of the intrinsic Fourier transforming properties of a lens to read out whether the function given by the oracle is balanced or constant.
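Although the implementation above is optical, the logic it reads out is the standard phase-oracle Deutsch-Jozsa test, which is easy to simulate; a minimal numpy sketch (our illustration, not the paper's cavity model):

import numpy as np

def deutsch_jozsa(f_values):
    """f_values: list of 0/1 of length 2**n, promised constant or balanced."""
    N = len(f_values)
    state = np.full(N, 1 / np.sqrt(N))        # H^n applied to |0...0>
    state *= (-1.0) ** np.asarray(f_values)   # phase oracle: (-1)^f(x)
    amp0 = state.sum() / np.sqrt(N)           # amplitude of |0...0> after H^n
    return "constant" if np.isclose(abs(amp0), 1.0) else "balanced"

print(deutsch_jozsa([1, 1, 1, 1]))  # -> constant
print(deutsch_jozsa([0, 1, 1, 0]))  # -> balanced

The all-zero amplitude is (1/N) * sum_x (-1)^f(x), which is +/-1 for constant f and exactly 0 for balanced f, so a single oracle query decides the promise problem.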
Introduction to TETHYS—an interdisciplinary GIS database for studying continental collisions
NASA Astrophysics Data System (ADS)
Khan, S. D.; Flower, M. F. J.; Sultan, M. I.; Sandvol, E.
2006-05-01
The TETHYS GIS database is being developed as a way to integrate relevant geologic, geophysical, geochemical, geochronologic, and remote sensing data bearing on Tethyan continental plate collisions. The project is predicated on a need for actualistic model 'templates' for interpreting the Earth's geologic record. Because of their time-transgressive character, Tethyan collisions offer 'actualistic' models for features such as continental 'escape', collision-induced upper mantle flow magmatism, and marginal basin opening, associated with modern convergent plate margins. Large integrated geochemical and geophysical databases allow for such models to be tested against the geologic record, leading to a better understanding of continental accretion throughout Earth history. The TETHYS database combines digital topographic and geologic information, remote sensing images, sample-based geochemical, geochronologic, and isotopic data (for pre- and post-collision igneous activity), and data for seismic tomography, shear-wave splitting, space geodesy, and information for plate tectonic reconstructions. Here, we report progress on developing such a database and the tools for manipulating and visualizing integrated 2-, 3-, and 4-d data sets with examples of research applications in progress. Based on an Oracle database system, linked with ArcIMS via ArcSDE, the TETHYS project is an evolving resource for researchers, educators, and others interested in studying the role of plate collisions in the process of continental accretion, and will be accessible as a node of the national Geosciences Cyberinfrastructure Network—GEON via the World-Wide Web and ultra-high speed internet2. Interim partial access to the data and metadata is available at: http://geoinfo.geosc.uh.edu/Tethys/ and http://www.esrs.wmich.edu/tethys.htm. We demonstrate the utility of the TETHYS database in building a framework for lithospheric interactions in continental collision and accretion.
The CMS Data Management System
NASA Astrophysics Data System (ADS)
Giffels, M.; Guo, Y.; Kuznetsov, V.; Magini, N.; Wildish, T.
2014-06-01
The data management elements in CMS are scalable, modular, and designed to work together. The main components are PhEDEx, the data transfer and location system; the Dataset Bookkeeping Service (DBS), a metadata catalog; and the Data Aggregation Service (DAS), designed to aggregate views and provide them to users and services. Tens of thousands of samples have been cataloged and petabytes of data have been moved since the run began. The modular system has allowed the optimal use of appropriate underlying technologies. In this contribution we will discuss the use of both Oracle and NoSQL databases to implement the data management elements as well as the individual architectures chosen. We will discuss how the data management system functioned during the first run, and what improvements are planned in preparation for 2015.
The CMS dataset bookkeeping service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afaq, Anzar,; /Fermilab; Dolgert, Andrew
2007-10-01
The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command line, and a Discovery web page interface. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.
Managing Attribute-Value Clinical Trials Data Using the ACT/DB Client-Server Database System
Nadkarni, Prakash M.; Brandt, Cynthia; Frawley, Sandra; Sayward, Frederick G.; Einbinder, Robin; Zelterman, Daniel; Schacter, Lee; Miller, Perry L.
1998-01-01
ACT/DB is a client-server database application for storing clinical trials and outcomes data, which is currently undergoing initial pilot use. It stores most of its data in entity-attribute-value form. Such data are segregated according to data type to allow indexing by value when possible, and binary large object data are managed in the same way as other data. ACT/DB lets an investigator design a study rapidly by defining the parameters (or attributes) that are to be gathered, as well as their logical grouping for purposes of display and data entry. ACT/DB generates customizable data entry. The data can be viewed through several standard reports as well as exported as text to external analysis programs. ACT/DB is designed to encourage reuse of parameters across multiple studies and has facilities for dictionary search and maintenance. It uses a Microsoft Access client running on Windows 95 machines, which communicates with an Oracle server running on a UNIX platform. ACT/DB is being used to manage the data for seven studies in its initial deployment. PMID:9524347
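A minimal sketch of the entity-attribute-value pattern ACT/DB relies on, with values segregated by data type so they can be indexed (an illustrative schema, not ACT/DB's actual tables; sqlite3 stands in for the Oracle server):

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE attribute   (attr_id INTEGER PRIMARY KEY, name TEXT, dtype TEXT);
CREATE TABLE obs_numeric (patient_id INT, attr_id INT, value REAL);
CREATE TABLE obs_text    (patient_id INT, attr_id INT, value TEXT);
CREATE INDEX ix_num ON obs_numeric (attr_id, value);  -- indexing by value
""")

db.execute("INSERT INTO attribute VALUES (1, 'systolic_bp', 'numeric')")
db.execute("INSERT INTO obs_numeric VALUES (1001, 1, 138.0)")

# 'All patients with systolic BP over 130' without a study-specific wide table:
rows = db.execute("""SELECT patient_id, value FROM obs_numeric
                     WHERE attr_id = 1 AND value > 130""").fetchall()
print(rows)  # -> [(1001, 138.0)]

New study parameters become rows in the attribute table rather than new columns, which is what allows parameters to be reused across studies.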
NASA Astrophysics Data System (ADS)
Choi, Sang-Hwa; Kim, Sung Dae; Park, Hyuk Min; Lee, SeungHa
2016-04-01
We established and operate an integrated data system for managing, archiving and sharing marine geology and geophysical data around Korea produced from various research projects and programs at the Korea Institute of Ocean Science & Technology (KIOST). To keep the data system consistent under continuous data updates, we set up standard operating procedures (SOPs) for data archiving, data processing and conversion, data quality control, data uploading and DB maintenance. The system comprises two databases, ARCHIVE DB and GIS DB. ARCHIVE DB stores archived data in the original forms and formats received from data providers, while GIS DB manages all other compiled, processed and derived data and information for data services and GIS applications. The relational database management system Oracle 11g was adopted as the DBMS, and open-source GIS technologies were applied for the GIS services: OpenLayers for the user interface, GeoServer as the application server, and PostGIS on PostgreSQL for the GIS database. For convenient use of geophysical data in SEG-Y format, a viewer program was developed and embedded in the system. Users can search the data through the GIS user interface and save the results as a report.
Quantum Approaches to Logic Circuit Synthesis and Testing
2006-06-01
[List-of-tables residue; recoverable content: table captions compare simulations of Grover's algorithm with n qubits using Octave (Oct), MATLAB (MAT), Blitz++ (B++) and QuIDDPro (QP) under Oracle Designs 1 and 2, and tabulate the number of Grover iterations.] ...to accurately characterize the effects of gate and systematic error in a quantum circuit that generates remotely entangled EPR pairs. An...
The RISC-V Instruction Set Manual Volume 2: Privileged Architecture Version 1.7
2015-05-09
[Acknowledgments residue:] ...DIG07-10227). Additional support came from Par Lab affiliates Nokia, NVIDIA, Oracle, and Samsung. Project Isis: DoE Award DE-SC0003624. ASPIRE: ...STARnet center funded by the Semiconductor Research Corporation. Additional support from ASPIRE industrial sponsor, Intel, and ASPIRE affiliates Google, Huawei, Nokia, NVIDIA, Oracle, and Samsung. The content of this paper does not necessarily reflect the position or the policy of the US...
Quantum random oracle model for quantum digital signature
NASA Astrophysics Data System (ADS)
Shang, Tao; Lei, Qi; Liu, Jianwei
2016-10-01
The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.
Automatic Summarization as a Combinatorial Optimization Problem
NASA Astrophysics Data System (ADS)
Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki
We derived the oracle summary with the highest ROUGE score that can be achieved by integrating sentence extraction with sentence compression from the reference abstract. The analysis of the oracle revealed that summarization systems have to assign an appropriate compression rate to each sentence in the document. In accordance with this observation, this paper proposes a summarization method cast as combinatorial optimization: selecting the set of sentences that maximizes the sum of the sentence scores from a pool consisting of the sentences at various compression rates, subject to a length constraint. The score of a sentence is defined by its compression rate, content words and positional information. The parameters for the compression rates and positional information are optimized by minimizing the loss between the scores of the oracles and those of the candidates. The results obtained from the TSC-2 corpus showed that our method outperformed previous systems with statistical significance.
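The selection step can be phrased as a multiple-choice knapsack, choosing at most one compression variant per sentence under a length budget; a minimal dynamic-programming sketch with toy lengths and scores (a plain score maximization, standing in for the paper's trained scoring function):

def best_summary(sentences, budget):
    """sentences: one list per sentence of (length, score) variants."""
    NEG = float("-inf")
    dp = [NEG] * (budget + 1)  # dp[l]: best total score using length l
    dp[0] = 0.0
    for variants in sentences:
        new = dp[:]            # option: omit this sentence entirely
        for length, score in variants:
            for l in range(length, budget + 1):
                if dp[l - length] != NEG and dp[l - length] + score > new[l]:
                    new[l] = dp[l - length] + score
        dp = new
    return max(dp)

toy = [[(10, 3.0), (6, 2.2)],   # sentence 0: full vs. compressed variant
       [(8, 2.5), (5, 1.4)],
       [(12, 2.8)]]
print(best_summary(toy, budget=20))  # -> 5.5 (sentences 0 and 1, both full)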
A cognitive network for oracle-bone characters related to animals
NASA Astrophysics Data System (ADS)
Dress, Andreas; Grünewald, Stefan; Zeng, Zhenbing
This paper is dedicated to HAO Bailin on the occasion of his eightieth birthday, the great scholar and very good friend who never tired of introducing us to the wonderful and complex intricacies of Chinese culture and history.
2013-02-21
Joint Force Quarterly. Number 20, Autumn/Winter 1998-99
1999-03-01
[Fragment; extraction residue:] ...harm Sparta?” The oracle answered, “Yes, luxury.” To the same question about the Armed Forces, the oracle might answer, “Yes, bureaucracy.” Ever since... Forces today as a volunteer, they must be attracted to the opportunities service can provide. Wearing the uniform has never been about money or... ...vided full consumer price index cost-of-living adjustments like their predecessors. This variance in retirement programs diminishes the value of ca...
Wills, Chris J.; Weldon, Ray J.; Bryant, W.A.
2008-01-01
This report describes development of fault parameters for the 2007 update of the National Seismic Hazard Maps and the Working Group on California Earthquake Probabilities (WGCEP, 2007). These reference parameters are contained within a database intended to be a source of values for use by scientists interested in producing either seismic hazard or deformation models to better understand the current seismic hazards in California. These parameters include descriptions of the geometry and rates of movements of faults throughout the state. These values are intended to provide a starting point for development of more sophisticated deformation models which include known rates of movement on faults as well as geodetic measurements of crustal movement and the rates of movements of the tectonic plates. The values will be used in developing the next generation of the time-independent National Seismic Hazard Maps, and the time-dependent seismic hazard calculations being developed for the WGCEP. Due to the multiple uses of this information, development of these parameters has been coordinated between USGS, CGS and SCEC. SCEC provided the database development and editing tools, in consultation with USGS, Golden. This database has been implemented in Oracle and supports electronic access (e.g., for on-the-fly access). A GUI-based application has also been developed to aid in populating the database. Both the continually updated 'living' version of this database, as well as any locked-down official releases (e.g., used in a published model for calculating earthquake probabilities or seismic shaking hazards) are part of the USGS Quaternary Fault and Fold Database http://earthquake.usgs.gov/regional/qfaults/ . CGS has been primarily responsible for updating and editing of the fault parameters, with extensive input from USGS and SCEC scientists.
Globe Teachers Guide and Photographic Data on the Web
NASA Technical Reports Server (NTRS)
Kowal, Dan
2004-01-01
The task of managing the GLOBE Online Teacher's Guide during this time period focused on transforming the technology behind the delivery system of this document. The web application was transformed from a flat-file retrieval system to a dynamic database access approach. The new methodology utilizes Java Server Pages (JSP) on the front-end and an Oracle relational database on the backend. This new approach allows users of the web site, mainly teachers, to access content efficiently by grade level and/or by investigation or educational concept area. Moreover, teachers can gain easier access to data sheets and lab and field guides. The new online guide also included updated content for all GLOBE protocols. The GLOBE web management team was given documentation for maintaining the new application. Instructions for modifying the JSP templates and managing database content were included in this document. It was delivered to the team by the end of October, 2003. The National Geophysical Data Center (NGDC) continued to manage the school study site photos on the GLOBE website. 333 study site photo images were added to the GLOBE database and posted on the web during this same time period for 64 schools. Documentation for processing study site photos was also delivered to the new GLOBE web management team. Lastly, assistance was provided in transferring reference applications such as the Cloud and LandSat quizzes and Earth Systems Online Poster from NGDC servers to GLOBE servers along with documentation for maintaining these applications.
2010-01-01
[Fragment; extraction residue from a questionnaire-style document:] ...interface, another providing the application logic (a program used to manipulate the data), and a server running Microsoft SQL Server or Oracle RDBMS. ... Oracle; MySQL (Open Source); Other. What application server software will be needed? Application Server; CGI PHP/Perl (Open Source); ... are used throughout DoD and serve a variety of functions. While DoD has a codified and institutionalized process for the development of a common set...
Li, Xiaojun; Zhu, Lang
2015-07-01
Some oracle bone inscriptions of the Yin-Shang Dynasties were related to stomatology, including specialized terms for diseases of the mouth, tongue and teeth, which were classified, as well as proper nouns for certain specific diseases. Also involved were the witch doctors' exploration of the causes of oral diseases, their observations on the different stages of oral diseases, and their records of oral disease treatment. All of these reflect the sprouting stage of stomatology in the Yin-Shang Dynasties of ancient China.
NASA Technical Reports Server (NTRS)
1984-01-01
Boeing Commercial Airplane Company's Flight Control Department engineers relied on a Langley-developed software package known as ORACLS to develop an advanced control synthesis package for both continuous and discrete control systems. The package was used by Boeing for computerized analysis of new system designs. Resulting applications include a multiple input/output control system for the terrain-following navigation equipment of the Air Force's B-1 bomber, and another for controlling in-flight changes of wing camber on an experimental airplane. ORACLS is one of 1,300 computer programs available from COSMIC.
Real-Time Payload Control and Monitoring on the World Wide Web
NASA Technical Reports Server (NTRS)
Sun, Charles; Windrem, May; Givens, John J. (Technical Monitor)
1998-01-01
World Wide Web (W3) technologies such as the Hypertext Transfer Protocol (HTTP) and the Java object-oriented programming environment offer a powerful, yet relatively inexpensive, framework for distributed application software development. This paper describes the design of a real-time payload control and monitoring system that was developed with W3 technologies at NASA Ames Research Center. Based on the Java Development Kit (JDK) 1.1, the system uses an event-driven "publish and subscribe" approach to inter-process communication and graphical user-interface construction. A C Language Integrated Production System (CLIPS) compatible inference engine provides the back-end intelligent data processing capability, while the Oracle Relational Database Management System (RDBMS) provides the data management function. Preliminary evaluation shows acceptable performance for some classes of payloads, with Java's portability and multimedia support identified as the most significant benefits.
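A minimal sketch of the publish-and-subscribe pattern described (written in Python rather than the paper's JDK 1.1 Java; topic and payload names are invented):

from collections import defaultdict

class EventBus:
    """Toy in-process event bus: handlers subscribe to topics by name."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subs[topic]:
            handler(payload)

bus = EventBus()
# A GUI gauge and an inference-engine feed both consume the same event stream.
bus.subscribe("payload.telemetry", lambda m: print("GUI gauge update:", m))
bus.subscribe("payload.telemetry", lambda m: print("rule engine input:", m))
bus.publish("payload.telemetry", {"sensor": "temp_1", "value": 23.4})

Decoupling producers from consumers this way is what lets the user interface, the inference engine and the database writer evolve independently.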
Equivalence of Szegedy's and coined quantum walks
NASA Astrophysics Data System (ADS)
Wong, Thomas G.
2017-09-01
Szegedy's quantum walk is a quantization of a classical random walk or Markov chain, where the walk occurs on the edges of the bipartite double cover of the original graph. To search, one can simply quantize a Markov chain with absorbing vertices. Recently, Santos proposed two alternative search algorithms that instead utilize the sign-flip oracle in Grover's algorithm rather than absorbing vertices. In this paper, we show that these two algorithms are exactly equivalent to two algorithms involving coined quantum walks, which are walks on the vertices of the original graph with an internal degree of freedom. The first scheme is equivalent to a coined quantum walk with one walk step per query of Grover's oracle, and the second is equivalent to a coined quantum walk with two walk steps per query of Grover's oracle. These equivalences lie outside the previously known equivalence of Szegedy's quantum walk with absorbing vertices and the coined quantum walk with the negative identity operator as the coin for marked vertices, whose precise relationships we also investigate.
Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.
Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen
2011-04-01
Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.
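In symbols, an estimator of this family (our notation, and one plausible form consistent with the abstract rather than the paper's exact display) solves

\hat{f} = \arg\min_{f} \frac{1}{n} \sum_{i=1}^{n} \big( y_i - f(x_i) \big)^2 + \lambda \sum_{j=1}^{p} w_j \, \lVert P^{j} f \rVert,

where P^{j} f is the j-th functional component of the smoothing spline ANOVA decomposition and the adaptive weights w_j grow for components that a pilot estimate \tilde{f} suggests are null, for instance w_j = \lVert P^{j} \tilde{f} \rVert^{-\gamma} with \gamma > 0, so that null components are shrunk exactly to zero while nonnull ones are estimated at the optimal rate.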
A general framework to learn surrogate relevance criterion for atlas based image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Tingting; Ruan, Dan
2016-09-01
Multi-atlas based image segmentation sees great opportunities in the big data era but also faces unprecedented challenges in identifying positive contributors from extensive heterogeneous data. To assess data relevance, image similarity criteria based on various image features widely serve as surrogates for the inaccessible geometric agreement criteria. This paper proposes a general framework to learn image based surrogate relevance criteria to better mimic the behaviors of segmentation based oracle geometric relevance. The validity of its general rationale is verified in the specific context of fusion set selection for image segmentation. More specifically, we first present a unified formulation for surrogate relevance criteria and model the neighborhood relationship among atlases based on the oracle relevance knowledge. Surrogates are then trained to be small for geometrically relevant neighbors and large for irrelevant remotes to the given targets. The proposed surrogate learning framework is verified in corpus callosum segmentation. The learned surrogates demonstrate superiority in inferring the underlying oracle value and selecting relevant fusion set, compared to benchmark surrogates.
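A minimal sketch of the training idea, fitting surrogate weights over image-similarity features so the surrogate tracks an oracle relevance score such as post-registration Dice overlap (synthetic data throughout, with a least-squares fit standing in for the paper's learning procedure):

import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_feats = 200, 4
# Similarity features per atlas-target pair (e.g. intensity correlation,
# mutual information, ...); synthetic here.
X = rng.normal(size=(n_pairs, n_feats))
w_true = np.array([0.8, 0.1, 0.0, -0.3])
y = X @ w_true + 0.05 * rng.normal(size=n_pairs)  # oracle relevance (toy)

w, *_ = np.linalg.lstsq(X, y, rcond=None)  # learned surrogate weights
surrogate = X @ w
print("correlation with oracle:", np.corrcoef(surrogate, y)[0, 1])

# At segmentation time, the atlases with the largest surrogate scores for a
# new target form its fusion set.
fusion_set = np.argsort(surrogate)[-10:]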
Test-state approach to the quantum search problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sehrawat, Arun; Nguyen, Le Huy; Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore 117597
2011-05-15
The search for 'a quantum needle in a quantum haystack' is a metaphor for the problem of finding out which one of a permissible set of unitary mappings - the oracles - is implemented by a given black box. Grover's algorithm solves this problem with quadratic speedup as compared with the analogous search for 'a classical needle in a classical haystack'. Since the outcome of Grover's algorithm is probabilistic - it gives the correct answer with high probability, not with certainty - the answer requires verification. For this purpose we introduce specific test states, one for each oracle. These test states can also be used to realize 'a classical search for the quantum needle' which is deterministic - it always gives a definite answer after a finite number of steps - and 3.41 times as fast as the purely classical search. Since the test-state search and Grover's algorithm look for the same quantum needle, the average number of oracle queries of the test-state search is the classical benchmark for Grover's algorithm.
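For context on the quoted speedups (standard results, not claims specific to this paper): an unstructured search over N alternatives costs about N/2 oracle queries classically on average, whereas Grover's algorithm succeeds with high probability after roughly

k \approx \frac{\pi}{4} \sqrt{N}

queries, the quadratic speedup referred to above; the test-state search trades that speedup for a deterministic answer, at the quoted constant-factor (3.41 times) advantage over the purely classical search.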
Surrogate oracles, generalized dependency and simpler models
NASA Technical Reports Server (NTRS)
Wilson, Larry
1990-01-01
Software reliability models require the sequence of interfailure times from the debugging process as input. It was previously illustrated that using data from replicated debugging could greatly improve reliability predictions. However, inexpensive replication of the debugging process requires the existence of a cheap, fast error detector. Laboratory experiments can be designed around a gold version which is used as an oracle or around an n-version error detector. Unfortunately, software developers cannot be expected to have an oracle or to bear the expense of n versions. A generic technique is being investigated for approximating replicated data by using the partially debugged software as a difference detector. It is believed that the failure rate of each fault depends significantly on the presence or absence of other faults. Thus, in order to discuss a failure rate for a known fault, the presence or absence of each of the other known faults needs to be specified. Simpler models which use shorter input sequences without sacrificing accuracy are also of interest; in fact, a possible gain in performance is conjectured. To investigate these propositions, NASA computers running LIC (RTI) versions are used to generate data. This data will be used to label the debugging graph associated with each version. These labeled graphs will be used to test the utility of a surrogate oracle, to analyze the dependent nature of fault failure rates and to explore the feasibility of reliability models which use the data of only the most recent failures.
Has publication of the results of the ORACLE Children Study changed practice in the UK?
Kenyon, S; Pike, K; Jones, D; Brocklehurst, P; Marlow, N; Salt, A; Taylor, D
2010-10-01
To investigate whether publication of the results of the ORACLE Children's Study, a 7-year follow-up of the ORACLE trial, changed practice with regard to the routine prescription of antibiotics to women with preterm rupture of membranes or spontaneous preterm labour (intact membranes). A comparative questionnaire survey of clinical practice in November 2007 (before publication) and March 2009 (after publication). Lead obstetricians for labour wards of all maternity units in the UK. Self-administered questionnaires requested information about the routine prescription of antibiotics to women with either preterm rupture of membranes or spontaneous preterm labour (intact membranes). Change in practice for prescription of antibiotics. The response rate was 166/214 (78%) in 2007 and 158/209 (76%) in 2009. In total, 120 maternity units responded on both occasions. For women with preterm rupture of membranes, 162/214 (98%) in 2007 and 151/158 (96%) in 2009 maternity units reported that they prescribed antibiotics, with the majority using erythromycin (98%). For women with spontaneous preterm labour (intact membranes), 35/166 (21%) in 2007 and 25/158 (16%) in 2009 maternity units reported that they routinely prescribed antibiotics. The findings from units who responded on both occasions are similar. There has been little change in the reported prescription of antibiotics to women with either preterm rupture of membranes or spontaneous preterm labour following publication of the ORACLE Children's Study. This suggests that current practice may require updated guidance.
Fresnel, A; Jarno, P; Burgun, A; Delamarre, D; Denier, P; Cleret, M; Courtin, C; Seka, L P; Pouliquen, B; Cléran, L; Riou, C; Leduff, F; Lesaux, H; Duvauferrier, R; Le Beux, P
1998-01-01
A pedagogical network has been developed at the University Hospital of Rennes since 1996. The challenge is to give medical information and informatics tools to all medical students in the clinical wards of the University Hospital. At first, nine wards were connected to the medical school server, which is linked to the Internet. Client software comprises electronic mail and the WWW browser Netscape on Macintosh computers. Server software is set up on a Unix SUN machine providing a local homepage with selected pedagogical resources. These documents are stored in an ORACLE DBMS database, and queries can be made by specialty, author or disease. The students can access a set of interactive teaching programs or electronic textbooks and can explore the Internet through the library information system and search engines. The teachers can submit URLs and indexing of pedagogical documents and can produce clinical cases; the database updating will be done by the users. This experience of using Web tools generated enthusiasm when we first introduced it to students. The evaluation shows that if the students can use this training early on, they will adapt the resources of the Internet to their own needs.
CRYSTMET—The NRCC Metals Crystallographic Data File
Wood, Gordon H.; Rodgers, John R.; Gough, S. Roger; Villars, Pierre
1996-01-01
CRYSTMET is a computer-readable database of critically evaluated crystallographic data for metals (including alloys, intermetallics and minerals) accompanied by pertinent chemical, physical and bibliographic information. It currently contains about 60 000 entries and covers the literature exhaustively from 1913. Scientific editing of the abstracted entries, consisting of numerous automated and manual checks, is done to ensure consistency with related, previously published studies, to assign structure types where necessary and to help guarantee the accuracy of the data and related information. Analyses of the entries and their distribution across key journals as a function of time show interesting trends in the complexity of the compounds studied as well as in the elements they contain. Two applications of CRYSTMET are the identification of unknowns and the prediction of properties of materials. CRYSTMET is available either online or via license of a private copy from the Canadian Scientific Numeric Database Service (CAN/SND). The indexed online search and analysis system is easy and economical to use yet fast and powerful. Development of a new system is under way combining the capabilities of ORACLE with the flexibility of a modern interface based on the Netscape browsing tool. PMID:27805157
Designing the Search Service for Enterprise Portal based on Oracle Universal Content Management
NASA Astrophysics Data System (ADS)
Bauer, K. S.; Kuznetsov, D. Y.; Pominov, A. D.
2017-01-01
Enterprise Portal is an important part of an organization in the informative and innovative space. The portal provides collaboration between employees and the organization. This article gives valuable background on Enterprise Portal technologies. The paper presents Oracle WebCenter Portal and UCM Server integration in detail. The focus is on tools for the Enterprise Portal and on the Search Service in particular. The paper also presents several UML diagrams describing use cases for the Search Service and the main components of this application.
Monitoring of services with non-relational databases and map-reduce framework
NASA Astrophysics Data System (ADS)
Babik, M.; Souto, F.
2012-12-01
Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
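As an illustration of the availability computation described above, phrased as map and reduce steps over raw probe results, here is a minimal pure-Python sketch; the sample tuples and site/service names are invented, and this is not the SAM/SWAT code itself.

```python
# Availability = OK samples / total samples per (site, service), map-reduce style.
from collections import defaultdict

# (site, service, status) samples as they might arrive from probe jobs
samples = [
    ("CERN", "CE", "OK"), ("CERN", "CE", "CRITICAL"),
    ("CERN", "SRM", "OK"), ("FNAL", "CE", "OK"),
]

def map_phase(sample):
    site, service, status = sample
    # emit key -> (ok_count, total_count)
    return (site, service), (1 if status == "OK" else 0, 1)

def reduce_phase(pairs):
    acc = defaultdict(lambda: [0, 0])
    for key, (ok, total) in pairs:
        acc[key][0] += ok
        acc[key][1] += total
    return {key: ok / total for key, (ok, total) in acc.items()}

availability = reduce_phase(map(map_phase, samples))
print(availability)  # {('CERN', 'CE'): 0.5, ('CERN', 'SRM'): 1.0, ('FNAL', 'CE'): 1.0}
```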
NASA Astrophysics Data System (ADS)
Castagnoli, Giuseppe
2017-05-01
The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete as it lacks the initial measurement. We extend it to the process of setting the problem. An initial measurement selects a problem setting at random, and a unitary transformation sends it into the desired setting. The extended representation must be with respect to Bob, the problem setter, and any external observer. It cannot be with respect to Alice, the problem solver. It would tell her the problem setting and thus the solution of the problem implicit in it. In the representation to Alice, the projection of the quantum state due to the initial measurement should be postponed until the end of the quantum algorithm. In either representation, there is a unitary transformation between the initial and final measurement outcomes. As a consequence, the final measurement of any ℛ-th part of the solution could select back in time a corresponding part of the random outcome of the initial measurement; the associated projection of the quantum state should be advanced by the inverse of that unitary transformation. This, in the representation to Alice, would tell her, before she begins her problem solving action, that part of the solution. The quantum algorithm should be seen as a sum over classical histories in each of which Alice knows in advance one of the possible ℛ-th parts of the solution and performs the oracle queries still needed to find it - this for the value of ℛ that explains the algorithm's speedup. We have a relation between retrocausality ℛ and the number of oracle queries needed to solve an oracle problem quantumly. All the oracle problems examined can be solved with any value of ℛ up to an upper bound attained by the optimal quantum algorithm. This bound is always in the vicinity of 1/2. Moreover, ℛ = 1/2 always provides the order of magnitude of the number of queries needed to solve the problem in an optimal quantum way. If this were true for any oracle problem, as is plausible, it would solve the quantum query complexity problem.
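To make the ℛ = 1/2 heuristic concrete for unstructured search over N = 2^n items (our illustration, not the paper's derivation): a solver who knows half of the n solution bits in advance faces 2^(n/2) = √N remaining candidates, so the classical query count matches Grover's quantum one.

```latex
\underbrace{O\bigl(2^{n/2}\bigr)}_{\text{classical search over the unknown half}}
\;=\; O\bigl(\sqrt{N}\bigr)
\;=\; \underbrace{O\bigl(\sqrt{N}\bigr)}_{\text{Grover's oracle-query count}}
```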
NASA Astrophysics Data System (ADS)
Segal-Rosenhaimer, M.; Knobelspiesse, K. D.; Redemann, J.; Cairns, B.; Alexandrov, M. D.
2016-12-01
The ORACLES (ObseRvations of Aerosols above CLouds and their intEractionS) campaign is taking place in the South-East Atlantic during the Austral Spring for three consecutive years, 2016-2018. The study area encompasses one of the Earth's three semi-permanent subtropical stratocumulus (Sc) cloud decks, and experiences very large aerosol optical depths, mainly from biomass burning originating in Africa. Over time, cloud optical depth (COD), lifetime and cloud microphysics (number concentration, effective radius Reff and precipitation) are expected to be influenced by indirect aerosol effects. These changes play a key role in the energetic balance of the region, and are part of the core investigation objectives of the ORACLES campaign, which acquires measurements of clean and polluted scenes of above-cloud aerosols (ACA). Simultaneous retrievals of aerosol and cloud optical properties are being developed (e.g. MODIS, OMI), but remain challenging, especially for passive, single-viewing-angle instruments. By comparison, multiangle polarimetric instruments like the RSP (Research Scanning Polarimeter) show promise for detection and quantification of ACA; however, there are no operational retrieval algorithms available yet. Here we describe a new algorithm to retrieve cloud and aerosol optical properties from observations by the RSP flown on the ER-2 and P-3 during the 2016 ORACLES campaign. The algorithm is based on training a neural network (NN), and is intended to retrieve aerosol and cloud properties simultaneously. However, the first step was to establish the retrieval scheme for low-level Sc cloud optical properties. The NN training was based on simulated RSP total and polarized radiances for a range of COD, Reff and effective variances, spanning 7 wavelength bands and 152 viewing zenith angles. Random and correlated noise were added to the simulations to achieve a more realistic representation of the signals. Before introducing the input variables to the network, the signals are projected onto a principal component plane that retains the maximal signal information but minimizes the noise contribution. We will discuss parameter choices for the network and present preliminary results of cloud retrievals from ORACLES, compared with the standard RSP low-level cloud retrieval method that has been validated against in situ observations.
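The following is a schematic stand-in for the retrieval pipeline described above: project simulated multi-angle radiances onto principal components, then train a neural network to map them to (COD, Reff). The forward model, noise levels and network sizes are invented for illustration; this is not the actual RSP code.

```python
# Synthetic PCA + NN retrieval sketch (toy data, not RSP radiative transfer).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_sim, n_channels = 5000, 7 * 152          # 7 bands x 152 viewing angles
targets = rng.uniform([1.0, 5.0], [30.0, 25.0], size=(n_sim, 2))  # COD, Reff (um)
# Toy forward model: radiances as a smooth function of the targets plus noise
basis = rng.normal(size=(2, n_channels))
radiances = np.tanh(targets @ basis / 30.0) + 0.01 * rng.normal(size=(n_sim, n_channels))

pca = PCA(n_components=20).fit(radiances)   # keep signal, suppress noise
features = pca.transform(radiances)

nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(features, targets)
print(nn.predict(pca.transform(radiances[:3])))  # retrieved (COD, Reff)
```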
Design and Development of a Web-Based Self-Monitoring System to Support Wellness Coaching.
Zarei, Reza; Kuo, Alex
2017-01-01
We analyzed, designed and deployed a web-based self-monitoring system to support wellness coaching. A wellness coach can plan clients' exercise and diet through the system and is able to monitor the changes in body dimensions and body composition that the client reports. The system can also visualize the client's data in the form of graphs for both the client and the coach. Both parties can communicate through the messaging feature embedded in the application. A reminder component is also incorporated and sends reminder messages to clients when their reporting is due. The web-based self-monitoring application uses Oracle 11g XE as the backend database and Application Express 4.2 as the user-interface development tool. The system allows users to access, update and modify data through a web browser anytime, anywhere, and on any device.
Processing of the WLCG monitoring data using NoSQL
NASA Astrophysics Data System (ADS)
Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.
2014-06-01
The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.
Archiving and Distributing Seismic Data at the Southern California Earthquake Data Center (SCEDC)
NASA Astrophysics Data System (ADS)
Appel, V. L.
2002-12-01
The Southern California Earthquake Data Center (SCEDC) archives and provides public access to earthquake parametric and waveform data gathered by the Southern California Seismic Network (SCSN) and, since January 1, 2001, the TriNet seismic network, southern California's earthquake monitoring network. The parametric data in the archive include earthquake locations, magnitudes, moment-tensor solutions and phase picks. The SCEDC waveform archive prior to TriNet consists primarily of short-period, 100-samples-per-second waveforms from the SCSN. The TriNet array added continuous recordings from 155 broadband stations (20 samples per second or less), and triggered seismograms from 200 accelerometers and 200 short-period instruments. Since the Data Center and TriNet use the same Oracle database system, new earthquake data are available to the seismological community in near real time. Primary access to the database and waveforms is through the Seismogram Transfer Program (STP) interface. The interface enables users to search the database for earthquake information, phase picks, and continuous and triggered waveform data. Output is available in SAC, miniSEED, and other formats. Both the raw-counts format (V0) and the gain-corrected format (V1) of COSMOS (Consortium of Organizations for Strong-Motion Observation Systems) are now supported by STP. EQQuest is an interface to prepackaged waveform data sets for selected earthquakes in Southern California stored at the SCEDC. Waveform data for large-magnitude events have been prepared, and new data sets will be available for download in near real time following major events. The parametric data from 1981 to the present have been loaded into the Oracle 9.2.0.1 database system, and the waveforms for that time period have been converted to miniSEED format and are accessible through the STP interface. The DISC optical-disk system (the "jukebox") that currently serves as mass storage for the SCEDC is in the process of being replaced with a series of inexpensive, high-capacity (1.6-Tbyte) magnetic-disk RAIDs. These systems are built with PC-technology components, using 16 120-Gbyte IDE disks, hot-swappable disk trays, two RAID controllers, dual redundant power supplies and a Linux operating system. The system is configured over a private gigabit network that connects to the two Data Center servers and spans the Seismological Lab and the USGS. To ensure data integrity, each RAID disk system constantly checks itself against its twin and verifies file integrity using 128-bit MD5 file checksums that are stored separately from the system. The final level of data protection is a Sony AIT-3 tape backup of the files. The primary advantage of the magnetic-disk approach is faster data access, because magnetic disk drives have almost no latency. This means that the SCEDC can provide better "on-demand" interactive delivery of the seismograms in the archive.
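A minimal sketch of the integrity check described above: recompute a file's 128-bit MD5 and compare it with a checksum held outside the storage system. The paths and checksum values are assumptions for illustration.

```python
# Recompute and verify an MD5 checksum, streaming the file in chunks.
import hashlib

def md5_of(path, chunk_size=1 << 20):
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected_hex):
    ok = md5_of(path) == expected_hex
    print(f"{path}: {'OK' if ok else 'CORRUPT'}")
    return ok

# verify("/archive/2002/12/evt123.mseed", "9e107d9d372bb6826bd81d3542a419d6")
```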
Irreconcilable difference between quantum walks and adiabatic quantum computing
NASA Astrophysics Data System (ADS)
Wong, Thomas G.; Meyer, David A.
2016-06-01
Continuous-time quantum walks and adiabatic quantum evolution are two general techniques for quantum computing, both of which are described by Hamiltonians that govern their evolutions by Schrödinger's equation. In the former, the Hamiltonian is fixed, while in the latter, the Hamiltonian varies with time. As a result, their formulations of Grover's algorithm evolve differently through Hilbert space. We show that this difference is fundamental; they cannot be made to evolve along each other's path without introducing structure more powerful than the standard oracle for unstructured search. For an adiabatic quantum evolution to evolve like the quantum walk search algorithm, it must interpolate between three fixed Hamiltonians, one of which is complex and introduces structure that is stronger than the oracle for unstructured search. Conversely, for a quantum walk to evolve along the path of the adiabatic search algorithm, it must be a chiral quantum walk on a weighted, directed star graph with structure that is also stronger than the oracle for unstructured search. Thus, the two techniques, although similar in being described by Hamiltonians that govern their evolution, compute by fundamentally irreconcilable means.
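Schematically, using standard textbook forms rather than the specific Hamiltonians constructed in the paper, the two search drivers are:

```latex
% Continuous-time quantum walk search (fixed Hamiltonian; L is the graph
% Laplacian, |w> the marked state, gamma the hopping rate):
H_{\mathrm{walk}} \;=\; -\gamma L \;-\; |w\rangle\langle w|,
% Adiabatic quantum search (time-varying interpolation between H_0 and H_1):
H(s) \;=\; (1-s)\,H_0 \;+\; s\,H_1, \qquad s = t/T \in [0,1].
```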
NASA Astrophysics Data System (ADS)
Gayley, Kenneth
2013-06-01
The predictions of the famous Greek oracle of Delphi were just ambiguous enough to seem to convey information, yet the user was only seeing their own thoughts. Are there ways in which X-ray spectral analysis is like that oracle? It is shown, using heuristic, generic response functions to mimic actual spectral inversion, that the widely known ill-conditioning, which makes formal inversion impossible in the presence of random noise, also makes a wide variety of different source distributions (DEMs) produce quite similar X-ray continua and resonance-line fluxes. Indeed, the sole robustly inferable attribute of a thermal, optically thin resonance-line spectrum with normal abundances in collisional ionization equilibrium (CIE) is its average temperature. The shape of the DEM distribution, on the other hand, is not well constrained, and may actually depend more on the analysis method, no matter how sophisticated, than on the source plasma. The case is made that X-ray spectra can tell us average temperature, metallicity and absorbing column, but the main thing they cannot tell us is the main thing they are most often used to infer: the differential emission measure distribution.
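The inversion at issue is the standard differential emission measure equation, a Fredholm integral of the first kind (the standard form, not the abstract's specific heuristic kernels):

```latex
F_i \;=\; \int K_i(T)\,\mathrm{DEM}(T)\,\mathrm{d}T,
\qquad
\mathrm{DEM}(T) \;=\; n_e^{2}\,\frac{\mathrm{d}V}{\mathrm{d}T},
```

where F_i is the observed flux in line or band i and K_i(T) its temperature response. Broad, overlapping kernels K_i are exactly what allow many different DEM shapes to reproduce the same fluxes within the noise.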
Efficient classical simulation of the Deutsch-Jozsa and Simon's algorithms
NASA Astrophysics Data System (ADS)
Johansson, Niklas; Larsson, Jan-Åke
2017-09-01
A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources are enough. Two examples of quantum speed-up are the Deutsch-Jozsa and Simon's problems, both efficiently solvable on a quantum Turing machine, and both believed to lack efficient classical solutions. Here we present a framework that can simulate both quantum algorithms efficiently, solving the Deutsch-Jozsa problem with probability 1 using only one oracle query, and Simon's problem using linearly many oracle queries, just as expected of an ideal quantum computer. The presented simulation framework is in turn efficiently simulatable in a classical probabilistic Turing machine. This shows that the Deutsch-Jozsa and Simon's problems do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up when compared with their corresponding classical simulation. Finally, this gives insight into what properties are needed in the two algorithms and calls for further study of oracle separation between quantum and classical computation.
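For readers unfamiliar with the single-query property mentioned above, here is a textbook Deutsch-Jozsa check in plain linear algebra (our illustration; not the paper's simulation framework). After H^n, one phase-oracle call, and H^n, the amplitude on |0...0> is 1 for a constant f and 0 for a balanced f.

```python
import numpy as np

def deutsch_jozsa_zero_amplitude(f, n):
    xs = np.arange(2 ** n)
    phases = np.array([(-1) ** f(x) for x in xs])
    # amplitude of |0...0> after H^n . O_f . H^n |0...0>
    return phases.sum() / 2 ** n

n = 4
constant = lambda x: 0
balanced = lambda x: bin(x).count("1") % 2  # parity is balanced on n >= 1 bits
print(deutsch_jozsa_zero_amplitude(constant, n))  # 1.0 -> measure |0...0> surely
print(deutsch_jozsa_zero_amplitude(balanced, n))  # 0.0 -> never measure |0...0>
```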
Solution to the satisfiability problem using a complete Grover search with trapped ions
NASA Astrophysics Data System (ADS)
Yang, Wan-Li; Wei, Hua; Zhou, Fei; Chang, Weng-Long; Feng, Mang
2009-07-01
The main idea in the original Grover search (1997 Phys. Rev. Lett. 79 325) is to single out a target state containing the solution to a search problem by amplifying the amplitude of that state, following the job of the Oracle, i.e., a black box giving us information about the target state. We design quantum circuits to accomplish a complete Grover search involving both the Oracle's job and the amplification of the target state, which are employed to solve satisfiability (SAT) problems. We explore how to carry out the quantum circuits with currently available ion-trap quantum computing technology.
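A toy statevector version of a complete Grover search for a small SAT instance follows (illustrative only; the paper implements this with ion-trap circuits, and the formula below is our choice). The oracle phase-flips satisfying assignments; diffusion inverts amplitudes about their mean.

```python
import numpy as np

def sat(x):  # (a|b) & (a|!c) & (!a|!b) & (b|c) over bits a,b,c of x
    a, b, c = (x >> 2) & 1, (x >> 1) & 1, x & 1
    return bool((a or b) and (a or not c) and (not a or not b) and (b or c))

n = 3
N = 2 ** n
marked = np.array([sat(x) for x in range(N)])     # two solutions: 010, 101
psi = np.full(N, 1 / np.sqrt(N))                  # uniform superposition
iterations = max(1, int(np.pi / 4 * np.sqrt(N / marked.sum())))
for _ in range(iterations):
    psi[marked] *= -1                             # oracle call: phase flip
    psi = 2 * psi.mean() - psi                    # inversion about the mean
probs = psi ** 2
print({format(x, "03b"): round(p, 3) for x, p in enumerate(probs) if p > 0.05})
# -> {'010': 0.5, '101': 0.5}: all probability on the satisfying assignments
```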
BioMart Central Portal: an open database network for the biological community
Guberman, Jonathan M.; Ai, J.; Arnaiz, O.; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J.; Di Génova, A.; Forbes, Simon; Fujisawa, T.; Gadaleta, E.; Goodstein, D. M.; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S.; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R.; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J.; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S.; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B.; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J.; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D. T.; Wong-Erasmus, Marie; Yao, L.; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek
2011-01-01
BioMart Central Portal is a first-of-its-kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and an array of tools makes Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities. Database URL: http://central.biomart.org. PMID:21930507
CruiseViewer: SIOExplorer Graphical Interface to Metadata and Archives.
NASA Astrophysics Data System (ADS)
Sutton, D. W.; Helly, J. J.; Miller, S. P.; Chase, A.; Clark, D.
2002-12-01
We are introducing "CruiseViewer" as a prototype graphical interface for the SIOExplorer digital library project, part of the overall NSF National Science Digital Library (NSDL) effort. When complete, CruiseViewer will provide access to nearly 800 cruises, as well as 100 years of documents and images from the archives of the Scripps Institution of Oceanography (SIO). The project emphasizes data object accessibility, a rich metadata format, efficient uploading methods and interoperability with other digital libraries. The primary function of CruiseViewer is to provide a human interface to the metadata database and to storage systems filled with archival data. The system schema is based on the concept of an "arbitrary digital object" (ADO). Arbitrary in that if the object can be stored on a computer system then SIOExplore can manage it. Common examples are a multibeam swath bathymetry file, a .pdf cruise report, or a tar file containing all the processing scripts used on a cruise. We require a metadata file for every ADO in an ascii "metadata interchange format" (MIF), which has proven to be highly useful for operability and extensibility. Bulk ADO storage is managed using the Storage Resource Broker, SRB, data handling middleware developed at the San Diego Supercomputer Center that centralizes management and access to distributed storage devices. MIF metadata are harvested from several sources and housed in a relational (Oracle) database. For CruiseViewer, cgi scripts resident on an Apache server are the primary communication and service request handling tools. Along with the CruiseViewer java application, users can query, access and download objects via a separate method that operates through standard web browsers, http://sioexplorer.ucsd.edu. Both provide the functionability to query and view object metadata, and select and download ADOs. For the CruiseViewer application Java 2D is used to add a geo-referencing feature that allows users to select basemap images and have vector shapes representing query results mapped over the basemap in the image panel. The two methods together address a wide range of user access needs and will allow for widespread use of SIOExplorer.
Bath, Timothy G; Bozdag, Selcuk; Afzal, Vackar; Crowther, Daniel
2011-05-13
Laboratory Information Management Systems (LIMS) are an increasingly important part of modern laboratory infrastructure. As typically very sophisticated software products, LIMS often require considerable resources to select, deploy and maintain. Larger organisations may have access to specialist IT support to assist with requirements elicitation and software customisation; however, smaller groups will often have limited IT support to perform the kind of iterative development that can resolve the difficulties that biologists often have when specifying requirements. Translational medicine aims to accelerate the process of treatment discovery by bringing together multiple disciplines to discover new approaches to treating disease, or novel applications of existing treatments. The diverse set of disciplines and the complexity of the processing procedures involved, especially with the use of high-throughput technologies, make it difficult to customize a generic LIMS to provide a single system for managing sample-related data within a translational medicine research setting, especially where limited IT support is available. We have designed and developed a LIMS, BonsaiLIMS, around a very simple data model that can be easily implemented using a variety of technologies, and can be easily extended as specific requirements dictate. A reference implementation using an Oracle 11g database and the Python web framework Django is presented. By focusing on a minimal feature set and a modular design we have been able to deploy the BonsaiLIMS system very quickly. The benefits to our institute have been the avoidance of the prolonged implementation timescales, budget overruns, scope creep, off-specifications and user fatigue issues that typify many enterprise software implementations. The transition away from using local, uncontrolled records in spreadsheet and paper formats to a centrally held, secured and backed-up database brings the immediate benefits of improved data visibility, audit and overall data quality. The open-source availability of this software allows others to rapidly implement a LIMS which might in itself sufficiently address user requirements. In situations where this software does not meet requirements, it can serve to elicit more accurate specifications from end-users for a more heavyweight LIMS by acting as a demonstrable prototype. PMID:21569484
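A minimal sketch of the kind of simple, extensible data model the abstract describes, written as Django ORM models; the model and field names are our assumptions, not BonsaiLIMS's actual schema, and the code assumes a configured Django project.

```python
from django.db import models

class Sample(models.Model):
    barcode = models.CharField(max_length=64, unique=True)
    sample_type = models.CharField(max_length=128)
    received = models.DateTimeField(auto_now_add=True)

class Aliquot(models.Model):
    parent = models.ForeignKey(Sample, on_delete=models.CASCADE,
                               related_name="aliquots")
    volume_ul = models.FloatField()
    location = models.CharField(max_length=128)  # e.g. freezer/rack/slot

# Usage (inside a configured Django project):
#   s = Sample.objects.create(barcode="S-0001", sample_type="plasma")
#   s.aliquots.create(volume_ul=200.0, location="F1/R3/S12")
```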
A Python object-oriented framework for the CMS alignment and calibration data
NASA Astrophysics Data System (ADS)
Dawes, Joshua H.; CMS Collaboration
2017-10-01
The Alignment, Calibrations and Databases group at the CMS Experiment delivers Alignment and Calibration Conditions Data to a large set of workflows which process recorded event data and produce simulated events. The current infrastructure for releasing and consuming Conditions Data was designed in the two years of the first LHC long shutdown to respond to use cases from the preceding data-taking period. During the second run of the LHC, new use cases were defined. For the consumption of Conditions Metadata, no common interface existed for the detector experts to use in Python-based custom scripts, resulting in many different querying and transaction management patterns. A new framework has been built to address such use cases: a simple object-oriented tool that detector experts can use to read and write Conditions Metadata when using Oracle and SQLite databases, that provides a homogeneous method of querying across all services. The tool provides mechanisms for segmenting large sets of conditions while releasing them to the production database, allows for uniform error reporting to the client-side from the server-side and optimizes the data transfer to the server. The architecture of the new service has been developed exploiting many of the features made available by the metadata consumption framework to implement the required improvements. This paper presents the details of the design and implementation of the new metadata consumption and data upload framework, as well as analyses of the new upload service’s performance as the server-side state varies.
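The following sketch illustrates the idea of one homogeneous query layer serving both Oracle and SQLite back ends, here using the SQLAlchemy ORM; the table and column names are hypothetical, and this is not the CMS tool itself.

```python
from sqlalchemy import create_engine, Column, String, Integer
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Tag(Base):
    __tablename__ = "tag"
    name = Column(String(255), primary_key=True)
    time_type = Column(String(32))
    last_iov = Column(Integer)

# Same code, different URL: "oracle+cx_oracle://user:pw@host/service" in
# production, "sqlite:///conditions.db" for local work.
engine = create_engine("sqlite:///conditions.db")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

with Session() as session:
    session.add(Tag(name="BeamSpot_v1", time_type="run", last_iov=316000))
    session.commit()
    for tag in session.query(Tag).filter(Tag.time_type == "run"):
        print(tag.name, tag.last_iov)
```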
Storage and distribution of pathology digital images using integrated web-based viewing systems.
Marchevsky, Alberto M; Dulbandzhyan, Ronda; Seely, Kevin; Carey, Steve; Duncan, Raymond G
2002-05-01
Health care providers have expressed increasing interest in incorporating digital images of gross pathology specimens and photomicrographs in routine pathology reports. To describe the multiple technical and logistical challenges involved in the integration of the various components needed for the development of a system for integrated Web-based viewing, storage, and distribution of digital images in a large health system. An Oracle version 8.1.6 database was developed to store, index, and deploy pathology digital photographs via our Intranet. The database allows for retrieval of images by patient demographics or by SNOMED code information. The Intranet of a large health system accessible from multiple computers located within the medical center and at distant private physician offices. The images can be viewed using any of the workstations of the health system that have authorized access to our Intranet, using a standard browser or a browser configured with an external viewer or inexpensive plug-in software, such as Prizm 2.0. The images can be printed on paper or transferred to film using a digital film recorder. Digital images can also be displayed at pathology conferences by using wireless local area network (LAN) and secure remote technologies. The standardization of technologies and the adoption of a Web interface for all our computer systems allows us to distribute digital images from a pathology database to a potentially large group of users distributed in multiple locations throughout a large medical center.
Moseley, Anne M; Sherrington, Catherine; Elkins, Mark R; Herbert, Robert D; Maher, Christopher G
2009-09-01
To compare the comprehensiveness of indexing the reports of randomised controlled trials of physiotherapy interventions by eight bibliographic databases (AMED, CENTRAL, CINAHL, EMBASE, Hooked on Evidence, PEDro, PsycINFO and PubMed). Audit of bibliographic databases. Two hundred and eighty-one reports of randomised controlled trials of physiotherapy interventions were identified by screening the reference lists of 30 relevant systematic reviews published in four consecutive issues of the Cochrane Database of Systematic Reviews (Issue 3, 2007 to Issue 2, 2008). AMED, CENTRAL, CINAHL, EMBASE, Hooked on Evidence, PEDro, PsycINFO and PubMed were used to search for the trial reports. The number of trial reports indexed in each database was calculated. PEDro indexed 99% of the trial reports, CENTRAL indexed 98%, PubMed indexed 91%, EMBASE indexed 82%, CINAHL indexed 61%, Hooked on Evidence indexed 40%, AMED indexed 36% and PsycINFO indexed 17%. Most trial reports (92%) were indexed on four or more of the databases. One trial report was indexed on a single database (PEDro). Of the eight bibliographic databases examined, PEDro and CENTRAL provide the most comprehensive indexing of reports of randomised trials of physiotherapy interventions.
Interfacing External Quantum Devices to a Universal Quantum Computer
Lagana, Antonio A.; Lohe, Max A.; von Smekal, Lorenz
2011-01-01
We present a scheme to use external quantum devices using the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well known oracle based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and the Grover algorithms using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer. PMID:22216276
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite-sample properties of the regularized high-dimensional Cox regression via lasso. The existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso-penalized Cox regression, using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses. PMID:24516328
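For context, lasso oracle inequalities typically take the following generic shape (an illustrative template under a restricted eigenvalue condition, not the paper's exact statement), bounding error in terms of the sparsity s and the tuning parameter λ:

```latex
% With \lambda \asymp \sqrt{\log p / n} and high probability 1 - O(p^{-c}):
\|\hat{\beta} - \beta^{*}\|_{1} \;\lesssim\; s\,\lambda,
\qquad
\text{excess risk} \;\lesssim\; s\,\lambda^{2}.
```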
Identity-Based Verifiably Encrypted Signatures without Random Oracles
NASA Astrophysics Data System (ADS)
Zhang, Lei; Wu, Qianhong; Qin, Bo
Fair exchange protocol plays an important role in electronic commerce in the case of exchanging digital contracts. Verifiably encrypted signatures provide an optimistic solution to these scenarios with an off-line trusted third party. In this paper, we propose an identity-based verifiably encrypted signature scheme. The scheme is non-interactive to generate verifiably encrypted signatures and the resulting encrypted signature consists of only four group elements. Based on the computational Diffie-Hellman assumption, our scheme is proven secure without using random oracles. To the best of our knowledge, this is the first identity-based verifiably encrypted signature scheme provably secure in the standard model.
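For reference, the hardness assumption named above has the following standard statement (the generic form; the concrete groups used by the scheme are not specified in the abstract):

```latex
% Computational Diffie-Hellman (CDH) assumption in a cyclic group G = <g>
% of prime order q:
\text{given } (g,\; g^{a},\; g^{b}) \in G^{3} \text{ for random } a, b \in \mathbb{Z}_{q},
\text{ it is computationally infeasible to output } g^{ab}.
```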
Quantitative Evaluation of 3 DBMS: ORACLE, SEED AND INGRES
NASA Technical Reports Server (NTRS)
Sylto, R.
1984-01-01
Characteristics required for NASA scientific database management applications are listed, as well as performance-testing objectives. Results obtained for the ORACLE, SEED, and INGRES packages are presented in charts. It is concluded that vendor packages can manage 130 megabytes of data at acceptable load and query rates. Performance tests that vary database designs and database management system parameters are valuable to applications for choosing packages and critical for designing effective databases. An application's productivity increases with the use of a database management system because of enhanced capabilities such as a screen formatter, a report writer, and a data dictionary.
Automatic Generation of Test Oracles - From Pilot Studies to Application
NASA Technical Reports Server (NTRS)
Feather, Martin S.; Smith, Ben
1998-01-01
There is a trend towards the increased use of automation in V&V. Automation can yield savings in time and effort. For critical systems, where thorough V&V is required, these savings can be substantial. We describe a progression from pilot studies to the development and use of V&V automation. We used pilot studies to ascertain opportunities for, and the suitability of, automating various analyses whose results would contribute to V&V. These studies culminated in the development of an automatic generator of automated test oracles. This was then applied and extended in the course of testing an AI planning system that is a key component of an autonomous spacecraft.
Sparse PCA with Oracle Property
Gu, Quanquan; Wang, Zhaoran; Liu, Han
2014-01-01
In this paper, we study the estimation of the k-dimensional sparse principal subspace of the covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains a √(s/n) statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator, which enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets. PMID:25684971
Hutten, Helmut; Stiegmaier, Wolfgang; Rauchegger, Günter
2005-09-01
Modern lifestyles require new methods for individual lifelong learning, based on access at any time and from any place. This fundamental requirement is met by the Internet. Internet technology promises an increasing potential for e-learning or tele-learning in the future. Some special requirements are password-controlled access, applicability of most commercially available PCs and laptops equipped with standard software (Microsoft Internet Explorer 6.0), central evaluation of the students' performance, inclusion of an examination part, provision of a picture gallery, and a comprehensive glossary accessible in the learning mode. The KISS shell (Kontrolliertes Intelligentes Selbstgesteuertes Studium, i.e. controlled, intelligent, self-directed study) has been developed based on the Oracle 10g application server in combination with a relational database (Oracle 8i) on the server side, and a web-browser-based interface using JavaScript for user control of data input on the client side. The first tutorial application has been realized with a chapter about cardiac pacemakers. The weight of that chapter (or module) is about 2 ECTS credits (European Credit Transfer System; i.e. the equivalent of 30 working hours). The internal structure of the chapter is organized sequentially. It consists of five main sections, each subdivided into five subsections of comparable length. Progression from one subsection to the next is possible only after successfully passing the respective examination. The whole learning programme, with the pacemaker chapter, has been evaluated by 10 students. The system is presented together with first experiences, including the evaluation results. Until now the programme has not been used for training purposes.
The ATLAS PanDA Monitoring System and its Evolution
NASA Astrophysics Data System (ADS)
Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.
2011-12-01
The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.
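A sketch of the presentation split the abstract describes: the Django server returns JSON and the browser renders it via AJAX. The view, URL and field names below are hypothetical, not the actual PanDA monitor code.

```python
from django.http import JsonResponse

def job_summary(request):
    site = request.GET.get("site", "all")
    # In the real system this would query the Oracle-backed job database;
    # here we return a canned payload for illustration.
    payload = {
        "site": site,
        "states": {"running": 1240, "finished": 98213, "failed": 3107},
    }
    return JsonResponse(payload)

# urls.py (hypothetical):
#   urlpatterns = [path("jobs/summary/", job_summary)]
# The AJAX front end then fetches /jobs/summary/?site=CERN and renders tables.
```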
UCMP and the Internet help hospital libraries share resources.
Dempsey, R; Weinstein, L
1999-07-01
The Medical Library Center of New York (MLCNY), a medical library consortium founded in 1959, has specialized in supporting resource sharing and fostering technological advances. In 1961, MLCNY developed and continues to maintain the Union Catalog of Medical Periodicals (UCMP), a resource tool including detailed data about the collections of more than 720 medical library participants. UCMP was one of the first library tools to capitalize on the benefits of computer technology and, from the beginning, invited hospital libraries to play a substantial role in its development. UCMP, beginning with products in print and later in microfiche, helped to create a new resource sharing environment. Today, UCMP continues to capitalize on new technology by providing access via the Internet and an Oracle-based search system providing subscribers with the benefits of: a database that contains serial holdings information on an issue-specific level, a database that can be updated in real time, a system that provides multi-type searching and allows users to define how the results will be sorted, and an ordering function that can more precisely target libraries that have a specific issue of a medical journal. Current development of a Web-based system will ensure that UCMP continues to provide cost-effective and efficient resource sharing in future years. PMID:10427426
Chen, Josephine; Zhao, Po; Massaro, Donald; Clerch, Linda B.; Almon, Richard R.; DuBois, Debra C.; Jusko, William J.; Hoffman, Eric P.
2004-01-01
Publicly accessible DNA databases (genome browsers) are rapidly accelerating post-genomic research (see http://www.genome.ucsc.edu/), with integrated genomic DNA, gene structure, EST/splicing and cross-species ortholog data. DNA databases have relatively low dimensionality; the genome is a linear code that anchors all associated data. In contrast, RNA expression and protein databases need to be able to handle very high dimensional data, with time, tissue, cell type and genes as interrelated variables. The high dimensionality of microarray expression profile data, and the lack of a standard experimental platform, have complicated the development of web-accessible databases and analytical tools. We have designed and implemented a public resource of expression profile data containing 1024 human, mouse and rat Affymetrix GeneChip expression profiles, generated in the same laboratory and subject to the same quality and procedural controls (Public Expression Profiling Resource; PEPR). Our Oracle-based PEPR data warehouse includes a novel time series query analysis tool (SGQT), enabling dynamic generation of graphs and spreadsheets showing the action of any transcript of interest over time. In this report, we demonstrate the utility of this tool using a 27-time-point in vivo muscle regeneration series. This data warehouse and associated analysis tools provide access to multidimensional microarray data through web-based interfaces, both for download of all types of raw data for independent analysis, and also for straightforward gene-based queries. Planned implementations of PEPR will include web-based remote entry of projects adhering to quality control and standard operating procedure (QC/SOP) criteria, and automated output of alternative probe set algorithms for each project (see http://microarray.cnmcresearch.org/pgadatatable.asp). PMID:14681485
Accessing and distributing EMBL data using CORBA (common object request broker architecture)
Wang, Lichun; Rodriguez-Tomé, Patricia; Redaschi, Nicole; McNeil, Phil; Robinson, Alan; Lijnzaad, Philip
2000-01-01
Background: The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. Results: A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence™, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. Conclusions: The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems. PMID:11178259
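The "live object cache with evictor" idea mentioned above can be illustrated generically as follows: objects are created on demand and the least-recently-used entries are evicted at capacity. This is a sketch of the pattern, not the EMBL-EBI CORBA implementation.

```python
from collections import OrderedDict

class EvictingCache:
    def __init__(self, loader, capacity=1000):
        self.loader = loader          # creates an object on demand, e.g. a DB fetch
        self.capacity = capacity
        self._cache = OrderedDict()

    def get(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)      # mark as recently used
            return self._cache[key]
        obj = self.loader(key)                # create on demand
        self._cache[key] = obj
        if len(self._cache) > self.capacity:  # evictor: drop the stalest entry
            self._cache.popitem(last=False)
        return obj

entries = EvictingCache(loader=lambda acc: {"accession": acc}, capacity=2)
print(entries.get("X56734"), entries.get("M10051"), entries.get("J00231"))
```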
External access to ALICE controls conditions data
NASA Astrophysics Data System (ADS)
Jadlovský, J.; Jadlovská, A.; Sarnovský, J.; Jajčišin, Š.; Čopík, M.; Jadlovská, S.; Papcun, P.; Bielek, R.; Čerkala, J.; Kopčík, M.; Chochula, P.; Augustinus, A.
2014-06-01
ALICE Controls data produced by the commercial SCADA system WINCCOA is stored in an ORACLE database on the private experiment network. The SCADA system allows for basic access and processing of the historical data. More advanced analysis requires tools like ROOT and therefore needs a separate access method to the archives. The present scenario expects that detector experts create simple WINCCOA scripts, which retrieve and store data in a form usable for further studies. This relatively simple procedure generates a lot of administrative overhead: users have to request the data, experts are needed to run the script, and the results have to be exported outside of the experiment network. The new mechanism profits from a database replica running on the CERN campus network. Access to this database is not restricted and there is no risk of generating a heavy load affecting the operation of the experiment. The tools presented in this paper allow for access to this data. Users can use web-based tools to generate requests, consisting of the data identifiers and the period of time of interest. The administrators maintain full control over the data: an authorization and authentication mechanism helps to assign privileges to selected users and restrict access to certain groups of data. An advanced caching mechanism allows users to profit from the presence of already processed data sets. This feature significantly reduces the time required for debugging, as the retrieval of raw data can last tens of minutes. A highly configurable client allows for information retrieval bypassing the interactive interface. This method is, for example, used by ALICE Offline to extract operational conditions after a run is completed. Last but not least, the software can be easily adapted to any underlying database structure and is therefore not limited to WINCCOA.
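A sketch of a non-interactive client for the request interface described above; the endpoint URL, parameter names and token scheme are assumptions for illustration, since the real service's API is not specified in the abstract.

```python
import requests

BASE = "https://alice-conditions.example.cern.ch/api"  # hypothetical URL

def fetch_conditions(identifier, start, end, token):
    resp = requests.get(
        f"{BASE}/data",
        params={"id": identifier, "from": start, "to": end},
        headers={"Authorization": f"Bearer {token}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()   # cached server-side, so repeated requests are cheap

# values = fetch_conditions("TPC/ANODE_I", "2013-01-01", "2013-01-02", token="...")
```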
Michaleff, Zoe A; Costa, Leonardo O P; Moseley, Anne M; Maher, Christopher G; Elkins, Mark R; Herbert, Robert D; Sherrington, Catherine
2011-02-01
Many bibliographic databases index research studies evaluating the effects of health care interventions. One study has concluded that the Physiotherapy Evidence Database (PEDro) has the most complete indexing of reports of randomized controlled trials of physical therapy interventions, but the design of that study may have exaggerated estimates of the completeness of indexing by PEDro. The purpose of this study was to compare the completeness of indexing of reports of randomized controlled trials of physical therapy interventions by 8 bibliographic databases. This study was an audit of bibliographic databases. Prespecified criteria were used to identify 400 reports of randomized controlled trials from the reference lists of systematic reviews published in 2008 that evaluated physical therapy interventions. Eight databases (AMED, CENTRAL, CINAHL, EMBASE, Hooked on Evidence, PEDro, PsycINFO, and PubMed) were searched for each trial report. The proportion of the 400 trial reports indexed by each database was calculated. The proportions of the 400 trial reports indexed by the databases were as follows: CENTRAL, 95%; PEDro, 92%; PubMed, 89%; EMBASE, 88%; CINAHL, 53%; AMED, 50%; Hooked on Evidence, 45%; and PsycINFO, 6%. Almost all of the trial reports (99%) were found in at least 1 database, and 88% were indexed by 4 or more databases. Four trial reports were uniquely indexed by a single database only (2 in CENTRAL and 1 each in PEDro and PubMed). The results are only applicable to searching for English-language published reports of randomized controlled trials evaluating physical therapy interventions. The 4 most comprehensive databases of trial reports evaluating physical therapy interventions were CENTRAL, PEDro, PubMed, and EMBASE. Clinicians seeking quick answers to clinical questions could search any of these databases knowing that all are reasonably comprehensive. PEDro, unlike the other 3 most complete databases, is specific to physical therapy, so studies not relevant to physical therapy are less likely to be retrieved. Researchers could use CENTRAL, PEDro, PubMed, and EMBASE in combination to conduct exhaustive searches for randomized trials in physical therapy.
IAC - INTEGRATED ANALYSIS CAPABILITY
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1994-01-01
The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. With the goal of supporting the unique needs of engineering analysis groups concerned with interdisciplinary problems, IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a data base, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automatic data transfer among analysis programs. IAC 2.5, designed to be compatible as far as possible with Level 1.5, contains a major upgrade in executive and database management system capabilities, and includes interfaces to enable thermal, structures, optics, and control interaction dynamics analysis. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation interfaces are supplied for building and viewing models. Advanced graphics capabilities are provided within particular analysis modules such as INCA and NASTRAN. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. IAC 2.5 contains several specialized interfaces from NASTRAN in support of multidisciplinary analysis. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. FEMNET, which converts finite element structural analysis models to finite difference thermal analysis models, is also interfaced with the IAC database. 
3) System dynamics - The DISCOS simulation program which allows for either nonlinear time domain analysis or linear frequency domain analysis, is fully interfaced to the IAC database management capability. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. Level 2.5 includes EIGEN, which provides tools for large order system eigenanalysis, and BOPACE, which allows for geometric capabilities and finite element analysis with nonlinear material. Also included in IAC level 2.5 is SAMSAN 3.1, an engineering analysis program which contains a general purpose library of over 600 subroutines for numerical analysis. 5) Graphics - The graphics package IPLOT is included in IAC. IPLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc. Either DI3000 or PLOT-10 graphics software is required for full graphic capability. In addition to these analysis tools, IAC 2.5 contains an IGES interface which allows the user to read arbitrary IGES files into an IAC database and to edit and output new IGES files. IAC is available by license for a period of 10 years to approved U.S. licensees. The licensed program product includes one set of supporting documentation. Additional copies may be purchased separately. IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The program is structured to allow users to easily delete those program capabilities and "how to" examples they do not want in order to reduce the size of the package. The basic central memory requirement for IAC is approximately 750KB. The following programs are also available from COSMIC as separate packages: NASTRAN, SINDA/SINFLO, TRASYS II, DISCOS, ORACLS, SAMSAN, NBOD2, and INCA. The development of level 2.5 of IAC was completed in 1989.
Databases in the Central Government: State-of-the-art and the Future
NASA Astrophysics Data System (ADS)
Ohashi, Tomohiro
In November 1985, the Management and Coordination Agency of the Prime Minister's Office conducted a questionnaire survey of all Japanese ministries and agencies on the present status of databases produced, or planned to be produced, by the central government. According to the results, 132 databases had been produced in 19 ministries and agencies. Many of these databases are held by the Defence Agency, the Ministry of Construction, the Ministry of Agriculture, Forestry & Fisheries, and the Ministry of International Trade & Industry, and lie in the fields of architecture & civil engineering, science & technology, R&D, agriculture, forestry and fishery. However, only 39 percent of the produced databases are available to other ministries and agencies; the remaining 60 percent are unavailable to them, being in-house databases and the like. The outline of the survey results is reported, and the databases produced by the central government are introduced under the headings of (1) databases commonly used by all ministries and agencies, (2) integrated databases, (3) statistical databases and (4) bibliographic databases. Future problems are also described from the viewpoints of technology development and the mutual use of databases.
Trocky, NM; Fontinha, M
2005-01-01
Data collected throughout the course of a clinical research trial must be reviewed continually for accuracy and completeness. The Oracle Clinical® (OC) data management application used to capture clinical data facilitates data integrity through pre-programmed validations, edit and range checks, and discrepancy management modules. These functions were not enough. Coupled with the use of specially created reports in Oracle Discoverer® and Integrated Review™, both ad-hoc query and reporting tools, research staff have enhanced their ability to clean, analyze and report more accurate data captured within and among electronic Case Report Forms (eCRFs), by individual study or across multiple studies. PMID:16779428
NASA Astrophysics Data System (ADS)
La Cour, Brian R.; Ostrove, Corey I.
2017-01-01
This paper describes a novel approach to solving unstructured search problems using a classical, signal-based emulation of a quantum computer. The classical nature of the representation allows one to perform subspace projections in addition to the usual unitary gate operations. Although bandwidth requirements will limit the scale of problems that can be solved by this method, it can nevertheless provide a significant computational advantage for problems of limited size. In particular, we find that, for the same number of noisy oracle calls, the proposed subspace projection method provides a higher probability of success for finding a solution than does a single application of Grover's algorithm on the same device.
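For context on the baseline being compared against: the standard noise-free analysis of Grover's algorithm gives a success probability of sin²((2k+1)θ) after k iterations, where sin θ = √(M/N) for M marked items among N. A small illustration of this textbook result (not code from the paper):

    import math

    def grover_success(N, M, k):
        """Probability that k Grover iterations find one of M marked
        items among N, under the standard noise-free analysis."""
        theta = math.asin(math.sqrt(M / N))
        return math.sin((2 * k + 1) * theta) ** 2

    # One marked item among 1024: a single iteration almost always fails,
    # while roughly (pi/4)*sqrt(N) iterations nearly always succeed.
    print(grover_success(1024, 1, 1))   # ~0.009
    print(grover_success(1024, 1, 25))  # ~0.999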
Gilbert, R E; Pike, K; Kenyon, S L; Tarnow-Mordi, W; Taylor, D J
2005-06-01
We analysed the type of bacteraemia before discharge from Neonatal Intensive Care Units in babies born to women randomised to the MRC ORACLE Trials. There was no evidence for an effect of oral antibiotics given prior to delivery on bacteraemia due to Gram-negative bacteria or enterococci, but Group B streptococcal (GBS) bacteraemia was significantly reduced in women with preterm prelabour rupture of the membranes (1.58% to 0.55%; relative risk 0.34, 95% CI 0.17-0.70). There was no detectable effect in women in spontaneous preterm labour with intact membranes, as the risk of GBS bacteraemia in their babies was very small regardless of treatment.
2005-09-01
e.g. the transformation of a fragment to an instructional fragment. * IMAT Database: A Jasmine® database is used as the central database in IMAT for the … storage of fragments. This is an object-oriented relational database. Jasmine® was, amongst other factors, chosen for its ability to handle multimedia … to the Jasmine® database, which is used in IMAT as the central database. 3.1.1.1 Ontologies: In IMAT, the proposed solution on problems with information …
Lu, Ying-Hao; Lee, Li-Yao; Chen, Ying-Lan; Cheng, Hsing-I; Tsai, Wen-Tsung; Kuo, Chen-Chun; Chen, Chung-Yu; Huang, Yaw-Bin
2017-01-01
We selected iOS in this study as the App operating system, Objective-C as the programming language, and Oracle as the database to develop an App for inspecting controlled substances in patient care units. Using a web-enabled smartphone, pharmacist inspection can be performed on site and the inspection results can be recorded directly into the HIS through the Internet, so human error in data transcription can be minimized and work efficiency and data processing can be improved. This system is not only fast and convenient compared with the conventional paperwork, but also provides data security and accuracy. In addition, several features increase inspection quality: (1) checking the accuracy of drug appearance, (2) a foolproof mechanism to avoid input errors or omissions, (3) automatic data conversion without human judgment, (4) an online alarm for expiry dates, and (5) instant display of inspection results showing items that did not pass. This study has successfully turned paper-based medication inspection into inspection using a web-based mobile device. PMID:28286761
Utilizing ORACLE tools within Unix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferguson, R.
1995-07-01
Large databases, by their very nature, often serve as repositories of data which may be needed by other systems. The transmission of this data to other systems has in the past involved several layers of human intervention. The Integrated Cargo Data Base (ICDB) developed by Martin Marietta Energy Systems for the Military Traffic Management Command as part of the Worldwide Port System provides data integration and worldwide tracking of cargo that passes through common-user ocean cargo ports. One of the key functions of ICDB is data distribution of a variety of data files to a number of other systems. Development of automated data distribution procedures had to deal with the following constraints: (1) variable generation time for data files, (2) use of only current data for data files, (3) use of a minimum number of select statements, (4) creation of unique data files for multiple recipients, (5) automatic transmission of data files to recipients, and (6) avoidance of extensive and long-term data storage.
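A hedged sketch of the distribution pattern implied by constraints (2)-(6): a single pass over the current data, one unique extract per recipient, and automatic hand-off for transmission. The recipient table, field lists, and send callable are hypothetical, not the ICDB implementation.

    import csv
    from datetime import datetime, timezone

    RECIPIENTS = {
        # hypothetical recipient -> fields it receives
        "system_a": ["cargo_id", "port", "status"],
        "system_b": ["cargo_id", "eta"],
    }

    def distribute(rows, send):
        """Create one unique extract per recipient from current data,
        then hand each file to send(recipient, path) for transmission."""
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        for recipient, fields in RECIPIENTS.items():
            path = f"{recipient}_{stamp}.csv"  # unique file per recipient
            with open(path, "w", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=fields,
                                        extrasaction="ignore")
                writer.writeheader()
                writer.writerows(rows)         # one pass over current data
            send(recipient, path)              # automatic transmission

Deleting each extract after successful transmission would satisfy the final constraint of avoiding long-term storage.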
Monitoring of computing resource utilization of the ATLAS experiment
NASA Astrophysics Data System (ADS)
Rousseau, David; Dimitrov, Gancho; Vukotic, Ilija; Aidel, Osman; Schaffer, Rd; Albrand, Solveig
2012-12-01
Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.
Lee, Tae-Kyong; Chung, Hea-Jung; Park, Hye-Kyung; Lee, Eun-Ju; Nam, Hye-Seon; Jung, Soon-Im; Cho, Jee-Ye; Lee, Jin-Hee; Kim, Gon; Kim, Min-Chan
2008-01-01
Dietary habits developed in childhood last for a lifetime. In this sense, nutrition education and early exposure to healthy menus in childhood are important. Children these days have easy access to the internet; thus, a web-based nutrition education program is an effective tool for the nutrition education of children. This site provides nutrition education material for children, presented by characters that are personified nutrients. The 151 menus are stored in the site together with video scripts of the cooking process. The menus are classified by criteria based on age, menu type and the ethnic origin of the menu. The site provides a search function with three kinds of search conditions: keywords, menu type and a "between" expression over nutrients such as calories. The site was developed with the Windows 2003 Server operating system, the ZEUS 5 web server, JSP as the development language, and the Oracle 10g database management system. PMID:20126375
STATUS OF VARIOUS SNS DIAGNOSTIC SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blokland, Willem; Purcell, J David; Patton, Jeff
2007-01-01
The Spallation Neutron Source (SNS) accelerator systems are ramping up to deliver a 1.0 GeV, 1.4 MW proton beam to a liquid mercury target for neutron scattering research. Enhancements or additions have been made to several instrument systems to support the ramp-up in intensity, improve reliability, and/or add functionality. The Beam Current Monitors now support increased rep rates, the Harp system now includes charge density calculations for the target, and a new system has been created to collect data for beam accounting and present the data over the web and to the operator consoles. The majority of the SNS beam instruments are PC-based and their configuration files are now managed through the Oracle relational database. A new version of the wire scanner software was developed to add features to correlate the scan with beam loss, parking in the beam, and measuring the longitudinal beam current. This software is currently being tested. This paper also includes data from selected instruments.
Granitto, Matthew; Bailey, Elizabeth A.; Schmidt, Jeanine M.; Shew, Nora B.; Gamble, Bruce M.; Labay, Keith A.
2011-01-01
The Alaska Geochemical Database (AGDB) was created and designed to compile and integrate geochemical data from Alaska in order to facilitate geologic mapping, petrologic studies, mineral resource assessments, definition of geochemical baseline values and statistics, environmental impact assessments, and studies in medical geology. This Microsoft Access database serves as a data archive in support of present and future Alaskan geologic and geochemical projects, and contains data tables describing historical and new quantitative and qualitative geochemical analyses. The analytical results were determined by 85 laboratory and field analytical methods on 264,095 rock, sediment, soil, mineral and heavy-mineral concentrate samples. Most samples were collected by U.S. Geological Survey (USGS) personnel and analyzed in USGS laboratories or, under contracts, in commercial analytical laboratories. These data represent analyses of samples collected as part of various USGS programs and projects from 1962 to 2009. In addition, mineralogical data from 18,138 nonmagnetic heavy mineral concentrate samples are included in this database. The AGDB includes historical geochemical data originally archived in the USGS Rock Analysis Storage System (RASS) database, used from the mid-1960s through the late 1980s and the USGS PLUTO database used from the mid-1970s through the mid-1990s. All of these data are currently maintained in the Oracle-based National Geochemical Database (NGDB). Retrievals from the NGDB were used to generate most of the AGDB data set. These data were checked for accuracy regarding sample location, sample media type, and analytical methods used. This arduous process of reviewing, verifying and, where necessary, editing all USGS geochemical data resulted in a significantly improved Alaska geochemical dataset. USGS data that were not previously in the NGDB because the data predate the earliest USGS geochemical databases, or were once excluded for programmatic reasons, are included here in the AGDB and will be added to the NGDB. The AGDB data provided here are the most accurate and complete to date, and should be useful for a wide variety of geochemical studies. The AGDB data provided in the linked database may be updated or changed periodically. The data on the DVD and in the data downloads provided with this report are current as of date of publication.
Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines.
Vanli, Nuri Denizcan; Sayin, Muhammed O; Delibalta, Ibrahim; Kozat, Suleyman Serdar
2017-03-01
We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data that is revealed to itself. On the other hand, the aim of the multiagent system is to train the SLFN at each agent as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than the state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for the applications involving big data.
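A compact sketch of the consensus-plus-subgradient idea described above, specialized to squared loss; the mixing matrix, learning rate, and synchronous update schedule are illustrative assumptions rather than the paper's exact algorithm.

    import numpy as np

    def distributed_elm_step(W, H, Y, A, lr=0.01):
        """One synchronous update for all agents.

        W[i]      : output weights of agent i's SLFN (hidden_dim x out_dim)
        H[i], Y[i]: agent i's hidden-layer features and targets
        A         : doubly stochastic mixing matrix over the network
        Squared loss is assumed; the paper covers arbitrary convex losses.
        """
        n = len(W)
        # consensus: average with neighbors according to the network weights
        mixed = [sum(A[i, j] * W[j] for j in range(n)) for i in range(n)]
        new_W = []
        for i in range(n):
            grad = H[i].T @ (H[i] @ mixed[i] - Y[i])  # local (sub)gradient
            new_W.append(mixed[i] - lr * grad)        # descent step
        return new_W

Iterating this step lets each agent approach the performance of the centralized batch SLFN while sharing only weights with its neighbors, never its raw data.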
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardan, R; Popple, R; Smith, H
Purpose: To elucidate realistic clinical treatment planning workloads and timelines to improve understanding for patients, payers, and other institutions involved in radiotherapy processes. Methods: A web-based tool was developed using Oracle Express (Oracle Corp, Redwood City, CA) which allowed communication between the physicians and staff about the current state of the patient plan. For 6 years, all patient courses were logged and time-stamped in 22 discrete steps which detailed start and stop times for simulation, contouring, and treatment planning tasks. These data were combined with the treatment planning database (TPDB) using the Eclipse Scripting API (Varian Medical Systems, Palo Alto, CA) to cross-identify plans between the two systems. The time data were analyzed across our dosimetry staff and treatment modality. Results: In 6 years, 110,477 patient statuses were time-logged for 9683 courses of treatment using our internal software. The courses contained 8305 unique patients who were binned into one of 11 diagnosis site categories. 8253 courses could be reconciled against the TPDB using timestamp data from patient statuses. The average planning volume per dosimetrist was 375.8 ± 142.4 plans per year, with an average of 71.0 ± 27.1 planning revisions per dosimetrist per year. The median treatment planning times by modality ranged from 48.3 hours for IMRT plans with 5 fields or fewer to 119.6 hours for IMRT with 8 or more fields. Median times for two-arc VMAT, three-arc VMAT, and 3D plans were 89.1 hours, 113.8 hours, and 50.9 hours, respectively. Conclusion: Using our web-based tool, we have demonstrated the ability to quantify treatment planning timelines and workloads, which could help in setting appropriate expectations for patients, payers, and hospital administration. COI: Author received monies from Varian Medical Systems for research and teaching honorarium.
Jones, David R; Pike, Katie; Kenyon, Sara; Pike, Laura; Henderson, Brian; Brocklehurst, Peter; Marlow, Neil; Salt, Alison; Taylor, David J
2011-01-01
Statutory educational attainment measures are rarely used as health study outcomes, but Key Stage 1 (KS1) data formed secondary outcomes in the long-term follow-up to age 7 years of the ORACLE II trial of antibiotic use in preterm babies. This paper describes the approach, compares different approaches to analysis of the KS1 data and compares use of summary KS1 (level) data with use of individual question scores. 3394 children born to women in the ORACLE Children Study and resident in England at age 7. Analysis of educational achievement measured by national end of KS1 data (KS1) using Poisson regression modelling and anchoring of the KS1 data using external standards. KS1 summary level data were obtained for 3239 (95%) eligible children; raw individual question scores were obtained for 1899 (54%). Use of individual question scores where available did not change the conclusion of no evidence of treatment effects based on summary KS1 outcome data. When accessible for medical research purposes, routinely collected educational outcome data may have advantages of low cost and standardised definition. Here, summary scores lead to similar conclusions to raw (individual question) scores and so are attractive and cost-effective alternatives.
NASA Astrophysics Data System (ADS)
Piccardi, Luigi
2000-07-01
Historical data are fundamental to the understanding of the seismic history of an area. At the same time, knowledge of the active tectonic processes allows us to understand how earthquakes have been perceived by past cultures. Delphi is one of the principal archaeological sites of Greece, the main oracle of Apollo. It was by far the most venerated oracle of the Greek ancient world. According to tradition, the mantic properties of the oracle were obtained from an open chasm in the earth. Delphi is directly above one of the main antithetic active faults of the Gulf of Corinth Rift, which bounds Mount Parnassus to the south. The geometry of the fault and slip-parallel lineations on the main fault plane indicate normal movement, with a minor right-lateral slip component. Combining tectonic data, archaeological evidence, historical sources, and a reexamination of myths, it appears that the Helice earthquake of 373 B.C. ruptured not only the master fault of the Gulf of Corinth Rift at Helice, but also the antithetic fault at Delphi, similarly to the Corinth earthquake of 1981. Moreover, the presence of an active fault directly below the temples of the oldest sanctuary suggests that the mythological oracular chasm might well have been an ancient tectonic surface rupture.
48 CFR 52.232-33 - Payment by Electronic Funds Transfer-Central Contractor Registration.
Code of Federal Regulations, 2010 CFR
2010-10-01
... contained in the Central Contractor Registration (CCR) database. In the event that the EFT information changes, the Contractor shall be responsible for providing the updated information to the CCR database. (c... 210. (d) Suspension of payment. If the Contractor's EFT information in the CCR database is incorrect...
Kenyon, S; Pike, K; Jones, D R; Brocklehurst, P; Marlow, N; Salt, A; Taylor, D J
2008-10-11
The ORACLE I trial compared the use of erythromycin and/or amoxicillin-clavulanate (co-amoxiclav) with that of placebo for women with preterm rupture of the membranes without overt signs of clinical infection, by use of a factorial randomised design. The aim of the present study--the ORACLE Children Study I--was to determine the long-term effects on children of these interventions. We assessed children at age 7 years born to the 4148 women who had completed the ORACLE I trial and who were eligible for follow-up with a structured parental questionnaire to assess the child's health status. Functional impairment was defined as the presence of any level of functional impairment (severe, moderate, or mild) derived from the mark III Multi-Attribute Health Status classification system. Educational outcomes were assessed with national curriculum test results for children resident in England. Outcome was determined for 3298 (75%) eligible children. There was no difference in the proportion of children with any functional impairment after prescription of erythromycin, with or without co-amoxiclav, compared with those born to mothers who received no erythromycin (594 [38.3%] of 1551 children vs 655 [40.4%] of 1620; odds ratio 0.91, 95% CI 0.79-1.05) or after prescription of co-amoxiclav, with or without erythromycin, compared with those born to mothers who received no co-amoxiclav (645 [40.6%] of 1587 vs 604 [38.1%] of 1584; 1.11, 0.96-1.28). Neither antibiotic had a significant effect on the overall level of behavioural difficulties experienced, on specific medical conditions, or on the proportions of children achieving each level in reading, writing, or mathematics at key stage one. The prescription of antibiotics for women with preterm rupture of the membranes seems to have little effect on the health of children at 7 years of age. UK Medical Research Council.
SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard to compare candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics' quality, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal the key factors contributing to a surrogate's ability to prognosticate the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates' quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates' behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with an eCNR of 0.12 resulted in statistically better segmentation than MSD with an eCNR of 0.10: mean DSC of about 0.85 with first and third quartiles of (0.83, 0.89), compared to mean DSC of 0.84 with first and third quartiles of (0.81, 0.89). Conclusion: The designed eCNR is capable of characterizing a surrogate metric's quality in prognosticating the oracle relevance value. It has been demonstrated to correlate with the performance of relevant atlas selection and ultimate label fusion.
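The abstract does not give the eCNR formula, but the surrogate idea can be illustrated by checking, on atlases for which ground truth happens to be available, how well an image-space similarity metric ranks atlases against their oracle Dice scores; this rank-correlation check is an illustration only, not the paper's model-based eCNR.

    from scipy.stats import spearmanr

    def surrogate_rank_quality(surrogate_scores, oracle_dsc):
        """Rank agreement between a surrogate relevance metric computed in
        image space (e.g. NCC) and the oracle Dice overlap in label space,
        evaluated on atlases with known ground truth."""
        rho, _ = spearmanr(surrogate_scores, oracle_dsc)
        return rho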
Data base management system analysis and performance testing with respect to NASA requirements
NASA Technical Reports Server (NTRS)
Martin, E. A.; Sylto, R. V.; Gough, T. L.; Huston, H. A.; Morone, J. J.
1981-01-01
Several candidate Data Base Management Systems (DBMSs) that could support the NASA End-to-End Data System's Integrated Data Base Management System (IDBMS) Project, later rescoped and renamed the Packet Management System (PMS), were evaluated. The candidate systems, which had to run on the Digital Equipment Corporation VAX 11/780 computer system, were ORACLE, SEED and RIM. ORACLE and RIM are both based on the relational data base model, while SEED employs a CODASYL network approach. A single data base application which managed stratospheric temperature profiles was studied. The primary reasons for using this application were an insufficient volume of available PMS-like data, a mandate to use actual rather than simulated data, and the abundance of available temperature profile data.
BioMart Central Portal: an open database network for the biological community.
Guberman, Jonathan M; Ai, J; Arnaiz, O; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J; Di Génova, A; Forbes, Simon; Fujisawa, T; Gadaleta, E; Goodstein, D M; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D T; Wong-Erasmus, Marie; Yao, L; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek
2011-01-01
BioMart Central Portal is a first-of-its-kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools makes Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities.
NASA Astrophysics Data System (ADS)
Giles, D. M.; Holben, B. N.; Smirnov, A.; Eck, T. F.; Slutsker, I.; Sorokin, M. G.; Espenak, F.; Schafer, J.; Sinyuk, A.
2015-12-01
The Aerosol Robotic Network (AERONET) has provided a database of aerosol optical depth (AOD) measured by surface-based Sun/sky radiometers for over 20 years. AERONET provides unscreened (Level 1.0) and automatically cloud cleared (Level 1.5) AOD in near real-time (NRT), while manually inspected quality assured (Level 2.0) AOD are available after instrument field deployment (Smirnov et al., 2000). The growing need for NRT quality controlled aerosol data has become increasingly important. Applications of AERONET NRT data include the satellite evaluation (e.g., MODIS, VIIRS, MISR, OMI), data synergism (e.g., MPLNET), verification of aerosol forecast models and reanalysis (e.g., GOCART, ICAP, NAAPS, MERRA), input to meteorological models (e.g., NCEP, ECMWF), and field campaign support (e.g., KORUS-AQ, ORACLES). In response to user needs for quality controlled NRT data sets, the new Version 3 (V3) Level 1.5V product was developed with similar quality controls as those applied by hand to the Version 2 (V2) Level 2.0 data set. The AERONET cloud screened (Level 1.5) NRT AOD database can be significantly impacted by data anomalies. The most significant data anomalies include AOD diurnal dependence due to contamination or obstruction of the sensor head windows, anomalous AOD spectral dependence due to problems with filter degradation, instrument gains, or non-linear changes in calibration, and abnormal changes in temperature sensitive wavelengths (e.g., 1020nm) in response to anomalous sensor head temperatures. Other less common AOD anomalies result from loose filters, uncorrected clock shifts, connection and electronic issues, and various solar eclipse episodes. Automatic quality control algorithms are applied to the new V3 Level 1.5 database to remove NRT AOD anomalies and produce the new AERONET V3 Level 1.5V AOD product. Results of the quality control algorithms are presented and the V3 Level 1.5V AOD database is compared to the V2 Level 2.0 AOD database.
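As an illustration of the kind of automatic quality control described above, a day whose AOD shows a spurious diurnal trend (one symptom of sensor-window contamination) can be flagged from a simple linear fit; the test statistic and threshold here are assumptions for the sketch, not the actual V3 screening criteria.

    import numpy as np

    def flag_diurnal_dependence(hours, aod, max_slope=0.02):
        """Flag a day whose AOD exhibits a suspicious diurnal trend.

        hours: observation times (decimal hours); aod: AOD at one wavelength.
        The linear-trend test and AOD-per-hour threshold are illustrative.
        """
        slope = np.polyfit(hours, aod, 1)[0]
        return abs(slope) > max_slope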
Chen, Chuyun; Hong, Jiaming; Zhou, Weilin; Lin, Guohua; Wang, Zhengfei; Zhang, Qufei; Lu, Cuina; Lu, Lihong
2017-07-12
To construct a knowledge platform of acupuncture ancient books based on data mining technology, and to provide a retrieval service for users. Oracle 10g was used as the database and Java as the development language. On top of the standard library and ancient-books database established by manual entry, a variety of data mining technologies, including word segmentation, part-of-speech tagging, dependency analysis, rule extraction, similarity calculation, ambiguity analysis and supervised classification, were applied to achieve automatic text extraction from the ancient books. Finally, through association mining and decision analysis, comprehensive and intelligent analysis of disease and symptom, meridians, acupoints, and rules of acupuncture and moxibustion in the acupuncture ancient books was realized, and a retrieval service was provided to users through a browser/server (B/S) structure. The platform realizes full-text retrieval, word frequency analysis and association analysis; when a disease or acupoint is searched, the frequencies of meridians, acupoints (diseases) and techniques are presented from high to low, together with the support degree and confidence coefficient between disease and acupoints (special acupoints), between acupoints within a prescription, and between disease or acupoints and technique. This platform of acupuncture ancient books based on data mining technology can serve as a reference for the selection of disease, meridian and acupoint in clinical treatment and in the education of acupuncture and moxibustion.
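The support degree and confidence coefficient reported by the platform are the standard association-rule quantities; a small worked example over toy acupoint prescriptions (item names illustrative, not drawn from the ancient books):

    def support(transactions, a, b):
        """Fraction of prescriptions containing both items (standard definition)."""
        return sum(1 for t in transactions if a in t and b in t) / len(transactions)

    def confidence(transactions, a, b):
        """P(b | a): among prescriptions containing a, the fraction also containing b."""
        with_a = [t for t in transactions if a in t]
        return sum(1 for t in with_a if b in t) / len(with_a)

    rx = [{"ST36", "SP6"}, {"ST36", "LI4"}, {"ST36", "SP6", "LI4"}]
    print(support(rx, "ST36", "SP6"))     # 2/3
    print(confidence(rx, "ST36", "SP6"))  # 2/3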
A strong-motion database from the Central American subduction zone
NASA Astrophysics Data System (ADS)
Arango, Maria Cristina; Strasser, Fleur O.; Bommer, Julian J.; Hernández, Douglas A.; Cepeda, Jose M.
2011-04-01
Subduction earthquakes along the Pacific Coast of Central America generate considerable seismic risk in the region. The quantification of the hazard due to these events requires the development of appropriate ground-motion prediction equations, for which purpose a database of recordings from subduction events in the region is indispensable. This paper describes the compilation of a comprehensive database of strong ground-motion recordings obtained during subduction-zone events in Central America, focusing on the region from 8 to 14° N and 83 to 92° W, including Guatemala, El Salvador, Nicaragua and Costa Rica. More than 400 accelerograms recorded by the networks operating across Central America during the last decades have been added to data collected by NORSAR in two regional projects for the reduction of natural disasters. The final database consists of 554 triaxial ground-motion recordings from events of moment magnitudes between 5.0 and 7.7, including 22 interface and 58 intraslab-type events for the time period 1976-2006. Although the database presented in this study is not sufficiently complete in terms of magnitude-distance distribution to serve as a basis for the derivation of predictive equations for interface and intraslab events in Central America, it considerably expands the Central American subduction data compiled in previous studies and used in early ground-motion modelling studies for subduction events in this region. Additionally, the compiled database will allow the assessment of the existing predictive models for subduction-type events in terms of their applicability for the Central American region, which is essential for an adequate estimation of the hazard due to subduction earthquakes in this region.
An Introduction to Database Structure and Database Machines.
ERIC Educational Resources Information Center
Detweiler, Karen
1984-01-01
Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…
Aagaard, Thomas; Lund, Hans; Juhl, Carsten
2016-11-22
When conducting systematic reviews, it is essential to perform a comprehensive literature search to identify all published studies relevant to the specific research question. The Cochrane Collaboration's Methodological Expectations of Cochrane Intervention Reviews (MECIR) guidelines state that searching MEDLINE, EMBASE and CENTRAL should be considered mandatory. The aim of this study was to evaluate the MECIR recommendation to use MEDLINE, EMBASE and CENTRAL combined, and to examine the yield of using these to find randomized controlled trials (RCTs) within the area of musculoskeletal disorders. Data sources were systematic reviews published by the Cochrane Musculoskeletal Review Group that included at least five RCTs, reported a search history, searched MEDLINE, EMBASE and CENTRAL, and added reference- and hand-searching. Additional databases were deemed eligible if they indexed RCTs, were in English and were used in more than three of the systematic reviews. Relative recall was calculated as the number of studies identified by the literature search divided by the number of eligible studies, i.e. the studies included in the individual systematic reviews. Finally, cumulative median recall was calculated for MEDLINE, EMBASE and CENTRAL combined, followed by the databases yielding additional studies. Twenty-three systematic reviews were deemed eligible, and the databases included other than MEDLINE, EMBASE and CENTRAL were AMED, CINAHL, HealthSTAR, MANTIS, OT-Seeker, PEDro, PsychINFO, SCOPUS, SportDISCUS and Web of Science. Cumulative median recall for combined searching in MEDLINE, EMBASE and CENTRAL was 88.9%, increasing to 90.9% when the 10 additional databases were added. Searching MEDLINE, EMBASE and CENTRAL was not sufficient to identify all effect studies on musculoskeletal disorders, but the ten additional databases increased the median recall by only 2%. It is possible that searching databases is not sufficient to identify all relevant references, and that reviewers must rely upon additional sources in their literature search; however, further research is needed.
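The relative recall defined above reduces to a one-line computation; a sketch with toy numbers chosen to echo the reported 88.9% figure:

    def relative_recall(found, eligible):
        """Studies identified by the search divided by the eligible
        (included) studies, as defined in the abstract."""
        return len(set(found) & set(eligible)) / len(set(eligible))

    # Toy example: a search retrieving 8 of 9 included trials.
    print(relative_recall(range(8), range(9)))  # 0.888...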
Production of biofuels and biochemicals: in need of an ORACLE.
Miskovic, Ljubisa; Hatzimanikatis, Vassily
2010-08-01
The engineering of cells for the production of fuels and chemicals involves simultaneous optimization of multiple objectives, such as specific productivity, extended substrate range and improved tolerance - all under a great degree of uncertainty. The achievement of these objectives under physiological and process constraints will be impossible without the use of mathematical modeling. However, the limited information and the uncertainty in the available information require new methods for modeling and simulation that will characterize the uncertainty and will quantify, in a statistical sense, the expectations of success of alternative metabolic engineering strategies. We discuss these considerations toward developing a framework for the Optimization and Risk Analysis of Complex Living Entities (ORACLE) - a computational method that integrates available information into a mathematical structure to calculate control coefficients.
False Discovery Control in Large-Scale Spatial Multiple Testing
Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin
2014-01-01
Summary: This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses, and derive oracle procedures which optimally control the false discovery rate, false discovery exceedance and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods by analyzing the time trends in tropospheric ozone in the eastern US. PMID:25642138
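For reference, the first two error rates named above have standard definitions; with V the number of false rejections and R the total number of rejections,

    \mathrm{FDR} = \mathbb{E}\left[\frac{V}{\max(R,1)}\right],
    \qquad
    \mathrm{FDX}_{\gamma} = \Pr\left(\frac{V}{\max(R,1)} > \gamma\right),

and an oracle procedure controls such a quantity at a target level while maximizing power.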
Internet-based information system of digital geological data providing
NASA Astrophysics Data System (ADS)
Yuon, Egor; Soukhanov, Mikhail; Markov, Kirill
2015-04-01
One of the tasks of the Russian Federal Agency for Mineral Resources is to provide, in current and well-conditioned form, the geological information delivered during field operations funded by the federal budget. Previously, the leading media for presenting geological information were paper geological maps, sections, borehole diagrams, reports, etc. Technologies for database construction, including distributed databases, for distributed information-analytical systems, and Internet technologies are now developing intensively, yet most geological organizations create their own information systems without any possibility of integration with other systems of the same orientation. In 2012, specialists of VNIIgeosystem together with specialists of VSEGEI started a large project: creating a system for providing digital geological materials using modern and prospective Internet technologies. The system is based on a web server and a set of special programs which allow users to efficiently obtain rasterized and vectorized geological materials. These materials include: geological maps at scale 1:1M, geological maps at scales 1:200 000 and 1:2 500 000, fragments of the seamless 1:1M geological maps, structural zoning maps inside the seamless fragments, the legends for the State geological maps at 1:200 000 and 1:1 000 000, the full author's sets of maps, and current materials for the international projects «Atlas of geological maps for Circumpolar Arctic scale 1:5 000 000» and «Atlas of Geologic maps of central Asia and adjacent areas scale 1:2 500 000». The most interesting and functional block of the system is the block providing structured and well-formalized geological vector materials, based on the Gosgeolkart database (NGKIS), managed by Oracle; Internet access is supported by the NGKIS web subsystem, currently based on the MGS-Framework platform developed by VNIIgeosystem. One of the leading elements is the web service that realizes the interaction of all parts of the system and controls the whole path of a request from the user to the database and back, adapted to the GeoSciML and EarthResourceML views. The experience of creating this Internet-based information system for providing digital geological data, together with previous work including the development of the NGKIS web service, shows that a technological realization presenting Russian geological-cartographical data using international standards is possible. Some difficulties arise from the depth of the geological material: the Russian informational geological model is deeper and wider than foreign ones, so presenting Russian geological data in international standards and formats is possible only by decreasing the level of detail. This problem becomes less important if the service also publishes Russian vocabularies not linked to the international vocabularies; in that case the international format can serve as an interchange format between Russian users. Integration into international projects requires developing correlation schemes between Russian and foreign classifiers and vocabularies.
Visualization of historical data for the ATLAS detector controls - DDV
NASA Astrophysics Data System (ADS)
Maciejewski, J.; Schlenker, S.
2017-10-01
The ATLAS experiment is one of four detectors located on the Large Hadron Collider (LHC) at CERN. Its detector control system (DCS) stores the slow-control data acquired by the back-end of distributed WinCC OA applications in an Oracle relational database, which enables the data to be retrieved for later analysis, debugging and detector development. The ATLAS DCS Data Viewer (DDV) is a client-server application providing access to the historical data from outside the experiment network. The server builds optimized SQL queries, retrieves the data from the database and serves it to the clients via HTTP connections; it also implements protection methods to prevent malicious use of the database. The client is an AJAX-type web application based on the Vaadin framework (built around the Google Web Toolkit, GWT), which gives users easy access to the data. The DCS metadata can be selected using column-tree navigation or a search engine supporting regular expressions. The data are visualized by a selection of output modules such as JavaScript value-over-time plots or a lazily loading table widget. Additional plugins give users the possibility to retrieve the data in ROOT format or as ASCII files, and control system alarms can be visualized in a dedicated table if necessary. Python mock-up scripts can be generated by the client, allowing the user to query the pythonic DDV server directly, so that the scripts can be embedded into more complex analysis programs. Users are also able to store searches and output configurations as XML on the server, to share with others via URL or to embed in HTML.
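A hedged sketch of what one of the generated Python mock-up scripts might look like; the endpoint URL and parameter names are hypothetical, since the abstract does not specify the DDV HTTP interface.

    import urllib.parse
    import urllib.request

    BASE = "https://ddv.example.cern.ch/data"  # hypothetical endpoint

    def query(element, t_start, t_end):
        """Fetch historical DCS values over HTTP, mimicking the mock-up
        scripts DDV generates; parameter names are assumptions."""
        params = urllib.parse.urlencode(
            {"element": element, "from": t_start, "to": t_end}
        )
        with urllib.request.urlopen(f"{BASE}?{params}") as resp:
            return resp.read().decode()

A script of this shape can be embedded directly into a larger analysis program, which is the use case the client-generated mock-ups are meant to serve.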
Operational Monitoring of GOME-2 and IASI Level 1 Product Processing at EUMETSAT
NASA Astrophysics Data System (ADS)
Livschitz, Yakov; Munro, Rosemary; Lang, Rüdiger; Fiedler, Lars; Dyer, Richard; Eisinger, Michael
2010-05-01
The growing complexity of operational level 1 radiance products from Low Earth Orbiting (LEO) platforms such as EUMETSAT's Metop series makes near-real-time monitoring of product quality a challenging task. The main challenge is to provide a monitoring system flexible and robust enough to identify and react to anomalies previously unknown to the system, and to provide all the means and parameters necessary to support efficient ad-hoc analysis of an incident. The operational monitoring system developed at EUMETSAT for GOME-2 and IASI level 1 data performs near-real-time monitoring of operational products and instrument health in a robust and flexible fashion. For effective information management, the system is based on a relational database (Oracle). An Extract, Transform, Load (ETL) process transforms products in EUMETSAT Polar System (EPS) format into relational data structures. The identification of commonalities between products and instruments allows a database structure designed in such a way that different data can be analyzed using the same business intelligence functionality. Interactive analysis software implementing modern data mining techniques is also provided for a detailed look into the data. The system is used effectively for day-to-day monitoring, long-term reporting, instrument degradation analysis and ad-hoc queries in case of unexpected instrument or processing behaviour. Having data from different sources on a single instrument, and even from different instruments, platforms or numerical weather prediction, within the same database allows effective cross-comparison and searches for correlated parameters. Automatic alarms, raised by checking for deviations of certain parameters, for data losses and for other events, significantly reduce the time necessary to monitor the processing on a day-to-day basis.
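A minimal sketch of the deviation-based automatic alarms described above, assuming a simple z-score rule against a recent baseline; the window and threshold are illustrative, not the EUMETSAT criteria.

    import statistics

    def deviation_alarm(history, value, n_sigma=5.0):
        """Raise an alarm when a monitored level 1 parameter departs
        from its recent baseline; the z-score rule is illustrative."""
        mu = statistics.fmean(history)
        sigma = statistics.pstdev(history) or 1e-12  # guard constant series
        return abs(value - mu) / sigma > n_sigma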
75 FR 60415 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-30
... computer systems and networks. This information collection is required to obtain the necessary data... card reflecting those benefits and privileges, and to maintain a centralized database of the eligible... card reflecting those benefits and privileges, and to maintain a centralized database of the eligible...
Flower, Andrew; Lewith, George T; Little, Paul
2007-11-01
For most complementary and alternative medicine interventions, the absence of a high-quality evidence base to define good practice presents a serious problem for clinicians, educators, and researchers. The Delphi process may offer a pragmatic way to establish good practice guidelines until more rigorous forms of assessment can be undertaken. To use a modified Delphi to develop good practice guidelines for a feasibility study exploring the role of Chinese herbal medicine (CHM) in the treatment of endometriosis. To compare the outcomes from Delphi with data derived from a systematic review of the Chinese language database. An expert group was convened for a three-round Delphi that initially produced key statements relating to the CHM diagnosis and treatment of endometriosis (round 1) and then anonymously rated these on a 1-7 Likert scale (rounds 2 and 3). Statements with a median score of 5 and above were regarded as demonstrating positive group consensus. The differential diagnoses within Chinese Medicine and rating of the clinical value of individual herbs were then contrasted with comparable data from a review of Chinese language reports in the Chinese Biomedical Retrieval System (1978-2002), and China Academy of Traditional Chinese Medicine (1985-2002) databases and the Chinese TCM and magazine literature (1984-2004) databases. Consensus (good practice) guidelines for the CHM treatment of endometriosis relating to common diagnostic patterns, herb selection, dosage, and patient management were produced. The Delphi guidelines demonstrated a high degree of congruence with the information from the Chinese language databases. In the absence of rigorous evidence, Delphi offers a way to synthesize expert knowledge relating to diagnosis, patient management, and herbal selection in the treatment of endometriosis. The limitations of the expert group and the inability of Delphi to capture the subtle nuances of individualized clinical decision-making limit the usefulness of this approach.
Introducing the Forensic Research/Reference on Genetics knowledge base, FROG-kb.
Rajeevan, Haseena; Soundararajan, Usha; Pakstis, Andrew J; Kidd, Kenneth K
2012-09-01
Online tools and databases based on multi-allelic short tandem repeat polymorphisms (STRPs) are actively used in forensic teaching, research, and investigations. The Fst value of each CODIS marker tends to be low across the populations of the world, and most populations typically have all the common STRP alleles present, diminishing the ability of these systems to discriminate ethnicity. Recently, considerable research has been conducted on single nucleotide polymorphisms (SNPs) to be considered for human identification and description. However, online tools and databases that can be used for forensic research and investigation are limited. The back-end DBMS (Database Management System) for FROG-kb is Oracle version 10; the front end is implemented with specific code using technologies such as Java, Java Servlet, JSP, JQuery, and GoogleCharts. We present an open-access web application, FROG-kb (Forensic Research/Reference on Genetics-knowledge base, http://frog.med.yale.edu), that is useful for teaching and research relevant to forensics and can serve as a tool facilitating forensic practice. The underlying data for FROG-kb are provided by the already extensively used and referenced ALlele FREquency Database, ALFRED (http://alfred.med.yale.edu). In addition to displaying data in an organized manner, computational tools that use the underlying allele frequencies with user-provided data are implemented in FROG-kb. These tools are organized by the different published SNP/marker panels available. The web tool currently implements general functions for two types of SNP panels, individual identification and ancestry inference, and a prediction function specific to a phenotype-informative panel for eye color. The current online version of FROG-kb already provides new and useful functionality. We expect FROG-kb to grow and expand in capabilities and welcome input from the forensic community in identifying datasets and functionalities that will be most helpful and useful. Thus, the structure and functionality of FROG-kb will be revised in an ongoing process of improvement. This paper describes its state as of early June 2012.
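The individual-identification functions in such a tool ultimately combine population allele frequencies; the standard Hardy-Weinberg random-match probability for a profile of unlinked SNPs is a compact example of the computation (a textbook forensic-genetics formula, not FROG-kb's actual code).

    def random_match_probability(profile, freqs):
        """profile: genotypes per SNP, e.g. ["AA", "AG"];
        freqs: per-SNP dicts of allele -> population frequency.
        Assumes Hardy-Weinberg equilibrium and unlinked (independent) SNPs."""
        p_total = 1.0
        for genotype, f in zip(profile, freqs):
            a, b = genotype[0], genotype[1]
            p = f[a] ** 2 if a == b else 2 * f[a] * f[b]  # HWE genotype frequency
            p_total *= p
        return p_total

    print(random_match_probability(["AA", "AG"],
                                   [{"A": 0.3, "G": 0.7}, {"A": 0.5, "G": 0.5}]))
    # 0.09 * 0.5 = 0.045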
Global Tsunami Database: Adding Geologic Deposits, Proxies, and Tools
NASA Astrophysics Data System (ADS)
Brocko, V. R.; Varner, J.
2007-12-01
A result of collaboration between NOAA's National Geophysical Data Center (NGDC) and the Cooperative Institute for Research in Environmental Sciences (CIRES), the Global Tsunami Database includes instrumental records, human observations, and now, information inferred from the geologic record. Deep Ocean Assessment and Reporting of Tsunamis (DART) data, historical reports, and information gleaned from published tsunami deposit research build a multi-faceted view of tsunami hazards and their history around the world. Tsunami history provides clues to what might happen in the future, including frequency of occurrence and maximum wave heights. However, instrumental and written records commonly span too little time to reveal the full range of a region's tsunami hazard. The sedimentary deposits of tsunamis, identified with the aid of modern analogs, increasingly complement instrumental and human observations. By adding the component of tsunamis inferred from the geologic record, the Global Tsunami Database extends the record of tsunamis backward in time. Deposit locations, their estimated ages, and descriptions of the deposits themselves fill in the tsunami record. Tsunamis inferred from proxies, such as evidence for coseismic subsidence, are included to estimate recurrence intervals, but are flagged to highlight the absence of a physical deposit. Authors may submit their own descriptions and upload digital versions of publications. Users may sort by any populated field, including event, location, region, age of deposit, author, publication type (allowing, for example, extraction of information from peer-reviewed publications only), grain size, composition, and presence/absence of plant material. Users may find tsunami deposit references for a given location, event or author; search for particular properties of tsunami deposits; and even identify potential collaborators. Users may also download public-domain documents. Data and information may be viewed using tools designed to extract and display data from the Oracle database (selection forms, Web Map Services, and Web Feature Services). In addition, the historic tsunami archive (along with related earthquakes and volcanic eruptions) is available in KML (Keyhole Markup Language) format for use with Google Earth and similar geo-viewers.
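Since the archive is exported in KML for Google Earth, the essential step is converting a database record into a placemark. The Python sketch below is a hypothetical illustration of that conversion with an invented record, not NGDC's export code:

    # Hypothetical sketch: serialize tsunami-deposit records as KML placemarks.
    def record_to_placemark(name, lat, lon, description):
        return ("<Placemark>"
                f"<name>{name}</name>"
                f"<description>{description}</description>"
                f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
                "</Placemark>")

    records = [("Example deposit", -8.5, 115.2, "sand sheet, estimated age 1100 BP")]
    kml = ('<?xml version="1.0" encoding="UTF-8"?>'
           '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
           + "".join(record_to_placemark(*r) for r in records)
           + "</Document></kml>")
    print(kml)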
Andreozzi, Stefano; Chakrabarti, Anirikh; Soh, Keng Cher; Burgard, Anthony; Yang, Tae Hoon; Van Dien, Stephen; Miskovic, Ljubisa; Hatzimanikatis, Vassily
2016-05-01
Rational metabolic engineering methods are increasingly employed in designing commercially viable processes for the production of chemicals relevant to the pharmaceutical, biotechnology, and food and beverage industries. With the growing availability of omics data and of methodologies capable of integrating the available data into models, mathematical modeling and computational analysis are becoming important in designing recombinant cellular organisms and optimizing cell performance with respect to desired criteria. In this contribution, we used the computational framework ORACLE (Optimization and Risk Analysis of Complex Living Entities) to analyze the physiology of recombinant Escherichia coli producing 1,4-butanediol (BDO) and to identify potential strategies for improved production of BDO. The framework allowed us to integrate data across multiple levels and to construct a population of large-scale kinetic models despite the lack of available information about the kinetic properties of every enzyme in the metabolic pathways. We analyzed these models and found that the enzymes that primarily control the fluxes leading to BDO production are part of central glycolysis, the lower branch of the tricarboxylic acid (TCA) cycle, and the novel BDO production route. Interestingly, among the enzymes between glucose uptake and the BDO pathway, those belonging to the lower branch of the TCA cycle were identified as the most important for improving BDO production and yield. We also quantified the effects of changes in the target enzymes on other intracellular states such as energy charge, cofactor levels, redox state, cellular growth, and byproduct formation. Independent earlier experiments on this strain confirmed that the computationally obtained conclusions are consistent with the experimentally tested designs, and the findings of the present studies can provide guidance for future work on strain improvement. Overall, these studies demonstrate the potential and effectiveness of ORACLE for the accelerated design of microbial cell factories. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
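The "population of kinetic models" idea can be illustrated at toy scale: sample the uncertain kinetic parameters, solve each sampled model for its steady-state flux, and compute flux control coefficients by finite differences. The Python sketch below uses an invented two-enzyme pathway and made-up parameter distributions; it is not the ORACLE implementation:

    import numpy as np

    def steady_flux(vmax1, vmax2, s=1.0, km1=0.5, km2=0.5, ki=1.0):
        # Toy pathway S -> X -> P with product inhibition of the first step.
        # Bisection finds the intermediate level X where v1(X) == v2(X).
        v1 = lambda x: vmax1 * s / (km1 + s) / (1.0 + x / ki)
        v2 = lambda x: vmax2 * x / (km2 + x)
        lo, hi = 1e-9, 1e6
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if v1(mid) > v2(mid) else (lo, mid)
        return v2(0.5 * (lo + hi))

    def control_coefficients(vmax1, vmax2, eps=1e-4):
        # Flux control coefficient C_i = (dJ/J) / (dVmax_i/Vmax_i), by finite differences.
        j0 = steady_flux(vmax1, vmax2)
        c1 = (steady_flux(vmax1 * (1 + eps), vmax2) - j0) / (j0 * eps)
        c2 = (steady_flux(vmax1, vmax2 * (1 + eps)) - j0) / (j0 * eps)
        return c1, c2

    # "Population of models": sample the uncertain Vmax values and average the
    # control coefficients; the means show which enzyme dominates flux control.
    rng = np.random.default_rng(0)
    cs = np.array([control_coefficients(*rng.lognormal(0.0, 0.5, 2)) for _ in range(500)])
    print(cs.mean(axis=0))   # the two coefficients sum to ~1 (summation theorem)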
Group updates Gravity Database for central Andes
NASA Astrophysics Data System (ADS)
MIGRA Group; Götze, H.-J.
Between 1993 and 1995, a group of scientists from Chile, Argentina, and Germany incorporated some 2000 new gravity observations into a database that covers a remote region of the Central Andes in northern Chile and northwestern Argentina (between 64°-71°W and 20°-29°S). The database can be used to study the structure and evolution of the Andes. It contains about 14,000 gravity values in total, including older, reprocessed data. Researchers at universities or governmental agencies are welcome to use the data for noncommercial purposes.
ERIC Educational Resources Information Center
Proctor, Tony
1988-01-01
Explores the conceptual components of a computer program designed to enhance creative thinking and reviews software that aims to stimulate creative thinking. Discusses BRAIN and ORACLE, programs intended to aid in creative problem solving. (JOW)
Alchemy of the Oracle: The Delphi Technique.
ERIC Educational Resources Information Center
Wilhelm, William J.
2001-01-01
Discusses the origins and foundations of the Delphi technique. Outlines procedures for using it in research to obtain the insights of experts. Addresses limitations of the technique. (Contains 44 references.) (SK)
Data bases for forest inventory in the North-Central Region.
Jerold T. Hahn; Mark H. Hansen
1985-01-01
Describes the data collected by the Forest Inventory and Analysis (FIA) Research Work Unit at the North Central Forest Experiment Station. Explains how interested parties may obtain information from the databases either through direct access or by special requests to the FIA database manager.
75 FR 57437 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-21
... a Food Safety Education and Training Materials Database. The Database is a centralized gateway to... creating previously available education materials) (2) provide a central gateway to access the education materials (3) create a systematic and efficient method of collecting data from USDA grantees and (4) promote...
78 FR 69040 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-18
... a Food Safety Education and Training Materials Database. The Database is a centralized gateway to... creating previously available education materials), (2) provide a central gateway to access the education materials, (3) create a systematic and efficient method of collecting data from USDA grantees, and (4...
Multicenter breast cancer collaborative registry.
Sherman, Simon; Shats, Oleg; Fleissner, Elizabeth; Bascom, George; Yiee, Kevin; Copur, Mehmet; Crow, Kate; Rooney, James; Mateen, Zubeena; Ketcham, Marsha A; Feng, Jianmin; Sherman, Alexander; Gleason, Michael; Kinarsky, Leo; Silva-Lopez, Edibaldo; Edney, James; Reed, Elizabeth; Berger, Ann; Cowan, Kenneth
2011-01-01
The Breast Cancer Collaborative Registry (BCCR) is a multicenter web-based system that efficiently collects and manages a variety of data on breast cancer (BC) patients and BC survivors. This registry is designed as a multi-tier web application that utilizes Java Servlet/JSP technology and has an Oracle 11g database as a back-end. The BCCR questionnaire accommodates standards accepted in breast cancer research and healthcare. By harmonizing its controlled vocabulary with the NCI Thesaurus (NCIt) or Systematized Nomenclature of Medicine-Clinical Terms (SNOMED-CT), the BCCR provides a standardized approach to data collection and reporting. The BCCR has recently been certified by the National Cancer Institute's Center for Biomedical Informatics and Information Technology (NCI CBIIT) as a cancer Biomedical Informatics Grid (caBIG®) Bronze Compatible product. The BCCR is aimed at facilitating rapid and uniform collection of critical information and biological samples to be used in developing diagnostic, prevention, treatment, and survivorship strategies against breast cancer. Currently, seven cancer institutions are participating in the BCCR, which contains data on almost 900 subjects (BC patients and survivors, as well as individuals at high risk of developing BC).
AnClim and ProClimDB software for data quality control and homogenization of time series
NASA Astrophysics Data System (ADS)
Stepanek, Petr
2015-04-01
During the last decade, a software package consisting of AnClim, ProClimDB and LoadData has been created for processing (mainly climatological) data. The package offers a complete solution for the processing of climatological time series: loading the data from a central database (e.g. Oracle, via the LoadData software), data quality control and homogenization, and then time series analysis, extreme value evaluation, and RCM output verification and correction (the ProClimDB and AnClim software). The detection of inhomogeneities is carried out on a monthly scale with AnClim, or newly by R functions called from ProClimDB, while quality control, the preparation of reference series and the correction of detected breaks are carried out by ProClimDB. The software combines many statistical tests, types of reference series and time scales (monthly, seasonal and annual, daily and sub-daily). These can be used to create an "ensemble" of solutions, which may be more reliable than any single method. AnClim is well suited for educational purposes, e.g. for students getting acquainted with methods used in climatology; its built-in graphical tools and comparisons of various statistical tests help in better understanding a given method. ProClimDB, by contrast, is a tool aimed at processing large climatological datasets. Recently, R functions can be called from within the software, making data processing more efficient and allowing easy inclusion of new methods as they become available in R. One example of use is the straightforward comparison of methods for correcting inhomogeneities in daily data (HOM by Paul Della-Marta, the SPLIDHOM method by Olivier Mestre, DAP, the authors' own method, QM by Xiaolan Wang, and others). The software is available, together with further information, at www.climahom.eu. Acknowledgement: this work was partially funded by the project "Building up a multidisciplinary scientific team focused on drought" No. CZ.1.07/2.3.00/20.0248.
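A minimal sketch of the relative homogeneity testing such packages automate: form a candidate-minus-reference difference series and locate the most likely break with a standard normal homogeneity test (SNHT) statistic. This Python sketch with synthetic data is illustrative only, not AnClim/ProClimDB code:

    import numpy as np

    def snht(diff):
        # Standard normal homogeneity test on a candidate-minus-reference series:
        # for every split point, weigh the squared mean anomaly of both segments.
        z = (diff - diff.mean()) / diff.std(ddof=1)
        n = len(z)
        t = np.array([k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
                      for k in range(1, n)])
        return int(t.argmax()) + 1, float(t.max())   # likely break position, statistic

    rng = np.random.default_rng(1)
    candidate = rng.normal(0.0, 0.3, 120)      # ten years of monthly anomalies
    candidate[60:] += 0.8                      # artificial break (inhomogeneity)
    reference = rng.normal(0.0, 0.3, 120)      # homogeneous neighbour composite
    print(snht(candidate - reference))         # break detected near month 60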
ERIC Educational Resources Information Center
Simon, Brian
1980-01-01
Presents some of the findings of the ORACLE research program (Observational Research and Classroom Learning Evaluation), a detailed observational study of teacher-student interaction, teaching styles, and management methods within a sample of primary classrooms. (Editor/SJL)
AirMSPI ORACLES Terrain Data V006
Atmospheric Science Data Center
2018-05-05
Platform: ER-2. Instrument: AirMSPI. Spatial Coverage: United States, California, Georgia, Africa, Southern Africa. Spatial Resolution: 10/25 meters per pixel. Temporal Coverage: 07/28/2016 - 10/06/2016.
Abugessaisa, Imad; Gomez-Cabrero, David; Snir, Omri; Lindblad, Staffan; Klareskog, Lars; Malmström, Vivianne; Tegnér, Jesper
2013-04-02
Sequencing of the human genome and the subsequent analyses have produced immense volumes of data. The technological advances have opened new windows into genomics beyond the DNA sequence. In parallel, clinical practice generates large amounts of data. This represents an underused data source that has much greater potential in translational research than is currently realized. This research aims at implementing a translational medicine informatics platform to integrate clinical data (disease diagnosis, disease activity and treatment) of rheumatoid arthritis (RA) patients from Karolinska University Hospital and their research database (biobanks, genotype variants and serology) at the Center for Molecular Medicine, Karolinska Institutet. Requirements engineering methods were utilized to identify user requirements. Unified Modeling Language and data modeling methods were used to model the universe of discourse and data sources. Oracle 11g was used as the database management system, and the clinical development center (CDC) was used as the application interface. Patient data were anonymized, and we employed authorization and security methods to protect the system. We developed a user requirement matrix, which provided a framework for evaluating three translational informatics systems. The implementation of the CDC successfully integrated the biological research database (15172 DNA, serum and synovial samples, 1436 cell samples and 65 SNPs per patient) and the clinical database (5652 clinical visits) for a cohort of 379 patients presenting three profiles. Basic functionalities provided by the translational medicine platform are research data management, development of bioinformatics workflows and analyses, sub-cohort selection, and re-use of clinical data in research settings. Finally, the system allows researchers to extract subsets of attributes from cohorts according to specific biological, clinical, or statistical features. Research and clinical database integration is a real challenge and a road-block in translational research. Through this research we addressed the challenges and demonstrated the usefulness of the CDC. We adhered to ethical regulations pertaining to patient data, and we determined that the existing software solutions cannot meet the translational research needs at hand. We used RA as a test case since we have ample data on an active and longitudinal cohort.
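Sub-cohort selection of this kind amounts to joining clinical and research tables on an anonymized patient key and filtering on attributes from both sides. A hypothetical, much-simplified Python sketch (SQLite standing in for Oracle 11g, invented table and column names, DAS28 as an example disease-activity attribute):

    import sqlite3

    # Hypothetical, much-simplified schema; the real platform runs on Oracle 11g
    # with anonymized patient keys and far richer clinical and biobank tables.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE clinical_visit (patient_id INTEGER, das28 REAL, visit_date TEXT);
        CREATE TABLE biobank_sample (patient_id INTEGER, sample_type TEXT);
        INSERT INTO clinical_visit VALUES (1, 5.6, '2012-03-01'), (2, 2.1, '2012-04-02');
        INSERT INTO biobank_sample VALUES (1, 'serum'), (1, 'DNA'), (2, 'serum');
    """)
    # Sub-cohort: patients with high disease activity AND a stored serum sample.
    rows = db.execute("""
        SELECT DISTINCT c.patient_id
        FROM clinical_visit c JOIN biobank_sample b ON b.patient_id = c.patient_id
        WHERE c.das28 > 3.2 AND b.sample_type = 'serum'
    """).fetchall()
    print(rows)   # -> [(1,)]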
Model-Observation Comparisons of Biomass Burning Smoke and Clouds Over the Southeast Atlantic Ocean
NASA Astrophysics Data System (ADS)
Doherty, S. J.; Saide, P.; Zuidema, P.; Shinozuka, Y.; daSilva, A.; McFarquhar, G. M.; Pfister, L.; Carmichael, G. R.; Ferrada, G. A.; Howell, S. G.; Freitag, S.; Dobracki, A. N.; Smirnow, N.; Longo, K.; LeBlanc, S. E.; Adebiyi, A. A.; Podolske, J. R.; Small Griswold, J. D.; Hekkila, A.; Ueyama, R.; Wood, R.; Redemann, J.
2017-12-01
From August through October, a plume of biomass burning smoke from central Africa overlies a relatively persistent stratocumulus-to-cumulus cloud deck in the SE Atlantic. These smoke aerosols are believed to exert significant climate forcing via aerosol-radiation and aerosol-cloud interactions, though both the magnitude and sign of this forcing are highly uncertain. This is due to large model spread in simulated aerosol and cloud properties and, until now, a sparsity of observations to constrain the models. Here we present a comparison of aerosol and cloud properties over the region using data from the first deployment of the NASA ORACLES (ObseRvations of Aerosols above CLouds and their intEractionS) field experiment (August-September 2016). We examine both horizontal and geographic variations in a range of aerosol and cloud properties and their position relative to each other, since the degree to which aerosols and clouds coincide both horizontally and vertically is perhaps the greatest source of uncertainty in their climate forcing.
Active learning for noisy oracle via density power divergence.
Sogawa, Yasuhiro; Ueno, Tsuyoshi; Kawahara, Yoshinobu; Washio, Takashi
2013-10-01
The accuracy of active learning is critically influenced by the existence of noisy labels given by a noisy oracle. In this paper, we propose a novel pool-based active learning framework built on robust measures based on density power divergence. By minimizing a density power divergence, such as the β-divergence or γ-divergence, one can estimate the model accurately even in the presence of noisy labels within the data. Accordingly, we develop query selection measures for pool-based active learning using these divergences. In addition, we propose an evaluation scheme for these measures based on asymptotic statistical analyses, which enables us to perform active learning by evaluating the estimation error directly. Experiments with benchmark datasets and real-world image datasets show that our active learning scheme performs better than several baseline methods. Copyright © 2013 Elsevier Ltd. All rights reserved.
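A simplified sketch of the idea: fit the model with each example's gradient down-weighted by a density power (β-divergence style) factor, then query the most uncertain pool point. The full estimator in the paper includes an additional bias-correction term omitted here; this Python code is illustrative, not the authors' implementation:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit_robust_logistic(X, y, beta=0.3, lr=0.5, iters=2000):
        # Each example's gradient is down-weighted by p(observed label)**beta,
        # so points the model finds wildly implausible (often noisy labels)
        # contribute little to the fit.
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            p = sigmoid(X @ w)
            p_obs = np.where(y == 1, p, 1.0 - p)
            w += lr * X.T @ ((p_obs ** beta) * (y - p)) / len(y)
        return w

    def query(w, pool):
        # Uncertainty sampling: ask the (noisy) oracle about the pool point
        # closest to the current decision boundary.
        return int(np.argmin(np.abs(sigmoid(pool @ w) - 0.5)))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    flips = rng.random(200) < 0.1              # 10% label noise from the oracle
    y[flips] = 1.0 - y[flips]
    print(query(fit_robust_logistic(X, y), X))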
Khanna, Ritu; Kline, Lewis R
2003-07-01
The aim was to compare efficacy, compliance rates, and side effects of a new strapless oral interface, the Oracle, with available nasal masks over 8 weeks of use for the treatment of obstructive sleep apnea hypopnea syndrome (OSAHS). A total of 38 patients with OSAHS (respiratory disturbance index (RDI) ≥15/h) were enrolled after the diagnostic polysomnogram for subsequent continuous positive airway pressure (CPAP) therapy. After randomization, therapeutic pressures during a titration study were determined for 21 patients in the oral group and 17 patients in the nasal group. The nasal and oral interfaces were compared on baseline patient characteristics, average hours of CPAP use, side effects from therapy, and questionnaires evaluating patients' subjective responses to therapy at months 1 and 2. No significant difference was observed in the average hours of CPAP use between the oral (4.5±2.1; 5.5±2.6) and nasal groups (4.0±2.6; 4.8±2.5) for either month 1 or 2 (P>0.05). The dropout rates were similar for both groups after 8 weeks of therapy. However, patients in the nasal group had higher occurrences of side effects such as nasal congestion, dryness, and air leaks, whereas patients in the oral group experienced more oral dryness and gum pain. Oral delivery of CPAP with the Oracle is an effective and suitable alternative for patients with OSAHS.
NASA Astrophysics Data System (ADS)
Liritzis, Ioannis; Castro, Belen
2013-07-01
Keeping an exact calendar was important to schedule Delphic festivals. The proper day for a prophecy involved a meticulous calculation, which was carried out by learned priests and ancient philosophers. The month of Bysios on average is February, but in reality it could be any 30-day interval between January and March. Bysios starts with a New Moon, but the beginning of the month is not easily pinpointed and thus Bysios and the 7th day for giving an oracle cannot be identified according to the Gregorian calendar. The celestial motions of Lyra and Cygnus with regards to sunrise and sunset are related to the Delphi temple's orientation and the high altitude of steep cliffs of the Faidriades in front of it. Light from the rising Sun shines at the back of the temple where the statue of the god is located, while the appearance and disappearance of Lyra and Cygnus, two of Apollo's favorite constellations in the Delphic sky, mark the period of absence of the god to the Hyperboreans. This coincides with the 3-month interval from the end of December to the middle of March. During the later part of this period, on the 7th day of Bysios, the oracle was given. At any rate, the Delphic calendar was a lunar-solar-stellar one, and was properly adjusted to coincide with and preserve the seasonal movements of those constellations.
PMAG: Relational Database Definition
NASA Astrophysics Data System (ADS)
Keizer, P.; Koppers, A.; Tauxe, L.; Constable, C.; Genevey, A.; Staudigel, H.; Helly, J.
2002-12-01
The Scripps Center for Physical and Chemical Earth References (PACER) was established to help create databases for reference data and make them available to the Earth science community. As part of these efforts PACER supports GERM, REM and PMAG and maintains multiple online databases under the http://earthref.org umbrella website. This website has been built on top of a relational database that allows for the archiving and electronic access to a great variety of data types and formats, permitting data queries using a wide range of metadata. These online databases are designed in Oracle 8.1.5 and they are maintained at the San Diego Supercomputer Center. They are directly available via http://earthref.org/databases/. A prototype of the PMAG relational database is now operational within the existing EarthRef.org framework under http://earthref.org/databases/PMAG/. As will be shown in our presentation, the PMAG design focuses on the general workflow that results in the determination of typical paleomagnetic analyses. This ensures that individual data points can be traced between the actual analysis and the specimen, sample, site, locality and expedition it belongs to. These relations guarantee traceability of the data by distinguishing between original and derived data, where the actual (raw) measurements are performed on the specimen level, and data on the sample level and higher are then derived products in the database. These relations may also serve to recalculate site means when new data become available for that locality. The PMAG data records are extensively described in terms of metadata. These metadata are used when scientists search through this online database in order to view and download their needed data. They minimally include method descriptions for field sampling, laboratory techniques and statistical analyses. They also include selection criteria used during the interpretation of the data and, most importantly, critical information about the site location (latitude, longitude, elevation), geography (continent, country, region), geological setting (lithospheric plate or block, tectonic setting), geological age (age range, timescale name, stratigraphic position) and materials (rock type, classification, alteration state). Each data point and method description is also related to its peer-reviewed reference [citation ID] as archived in the EarthRef Reference Database (ERR). This guarantees direct traceability all the way to its original source, where the user can find the bibliography of each PMAG reference along with every abstract, data table, technical note and/or appendix that are available in digital form and that can be downloaded as PDF/JPEG images and Microsoft Excel/Word data files. This may help scientists and teachers in performing their research since they have easy access to all the scientific data. It also allows for checking potential errors during the digitization process. Please visit the PMAG website at http://earthref.org/PMAG/ for more information.
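The traceability chain described above can be pictured as a miniature schema in which every raw measurement points back through sample, site, locality and expedition, so that site means can always be recomputed from specimen-level rows. The following Python sketch (SQLite, hypothetical table and column names, far simpler than the real Oracle schema) is illustrative only:

    import sqlite3

    # Hypothetical miniature of the traceability chain (the real PMAG schema is
    # much richer and lives in Oracle): raw values sit at specimen level only.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE expedition (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE locality (id INTEGER PRIMARY KEY,
            expedition_id INTEGER REFERENCES expedition(id),
            lat REAL, lon REAL, plate TEXT, rock_type TEXT);
        CREATE TABLE site (id INTEGER PRIMARY KEY,
            locality_id INTEGER REFERENCES locality(id));
        CREATE TABLE sample (id INTEGER PRIMARY KEY,
            site_id INTEGER REFERENCES site(id));
        CREATE TABLE specimen_measurement (specimen_id INTEGER,
            sample_id INTEGER REFERENCES sample(id),
            declination REAL, inclination REAL,
            citation_id TEXT);  -- ties every value to its reference
    """)
    # Site means are derived, not primary, data: they can always be recomputed
    # from specimen_measurement rows when new data arrive for a locality.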
Landscape features, standards, and semantics in U.S. national topographic mapping databases
Varanka, Dalia
2009-01-01
The objective of this paper is to examine the contrast between local, field-surveyed topographical representation and feature representation in digital, centralized databases and to clarify their ontological implications. The semantics of these two approaches are contrasted by examining the categorization of features by subject domains inherent to national topographic mapping. When comparing five USGS topographic mapping domain and feature lists, results indicate that multiple semantic meanings and ontology rules were applied to the initial digital database, but were lost as databases became more centralized at national scales, and common semantics were replaced by technological terms.
Guillaumot, Charlène; Martin, Alexis; Fabri-Ruiz, Salomé; Eléaume, Marc; Saucède, Thomas
2016-01-01
The present dataset provides a case study for species distribution modelling (SDM) and for model testing in a poorly documented marine region. The dataset includes spatially explicit data for echinoid (Echinodermata: Echinoidea) distribution. Echinoids were collected during oceanographic campaigns conducted around the Kerguelen Plateau (+63°/+81°E; -46°/-56°S) since 1872. In addition to the identification of collection specimens from historical cruises, original data from the recent campaigns POKER II (2010) and PROTEKER 2 to 4 (2013-2015) are also provided. In total, five families, ten genera, and 12 echinoid species are recorded in the region of the Kerguelen Plateau. The dataset is complemented with environmental descriptors available and relevant for echinoid ecology and SDM. The environmental data were compiled from different sources and modified to suit the geographic extent of the Kerguelen Plateau, using scripts developed in the R language (R Core Team 2015). Spatial resolution was set at a common 0.1° pixel resolution. Mean seafloor and sea surface temperatures, salinity, and their amplitudes, all derived from the World Ocean Database (Boyer et al. 2013), are made available for the six following decades: 1955-1964, 1965-1974, 1975-1984, 1985-1994, 1995-2004, 2005-2012. Future projections, modified from the Bio-ORACLE database (Tyberghein et al. 2012), are provided for several parameters; they are based on three IPCC scenarios (B1, A1B, A2) for the years 2100 and 2200 (IPCC, 4th report).
Troutman, Sandra M.; Stanley, Richard G.
2003-01-01
This database and accompanying text depict historical and modern reported occurrences of petroleum both in wells and at the surface within the boundaries of the Central Alaska Province. These data were compiled from previously published and unpublished sources and were prepared for use in the 2002 U.S. Geological Survey petroleum assessment of Central Alaska, Yukon Flats region. Indications of petroleum are described as oil or gas shows in wells, oil or gas seeps, or outcrops of oil shale or oil-bearing rock and include confirmed and unconfirmed reports. The scale of the source map limits the spatial resolution (scale) of the database to 1:2,500,000 or smaller.
Assessing animal welfare in sow herds using data on meat inspection, medication and mortality.
Knage-Rasmussen, K M; Rousing, T; Sørensen, J T; Houe, H
2015-03-01
This paper aims to contribute to the development of a cost-effective alternative to expensive on-farm, animal-based welfare assessment systems. The objective of the study was to design an animal welfare index based on central database information (DBWI) and to validate it against an animal welfare index based on on-farm, animal-based measurements (AWI). Data on 63 Danish sow herds with herd sizes of 80 to 2500 sows (average 501) were collected from three central databases containing: meat inspection data collected at animal level in the abattoir, mortality data at herd level from the rendering plants of DAKA, and medicine records at both herd and animal-group level (sows with piglets, weaners or finishers) from the central database Vetstat. Selected measurements taken from these central databases were used to construct the DBWI. The relative welfare impacts of both individual database measurements and the databases overall were assigned in consultation with a panel of 12 experts drawn from production advisory activities, animal science and, in one case, an animal welfare organization. The expert panel weighted each measurement on a scale from 1 (not important) to 5 (very important), and also stated a relative weight for each database's contribution to the DBWI. On the basis of this, the aggregated DBWI was normalized. The aggregation of the AWI was based on a weighted summary of herd prevalences of 20 clinical and behavioural measurements originating from a one-day data collection. The AWI did not show a linear dependency on the DBWI. This suggests that the DBWI is not suited to replace an animal welfare index based on on-farm, animal-based measurements.
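The aggregation itself is simple arithmetic: normalize each measurement, weight it by its expert score, aggregate per database, then combine the databases by their own weights. A Python sketch with invented numbers (not the study's actual weights or prevalences):

    # Invented numbers, for illustration only: measurements normalized to 0..1,
    # expert weights on the 1-5 scale, databases combined by their own weights.
    def weighted_index(values, weights):
        return sum(v * w for v, w in zip(values, weights)) / sum(weights)

    meat_inspection = weighted_index([0.20, 0.50], [4, 3])  # e.g. lesion prevalences
    mortality = weighted_index([0.10], [5])
    medication = weighted_index([0.30, 0.40], [2, 4])
    dbwi = weighted_index([meat_inspection, mortality, medication], [3, 5, 4])
    print(round(dbwi, 3))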
The Deference Due the Oracle: Computerized Text Analysis in a Basic Writing Class.
ERIC Educational Resources Information Center
Otte, George
1989-01-01
Describes how a computerized text analysis program can help students discover error patterns in their writing, and notes how students' responses to analyses can reduce errors and improve their writing. (MM)
17th Floor: A pedagogical oracle from/with Audre Lorde.
Gumbs, Alexis Pauline
2017-10-02
In 1974, warrior poet mother Audre Lorde published the poem "Blackstudies," a freeform dream villanelle about her complicated experience as a Black lesbian feminist English professor at the City University of New York during the dynamic period when students rose up in protest. The university granted open admissions, and cultural nationalists who taught at City University worked to create a Black Studies program. In the poem, she describes her vantage point at this particular historical and pedagogical moment from the seventeenth floor within a dreamscape where she navigates the stereotypes, silences, and urgencies that shaped her experience as an educator. 17th Floor is a poetic oracle that contextualizes the ongoing work of "Blackstudies" (the poem and the practice), and for this reason, it should be activated as a resource for current Black and Brown lesbian educators and everyone who brings complexity and nuance to their teaching settings, their students, each other, and the world more broadly.
Karadimas, H.; Hemery, F.; Roland, P.; Lepage, E.
2000-01-01
In medical software development, the use of databases plays a central role. However, most databases have heterogeneous encodings and data models. Dealing with these variations directly in the application code is error-prone and reduces the potential reuse of the produced software. Several approaches to overcoming these limitations have been proposed in the medical database literature and are presented here. We present a simple solution, based on a Java library and a central metadata description file in XML. This development approach offers several benefits across the software design and development cycles, the main one being simplicity of maintenance. PMID:11079915
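The core of such an approach is that application code asks a metadata layer, rather than a hard-wired schema, where each logical field lives. The sketch below mimics that idea in Python with an invented XML mapping; the paper's actual solution is a Java library:

    import xml.etree.ElementTree as ET

    # Invented XML mapping: the application requests logical fields, and the
    # metadata layer supplies each field's concrete table and column.
    METADATA = """
    <metadata>
      <field name="patient_id" table="ADMISSIONS" column="PAT_ID"/>
      <field name="admit_date" table="ADMISSIONS" column="ADM_DT"/>
    </metadata>
    """

    def build_select(fields):
        spec = {f.get("name"): f for f in ET.fromstring(METADATA).iter("field")}
        cols = ", ".join(spec[name].get("column") for name in fields)
        return f"SELECT {cols} FROM {spec[fields[0]].get('table')}"

    print(build_select(["patient_id", "admit_date"]))
    # -> SELECT PAT_ID, ADM_DT FROM ADMISSIONS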
Centralized database for interconnection system design. [for spacecraft
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1989-01-01
A database application called DFACS (Database, Forms and Applications for Cabling and Systems) is described. The objective of DFACS is to improve the speed and accuracy of interconnection system information flow during the design and fabrication stages of a project, while simultaneously supporting both the horizontal (end-to-end wiring) and the vertical (wiring by connector) design stratagems used by the Jet Propulsion Laboratory (JPL) project engineering community. The DFACS architecture is centered around a centralized database and program methodology which emulates the manual design process hitherto used at JPL. DFACS has been tested and successfully applied to existing JPL hardware tasks with a resulting reduction in schedule time and costs.
NASA Technical Reports Server (NTRS)
Redemann, Jens; Wood, R.; Zuidema, P.; Diner, D.; Van Harten, G.; Xu, F.; Cairns, B.; Knobelspiesse, K.; Segal Rozenhaimer, M.
2017-01-01
Southern Africa produces almost a third of the Earth's biomass burning (BB) aerosol particles. Particles lofted into the mid-troposphere are transported westward over the South-East (SE) Atlantic, home to one of the three permanent subtropical stratocumulus (Sc) cloud decks in the world. The SE Atlantic stratocumulus deck interacts with the dense layers of BB aerosols that initially overlie the cloud deck but later subside and often mix into the clouds. These interactions include adjustments to aerosol-induced solar heating and microphysical effects, and their global representation in climate models remains one of the largest uncertainties in estimates of future climate. Hence, new observations over the SE Atlantic have significant implications for regional and global climate change predictions. The low-level clouds in the SE Atlantic have limited vertical extent and therefore present favorable conditions for exploration with remote sensing. On the other hand, the normal coexistence of BB aerosols and Sc clouds in the same scene also presents significant challenges to conventional remote sensing techniques. We describe first results from NASA's airborne ORACLES (ObseRvations of Aerosols above CLouds and Their IntEractionS) deployments in September 2016 and August 2017. We emphasize the unique role of polarimetric observations by two instruments, the Research Scanning Polarimeter (RSP) and the Airborne Multi-angle SpectroPolarimeter Imager (AirMSPI), and describe how these instruments help address specific ORACLES science objectives. Initial assessments of polarimetric observation accuracy for key cloud and aerosol properties will be presented, insofar as the preliminary nature of the measurements permits.
Online Bibliographic Databases in South Central Pennsylvania: Current Status and Training Needs.
ERIC Educational Resources Information Center
Townley, Charles
A survey of libraries in south central Pennsylvania was designed to identify those that are using or planning to use databases and to assess their perceived training needs. This report describes the methodology and analyzes the responses received from the 57 libraries that completed the questionnaire. Data presented in eight tables are concerned with…
Román Colón, Yomayra A.; Ruppert, Leslie F.
2015-01-01
The U.S. Geological Survey (USGS) has compiled a database consisting of three worksheets of central Appalachian basin natural gas analyses and isotopic compositions from published and unpublished sources, covering 1,282 gas samples from Kentucky, Maryland, New York, Ohio, Pennsylvania, Tennessee, Virginia, and West Virginia. The database includes field and reservoir names, well and State identification numbers, selected geologic reservoir properties, and the composition of the natural gases (methane; ethane; propane; iso-butane [i-butane]; normal butane [n-butane]; iso-pentane [i-pentane]; normal pentane [n-pentane]; cyclohexane; and hexanes). In the first worksheet, location and American Petroleum Institute (API) numbers from public or published sources are provided for 1,231 of the 1,282 gas samples. A second worksheet of 186 gas samples was compiled from published sources and augmented with public location information; it contains carbon, hydrogen, and nitrogen isotopic measurements of natural gas. The third worksheet is a key to all abbreviations in the database. The database can be used to better constrain the stratigraphic distribution, composition, and origin of natural gas in the central Appalachian basin.
Mallik, Saurav; Maulik, Ujjwal
2015-10-01
Gene ranking is an important problem in bioinformatics. Here, we propose a new framework for ranking biomolecules (viz., miRNAs, transcription factors/TFs, and genes) in a multi-informative uterine leiomyoma dataset having both gene expression and methylation data, using a (statistical) eigenvector centrality based approach. First, genes that are both differentially expressed and methylated are identified using the Limma statistical test. A network comprising these genes, corresponding TFs from the TRANSFAC and ITFP databases, and targeting miRNAs from the miRWalk database is then built. The biomolecules are then ranked based on eigenvector centrality. Our proposed method provides better average accuracy in hub gene and non-hub gene classification than other methods. Furthermore, pre-ranked gene set enrichment analysis is applied to the pathway and GO-term databases of the Molecular Signatures Database, using pre-ranked gene lists based on different centrality values to compare the ranking methods. Finally, top novel potential gene markers for uterine leiomyoma are provided. Copyright © 2015 Elsevier Inc. All rights reserved.
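Eigenvector centrality itself can be computed by power iteration on the network's adjacency matrix: a node scores highly when it connects to other high-scoring nodes. The Python sketch below uses an invented four-node network; it is not the authors' code:

    import numpy as np

    def eigenvector_centrality(adj, iters=200):
        # Power iteration toward the leading eigenvector of the adjacency matrix:
        # a node scores highly when its neighbours score highly.
        x = np.ones(adj.shape[0])
        for _ in range(iters):
            x = adj @ x
            x /= np.linalg.norm(x)
        return x

    # Invented four-node gene/TF/miRNA network (symmetric for simplicity).
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    print(np.argsort(-eigenvector_centrality(A)))   # node 1 (degree 3) ranks first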
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleary, Geoff
2014-09-08
This program exercises the robotic elements in Oracle StorageTek tape libraries. This is useful in two known cases: (1) shaking out marginal or new hardware by ensuring hardware robustness under high-duty usage, and (2) ensuring tape libraries are visually interesting during datacenter tours.
Pardo-Hernandez, Hector; Urrútia, Gerard; Barajas-Nava, Leticia A; Buitrago-Garcia, Diana; Garzón, Julieth Vanessa; Martínez-Zapata, María José; Bonfill, Xavier
2017-06-13
Systematic reviews provide the best evidence on the effect of health care interventions, and they rely on comprehensive access to the available scientific literature. Electronic search strategies alone may not suffice, requiring the implementation of a handsearching approach. We have developed a database that provides an Internet-based platform from which handsearching activities can be coordinated, including a procedure to streamline the submission of these references into CENTRAL, the Cochrane Collaboration Central Register of Controlled Trials. We developed the database and performed a descriptive analysis. Through brainstorming and discussion among stakeholders involved in handsearching projects, we designed a database that met the needs identified as essential to the viability of handsearching activities; three handsearching teams then pilot-tested the proposed database. Once the final version was approved, we proceeded to train the staff involved in handsearching. The proposed database is called BADERI (Database of Iberoamerican Clinical Trials and Journals, by its initials in Spanish). BADERI was officially launched in October 2015 and can be accessed at www.baderi.com/login.php free of cost. BADERI has an administration subsection, from which the roles of users are managed; a references subsection, where information associated with identified controlled clinical trials (CCTs) can be entered; a reports subsection, from which reports can be generated to track and analyse the results of handsearching activities; and a built-in free-text search engine. BADERI allows all references to be exported as ProCite files that can be directly uploaded into CENTRAL. To date, 6284 references to CCTs have been uploaded to BADERI and sent to CENTRAL. The identified CCTs were published in a total of 420 journals related to 46 medical specialties, with publication years ranging from 1957 to 2016. BADERI allows the efficient management of handsearching activities across different countries and institutions, and references to all CCTs available in BADERI can be readily submitted to CENTRAL for potential inclusion in systematic reviews.
32 CFR 105.15 - Defense Sexual Assault Incident Database (DSAID).
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 1 2013-07-01 2013-07-01 false Defense Sexual Assault Incident Database (DSAID... Sexual Assault Incident Database (DSAID). (a) Purpose. (1) In accordance with section 563 of Public Law... activities. It shall serve as a centralized, case-level database for the collection and maintenance of...
40 CFR 1400.13 - Read-only database.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...
32 CFR 105.15 - Defense Sexual Assault Incident Database (DSAID).
Code of Federal Regulations, 2014 CFR
2014-07-01
... 32 National Defense 1 2014-07-01 2014-07-01 false Defense Sexual Assault Incident Database (DSAID... Sexual Assault Incident Database (DSAID). (a) Purpose. (1) In accordance with section 563 of Public Law... activities. It shall serve as a centralized, case-level database for the collection and maintenance of...
40 CFR 1400.13 - Read-only database.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...
40 CFR 1400.13 - Read-only database.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...
40 CFR 1400.13 - Read-only database.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...
NASA Technical Reports Server (NTRS)
Baldwin, John; Zendejas, Silvino; Gutheinz, Sandy; Borden, Chester; Wang, Yeou-Fang
2009-01-01
Mission and Assets Database (MADB) Version 1.0 is an SQL database system with a Web user interface to centralize information. The database stores flight project support resource requirements, view periods, antenna information, schedule, and forecast results for use in mid-range and long-term planning of Deep Space Network (DSN) assets.
A DICOM based radiotherapy plan database for research collaboration and reporting
NASA Astrophysics Data System (ADS)
Westberg, J.; Krogh, S.; Brink, C.; Vogelius, I. R.
2014-03-01
Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user-defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications over a secure computer network. The central system is composed of a number of DICOM servers, an SQL database, and in-house developed software services to process the incoming data. A web site within the secure network allows users to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET, and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user-defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.
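The DVH calculation at the heart of such reports reduces to binning the voxel doses inside a structure mask. A minimal Python sketch with a synthetic dose grid (a real system would read the grid and mask from DICOM RT Dose and RT Structure Set objects):

    import numpy as np

    def cumulative_dvh(dose, mask, n_bins=100):
        # Fraction of the structure's volume receiving at least each dose level.
        d = dose[mask]
        edges = np.linspace(0.0, d.max(), n_bins)
        volume = np.array([(d >= e).mean() for e in edges])
        return edges, volume

    rng = np.random.default_rng(0)
    dose = rng.gamma(shape=9.0, scale=5.0, size=(40, 40, 40))  # synthetic grid [Gy]
    mask = np.zeros_like(dose, dtype=bool)
    mask[10:30, 10:30, 10:30] = True                           # synthetic structure
    edges, volume = cumulative_dvh(dose, mask)
    print(f"V20Gy = {volume[np.searchsorted(edges, 20.0)]:.2f}")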
Turning Access into a web-enabled secure information system for clinical trials.
Dongquan Chen; Chen, Wei-Bang; Soong, Mayhue; Soong, Seng-Jaw; Orthner, Helmuth F
2009-08-01
Organizations with limited resources need to conduct clinical studies in a cost-effective but secure way. Clinical data residing in various individual databases need to be easily accessed and secured. Although widely available, digital certification, encryption, and secure web servers have not been implemented as widely, partly due to a lack of understanding of needs and concerns over issues such as cost and difficulty of implementation. The objective of this study was to test the possibility of centralizing various databases and to demonstrate ways of offering an alternative to large-scale, comprehensive, and costly commercial products, especially for simple phase I and II trials, with reasonable convenience and security. We report a working procedure to transform a standalone Access database into a secure web-based information system. For data collection and reporting purposes, we centralized several individual databases and developed and tested a web-based secure server using self-issued digital certificates. The system lacks audit trails, and the cost of development and maintenance may hinder its wide application. The clinical trial databases scattered in various departments of an institution could be centralized into a web-enabled secure information system, and limitations such as the lack of a calendar and audit trail can be partially addressed with additional programming. The centralized web system may provide an alternative to a comprehensive clinical trial management system.
Hand-held computer operating system program for collection of resident experience data.
Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J
2000-11-01
To describe a system for recording resident experience involving hand-held computers running the Palm Operating System (3Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, and portable, and can share data with other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC)-reportable patient encounters. Each resident's data are transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to the central database are accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and program director alike than paper-based or central computer-based systems. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.
ERIC Educational Resources Information Center
Trotter, Andrew
1995-01-01
Constructivism, which holds that knowledge is created out of each individual's own experience, is recapturing researchers' attention. To constructivists, teachers are not omniscient oracles, but nutritionists providing an environment for children to grow their own knowledge. Students might learn division by planning a field trip instead of…
FastLane: An Agile Congestion Signaling Mechanism for Improving Datacenter Performance
2013-05-20
Acknowledged sponsors and affiliates: Amazon Web Services, Google, SAP, Blue Goji, Cisco, Clearstory Data, Cloudera, Ericsson, Facebook, General Electric, Hortonworks, Huawei, Intel, Microsoft, NetApp, Oracle, Quanta, Samsung, Splunk, VMware, and Yahoo!.
Telereference Services: The Potential for Libraries.
ERIC Educational Resources Information Center
Rice, James
1983-01-01
Discussion of applications of teleconferencing, technology which allows information delivery and communication to take place through a television system, highlights three types of systems (videotext, teletext, fully interactive television); systems in use (CEEFAX, Oracle, Prestel, Telidon, Viewtron); public resistance; and telereference services…
CONTRACT ADMINISTRATIVE TRACKING SYSTEM (CATS)
The Contract Administrative Tracking System (CATS) was developed in response to an ORD NHEERL, Mid-Continent Ecology Division (MED)-recognized need for an automated tracking and retrieval system for Cost Reimbursable Level of Effort (CR/LOE) Contracts. CATS is an Oracle-based app...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-23
...; Information Collection; Central Contractor Registration AGENCY: Department of Defense (DOD), General Services... requirement concerning the Central Contractor Registration database. Public comments are particularly invited... Information Collection 9000- 0159, Central Contractor Registration, by any of the following methods...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-22
...; Information Collection; Central Contractor Registration AGENCIES: Department of Defense (DOD), General... collection requirement concerning the Central Contractor Registration database. A notice was published in the... Information Collection 9000- 0159, Central Contractor Registration, by any of the following methods...
Haile, Michael; Anderson, Kim; Evans, Alex; Crawford, Angela
2012-01-01
In part 1 of this series, we outlined the rationale behind the development of a centralized electronic database used to maintain nonsterile compounding formulation records in the Mission Health System, which is a union of several independent hospitals and satellite and regional pharmacies that form the cornerstone of advanced medical care in several areas of western North Carolina. Hospital providers in many healthcare systems require compounded formulations to meet the needs of their patients (in particular, pediatric patients). Before a centralized electronic compounding database was implemented in the Mission Health System, each satellite or regional pharmacy affiliated with that system had a specific set of formulation records, but no standardized format for those records existed. In this article, we describe the quality control, database platform selection, description, implementation, and execution of our intranet database system, which is designed to maintain, manage, and disseminate nonsterile compounding formulation records in the hospitals and affiliated pharmacies of the Mission Health System. The objectives of that project were to standardize nonsterile compounding formulation records, create a centralized computerized database that would increase healthcare staff members' access to formulation records, establish beyond-use dates based on published stability studies, improve quality control, reduce the potential for medication errors related to compounding medications, and (ultimately) improve patient safety.
Analysis of the Finite Precision s-Step Biconjugate Gradient Method
2014-03-13
Center for Future Architecture Research, a member of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA, and the ASPIRE Lab industrial sponsors and affiliates Intel, Google, Nokia, NVIDIA, Oracle, and Samsung. Any opinions, findings, conclusions, or recommendations in this…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grahn, D.; Wright, B.J.; Carnes, B.A.
A research reactor for exclusive use in experimental radiobiology was designed and built at Argonne National Laboratory in the 1960s. It was located in a special addition to Building 202, which housed the Division of Biological and Medical Research. Its location assured easy access for all users to the animal facilities, and it was also near the existing gamma-irradiation facilities. The water-cooled, heterogeneous 200-kW(th) reactor, named JANUS, became the focal point for a range of radiobiological studies gathered under the rubric of "the JANUS program". The program ran from about 1969 to 1992 and included research at all levels of biological organization, from subcellular to organism. More than a dozen moderate- to large-scale studies with the B6CF1 mouse were carried out; these focused on the late effects of whole-body exposure to gamma rays or fission neutrons, in matching exposure regimes. In broad terms, these studies collected data on survival and on the pathology observed at death. A deliberate effort was made to establish the cause of death. This archive describes these late-effects studies and their general findings. The database includes exposure parameters, time of death, and the gross pathology and histopathology in codified form. A series of appendices describes all pathology procedures and codes, treatment or irradiation codes, and the manner in which the data can be accessed in the ORACLE database management system. A series of tables also presents summaries of the individual experiments in terms of radiation quality, sample sizes at entry, mean survival times by sex, and number of gross pathology and histopathology records.
Weissensteiner, Hansi; Schönherr, Sebastian; Specht, Günther; Kronenberg, Florian; Brandstätter, Anita
2010-03-09
Mitochondrial DNA (mtDNA) is widely used for population genetics, forensic DNA fingerprinting and clinical disease association studies. The recent past has uncovered severe problems with mtDNA genotyping, due not only to the genotyping method itself, but mainly to the post-lab transcription, storage and reporting of mtDNA genotypes. eCOMPAGT, a system to store, administer and connect phenotype data to all kinds of genotype data, has now been enhanced with the ability to store mtDNA profiles and to allow their validation, linking to phenotypes and export in numerous formats. mtDNA profiles can be imported from different sequence evaluation programs, compared between evaluations, and their haplogroup affiliations stored. Furthermore, eCOMPAGT has been improved in its database transparency (support of MySQL and Oracle), its security (by using database technology) and its ability to import, manage and store genotypes derived from various genotyping methods (SNPlex, TaqMan, and STRs). It is a software solution designed for project management, laboratory work and the evaluation process all in one. The extended mtDNA version of eCOMPAGT was designed to enable error-free post-laboratory data handling of human mtDNA profiles. This software is suited for small to medium-sized human genetic, forensic and clinical genetic laboratories. The direct support of MySQL and the improved database security options render eCOMPAGT a powerful tool for building an automated workflow architecture for several genotyping methods. eCOMPAGT is freely available at http://dbis-informatik.uibk.ac.at/ecompagt.
NASA Astrophysics Data System (ADS)
Minnett, R.; Jarboe, N.; Koppers, A. A.; Tauxe, L.; Constable, C.
2013-12-01
EarthRef.org is a geoscience umbrella website for several databases and data and model repository portals. These portals, unified in the mandate to preserve their respective data and promote scientific collaboration in their fields, are also disparate in their schemata. The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the paleo- and rock magnetic scientific community to archive their wealth of peer-reviewed raw data and interpretations from studies on natural and synthetic samples and relies on a partially strict subsumptive hierarchical data model. The Geochemical Earth Reference Model (http://earthref.org/GERM/) portal focuses on the chemical characterization of the Earth and relies on two data schemata: a repository of peer-reviewed reservoir geochemistry, and a database of partition coefficients for rocks, minerals, and elements. The Seamount Biogeosciences Network (http://earthref.org/SBN/) encourages the collaboration between the diverse disciplines involved in seamount research and includes the Seamount Catalog (http://earthref.org/SC/) of bathymetry and morphology. All of these portals also depend on the EarthRef Reference Database (http://earthref.org/ERR/) for publication reference metadata and the EarthRef Digital Archive (http://earthref.org/ERDA/), a generic repository of data objects and their metadata. The development of the new MagIC Search Interface (http://earthref.org/MagIC/search/) centers on a reusable platform designed to be flexible enough for largely heterogeneous datasets and to scale up to datasets with tens of millions of records. The HTML5 web application and Oracle 11g database residing at the San Diego Supercomputer Center (SDSC) support the online contribution and editing of complex datasets in a spreadsheet environment and the browsing and filtering of these contributions in the context of thousands of other datasets. EarthRef.org is in the process of implementing this platform across all of its data portals in spite of the wide variety of data schemata and is dedicated to serving the geoscience community with as little effort from the end-users as possible.
Prototype and Evaluation of AutoHelp: A Case-based, Web-accessible Help Desk System for EOSDIS
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.; Thurman, David A.
1999-01-01
AutoHelp is a case-based, Web-accessible help desk for users of the EOSDIS. It uses a combination of advanced computer and Web technologies, knowledge-based systems tools, and cognitive engineering to offload the current person-intensive help desk facilities at the DAACs. As a case-based system, AutoHelp starts with an organized database of previous help requests (questions and answers) indexed by a hierarchical category structure that facilitates recognition by persons seeking assistance. As an initial proof-of-concept demonstration, a month of email help requests to the Goddard DAAC were analyzed and partially organized into help request cases. These cases were then categorized to create a preliminary case indexing system, or category structure. This category structure allows potential users to identify or recognize categories of questions, responses, and sample cases similar to their needs. Year one of this research project focused on the development of a technology demonstration. User assistance 'cases' are stored in an Oracle database in a combination of tables linking prototypical questions with responses and detailed examples from the email help requests analyzed to date. When a potential user accesses the AutoHelp system, a Web server provides a Java applet that displays the category structure of the help case base organized by the needs of previous users. When the user identifies or requests a particular type of assistance, the applet uses Java database connectivity (JDBC) software to access the database and extract the relevant cases. The demonstration will include an on-line presentation of how AutoHelp is currently structured. We will show how a user might request assistance via the Web interface and how the AutoHelp case base provides assistance. The presentation will describe the DAAC data collection, case definition, and organization to date, as well as the AutoHelp architecture. It will conclude with the year 2 proposal to more fully develop the case base, the user interface (including the category structure), the interface with the current DAAC Help System, the development of tools to add new cases, and user testing and evaluation at (perhaps) the Goddard DAAC.
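The retrieval path described above (applet, then JDBC, then case tables) is simple to sketch in plain Java. The sketch below is hypothetical throughout: the table name HELP_CASE, its columns, and the connection string are illustrative stand-ins, since the abstract does not publish the actual schema, and it is written as a standalone class rather than an applet.

```java
import java.sql.*;
import java.util.*;

/** Minimal sketch of an AutoHelp-style case lookup: fetch the stored
 *  question/response cases for one node of the category hierarchy.
 *  Table and column names are hypothetical, not the actual schema. */
public class CaseLookup {
    public static List<String[]> casesForCategory(Connection conn, int categoryId)
            throws SQLException {
        String sql = "SELECT question, response FROM help_case WHERE category_id = ?";
        List<String[]> cases = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, categoryId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    cases.add(new String[] { rs.getString(1), rs.getString(2) });
                }
            }
        }
        return cases;
    }

    public static void main(String[] args) throws SQLException {
        // Connection details are placeholders for a local Oracle instance;
        // the Oracle JDBC thin driver must be on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "autohelp", "secret")) {
            for (String[] c : casesForCategory(conn, 42)) {
                System.out.println("Q: " + c[0] + "\nA: " + c[1]);
            }
        }
    }
}
```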
NASA Astrophysics Data System (ADS)
Jarboe, N.; Minnett, R.; Koppers, A.; Constable, C.; Tauxe, L.; Jonestrask, L.
2017-12-01
The Magnetics Information Consortium (MagIC) supports an online database for the paleo, geo, and rock magnetic communities (https://earthref.org/MagIC). Researchers can upload data into the archive and download data as selected with a sophisticated search system. MagIC has completed the transition from an Oracle-backed, Perl-based, server-oriented website to an ElasticSearch-backed, Meteor-based thick-client technology stack. Using JavaScript on both the server and the client enables increased code reuse and allows many computational operations to be offloaded to the client for faster response. On-the-fly data validation, column header suggestion, and online spreadsheet editing are some of the new features available in the new system. The 3.0 data model, method codes, and vocabulary lists can be browsed via the MagIC website and more easily updated. Source code for MagIC is publicly available on GitHub (https://github.com/earthref/MagIC). The MagIC file format is natively compatible with the PmagPy (https://github.com/PmagPy/PmagPy) paleomagnetic analysis software. MagIC files can now be downloaded from the database and viewed and interpreted in the PmagPy GUI-based tool, pmag_gui. Changes or interpretations of the data can then be saved by pmag_gui in the MagIC 3.0 data format and easily uploaded to the MagIC database. The rate of new contributions to the database has been increasing, with many labs contributing measurement-level data for the first time in the last year. Over a dozen file format conversion scripts are available for translating non-MagIC measurement data files into the MagIC format for easy uploading. We will continue to work with more labs until the whole community has a manageable workflow for contributing their measurement-level data. MagIC will continue to provide a global repository for archiving and retrieving paleomagnetic and rock magnetic data and, with the new system in place, will be able to respond more quickly to the community's requests for changes and improvements.
Introducing the Forensic Research/Reference on Genetics knowledge base, FROG-kb
2012-01-01
Background Online tools and databases based on multi-allelic short tandem repeat polymorphisms (STRPs) are actively used in forensic teaching, research, and investigations. The Fst value of each CODIS marker tends to be low across the populations of the world, and most populations typically have all the common STRP alleles present, diminishing the ability of these systems to discriminate ethnicity. Recently, considerable research has been conducted on single nucleotide polymorphisms (SNPs) as candidates for human identification and description. However, online tools and databases that can be used for forensic research and investigation are limited. Methods The back-end DBMS (Database Management System) for FROG-kb is Oracle version 10. The front end is implemented with specific code using technologies such as Java, Java Servlet, JSP, JQuery, and GoogleCharts. Results We present an open-access web application, FROG-kb (Forensic Research/Reference on Genetics-knowledge base, http://frog.med.yale.edu), that is useful for teaching and research relevant to forensics and can serve as a tool facilitating forensic practice. The underlying data for FROG-kb are provided by the already extensively used and referenced ALlele FREquency Database, ALFRED (http://alfred.med.yale.edu). In addition to displaying data in an organized manner, computational tools that use the underlying allele frequencies with user-provided data are implemented in FROG-kb. These tools are organized by the different published SNP/marker panels available. The web tool currently implements general functions for two types of SNP panels, individual identification and ancestry inference, and a prediction function specific to a phenotype-informative panel for eye color. Conclusion The current online version of FROG-kb already provides new and useful functionality. We expect FROG-kb to grow and expand in capabilities and welcome input from the forensic community in identifying datasets and functionalities that will be most helpful and useful. Thus, the structure and functionality of FROG-kb will be revised in an ongoing process of improvement. This paper describes the state as of early June 2012. PMID:22938150
Robust Variable Selection with Exponential Squared Loss.
Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping
2013-04-01
Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness in a way that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are √n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset, which are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies between using our robust method and the other penalized regression methods, underscoring the importance of developing and applying robust penalized regression methods.
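For concreteness, the loss at the heart of this procedure can be written as below; the notation (tuning constant γ_n controlling the degree of robustness, penalty p_λ such as SCAD) follows standard penalized-regression convention and is a sketch reconstructed from the abstract, not a verbatim reproduction of the paper's display.

```latex
% Exponential squared loss with tuning parameter \gamma_n > 0
% (small residuals behave like least squares; large residuals are bounded):
\[
  \phi_{\gamma_n}(t) \;=\; 1 - \exp\!\bigl(-\,t^{2}/\gamma_n\bigr)
\]
% Penalized estimator with a (nonconvex) penalty p_{\lambda_n}, e.g. SCAD:
\[
  \hat{\beta} \;=\; \arg\min_{\beta}\;
  \sum_{i=1}^{n} \phi_{\gamma_n}\!\bigl(y_i - x_i^{\top}\beta\bigr)
  \;+\; n\sum_{j=1}^{d} p_{\lambda_n}\!\bigl(|\beta_j|\bigr)
\]
```

The boundedness of φ is what caps the influence of any single outlying observation and underlies the 1/2 breakdown point claimed above.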
Freedman, Mark S; Leist, Thomas P; Comi, Giancarlo; Cree, Bruce Ac; Coyle, Patricia K; Hartung, Hans-Peter; Vermersch, Patrick; Damian, Doris; Dangond, Fernando
2017-01-01
Multiple sclerosis (MS) diagnostic criteria have changed since the ORACLE-MS study was conducted; 223 of 616 patients (36.2%) would have met the diagnosis of MS vs clinically isolated syndrome (CIS) using the newer criteria. The objective of this paper is to assess the effect of cladribine tablets in patients with a first clinical demyelinating attack fulfilling newer criteria (McDonald 2010) for MS vs CIS. A post hoc analysis for subgroups of patients retrospectively classified as fulfilling or not fulfilling newer criteria at the first clinical demyelinating attack was conducted. Cladribine tablets 3.5 mg/kg (n = 68) reduced the risk of next attack or three-month confirmed Expanded Disability Status Scale (EDSS) worsening by 74% vs placebo (n = 72; p = 0.0009) in patients meeting newer criteria for MS at baseline. Cladribine tablets 5.25 mg/kg (n = 83) reduced the risk of next attack or three-month confirmed EDSS worsening by 37%, but nominal significance was not reached (p = 0.14). In patients who were still CIS after applying newer criteria, cladribine tablets 3.5 mg/kg (n = 138) reduced the risk of conversion to clinically definite multiple sclerosis (CDMS) by 63% vs placebo (n = 134; p = 0.0003). Cladribine tablets 5.25 mg/kg (n = 121) reduced the risk of conversion by 75% vs placebo (n = 134; p < 0.0001). Regardless of the criteria used to define CIS or MS, 3.5 mg/kg cladribine tablets are effective in patients with a first clinical demyelinating attack. ClinicalTrials.gov registration: The ORACLE-MS study (NCT00725985).
Pike, Katie; Brocklehurst, Peter; Jones, David; Kenyon, Sarah; Salt, Alison; Taylor, David; Marlow, Neil
2012-09-01
Within the ORACLE Children Study Cohort, the authors evaluated the long-term consequences of a diagnosis of confirmed or suspected neonatal necrotising enterocolitis (NEC) at the age of 7 years. Outcomes were assessed using a parental questionnaire, including the Health Utilities Index (HUI-3) to assess functional impairment, and specific medical and behavioural outcomes. Educational outcomes for children in England were explored using national standardised tests. Multiple logistic regression was used to explore independent associates of NEC within the cohort. The authors obtained data for 119 (77%) of 157 children following proven or suspected NEC and compared their outcomes with those of the remaining 6496 children. NEC was associated with an increased risk of neonatal death (OR 14.6 (95% CI 10.4 to 20.6)). At 7 years, NEC conferred an increased risk of all grades of impairment. Adjusting for confounders, risks persisted for any HUI-3 defined functional impairment (adjusted OR 1.55 (1.05, 2.29)), particularly mild impairment (adjusted OR 1.61 (1.03, 2.53)), both in all NEC children and in those with proven NEC, and these associations appeared to be independent. No behavioural or educational associations were confirmed. Following NEC, children were more likely to suffer bowel problems than non-NEC children (adjusted OR 3.96 (2.06, 7.61)). The ORACLE Children Study provided the opportunity for the largest evaluation of school-age outcome following neonatal NEC and demonstrates significant long-term consequences both for gut function (presence of stoma, admission for bowel problems and continuing medical care for gut-related problems) and for motor, sensory and cognitive outcomes as measured using HUI-3.
Cloud Condensation Nuclei Measurements During the First Year of the ORACLES Study
NASA Astrophysics Data System (ADS)
Kacarab, M.; Howell, S. G.; Wood, R.; Redemann, J.; Nenes, A.
2016-12-01
Aerosols have significant impacts on air quality and climate. Their ability to scatter and absorb radiation and to act as cloud condensation nuclei (CCN) plays a very important role in the global climate. Biomass burning organic aerosol (BBOA) can drastically elevate the concentration of CCN in clouds, but the response in droplet number may be strongly suppressed (or even reversed) owing to low supersaturations that may develop from the strong competition for water vapor (Bougiatioti et al. 2016). Understanding and constraining the magnitude of the droplet response to biomass burning plumes is an important component of the aerosol-cloud interaction problem. The southeastern Atlantic (SEA) cloud deck provides a unique opportunity to study these cloud-BBOA interactions for marine stratocumulus, as it is overlain by a large, optically thick biomass burning aerosol plume from Southern Africa during the burning season. The interaction between these biomass burning aerosols and the SEA cloud deck is being investigated in the NASA ObseRvations of Aerosols above Clouds and their intEractionS (ORACLES) study. The CCN activity of aerosol around the SEA cloud deck and the associated biomass burning plume was evaluated during the first year of the ORACLES study with direct measurements of CCN concentration, aerosol size distribution and composition onboard the NASA P-3 aircraft during August and September of 2016. Here we present an analysis of the observed CCN activity of the BBOA in and around the SEA cloud deck and its relationship to aerosol size, chemical composition, and plume mixing and aging. We also evaluate the predicted and observed droplet number sensitivity to aerosol fluctuations and quantify, using the data, the drivers of droplet number variability (vertical velocity or aerosol properties) as a function of biomass burning plume characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Edward J., Jr.; Henry, Karen Lynne
Sandia National Laboratories develops technologies to: (1) sustain, modernize, and protect our nuclear arsenal; (2) prevent the spread of weapons of mass destruction; (3) provide new capabilities to our armed forces; (4) protect our national infrastructure; (5) ensure the stability of our nation's energy and water supplies; and (6) defend our nation against terrorist threats. We identified the need for a single overarching Integrated Workplace Management System (IWMS) that would enable us to focus on customer missions and improve FMOC processes. Our team selected highly configurable commercial-off-the-shelf (COTS) software with out-of-the-box workflow processes that integrate strategic planning, project management, facility assessments, and space management, and that can interface with existing systems such as Oracle, PeopleSoft, Maximo, Bentley, and FileNet. We selected the IWMS from Tririga, Inc. Facility Management System (FMS) benefits are: (1) create a single reliable source for facility data; (2) improve transparency with oversight organizations; (3) streamline FMOC business processes with a single, integrated facility-management tool; (4) give customers simple tools and real-time information; (5) reduce indirect costs; (6) replace approximately 30 FMOC systems and 60 homegrown tools (such as Microsoft Access databases); and (7) integrate with FIMS.
A WebGIS-based system for analyzing and visualizing air quality data for Shanghai Municipality
NASA Astrophysics Data System (ADS)
Wang, Manyi; Liu, Chaoshun; Gao, Wei
2014-10-01
An online visual analytical system based on Java Web and WebGIS for air quality data for Shanghai Municipality was designed and implemented to quantitatively analyze and qualitatively visualize air quality data. By analyzing the architecture of WebGIS and Java Web, we first designed the overall system architecture, then specified the software and hardware environment and determined the main function modules of the system. The visual system was built with the DIV + CSS layout method combined with JSP, JavaScript, and other languages in the Java programming environment. Moreover, the Struts, Spring, and Hibernate frameworks (SSH) were integrated into the system for ease of maintenance and expansion. To provide mapping services and spatial analysis functions, we selected ArcGIS for Server as the GIS server. We also used an Oracle database and an ESRI file geodatabase to store spatial and non-spatial data and to ensure data security. In addition, response data from the Web server are resampled to enable rapid visualization in the browser. Experimental results indicate that the system responds quickly to user requests and efficiently returns accurate processing results.
Flexible data registration and automation in semiconductor production
NASA Astrophysics Data System (ADS)
Dudde, Ralf; Staudt-Fischbach, Peter; Kraemer, Benedict
1997-08-01
The need for cost reduction and flexibility in semiconductor production will result in wider application of computer-based automation systems. With the setup of a new and advanced CMOS semiconductor line at the Fraunhofer Institute for Silicon Technology [ISIT, Itzehoe (D)], a new line information system (LIS) was introduced, based on an advanced model for the underlying data structure. This data model was implemented in an ORACLE RDBMS. A cellworks-based system (JOSIS) was used for the integration of the production equipment, communication, and automated database bookings and information retrievals. During the ramp-up of the production line, this new system is used for fab control. The data model and the cellworks-based system integration are explained. The system provides an online overview of work in progress in the fab, lot order history, and equipment status and history. Based on these figures, improved production and cost monitoring and optimization are possible. First examples of the information gained by this system are presented. The modular setup of the LIS will allow easy data exchange with additional software tools such as schedulers, different fab control systems such as PROMIS, and accounting systems such as SAP. Modifications necessary for the integration of PROMIS are described.
Establishment of an international database for genetic variants in esophageal cancer.
Vihinen, Mauno
2016-10-01
The establishment of a database has been suggested in order to collect, organize, and distribute genetic information about esophageal cancer. The World Organization for Specialized Studies on Diseases of the Esophagus and the Human Variome Project will be in charge of a central database of information about esophageal cancer-related variations from publications, databases, and laboratories; in addition to genetic details, clinical parameters will also be included. The aim will be to get all the central players in research, clinical, and commercial laboratories to contribute. The database will follow established recommendations and guidelines. The database will require a team of dedicated curators with different backgrounds. Numerous layers of systematics will be applied to facilitate computational analyses. The data items will be extensively integrated with other information sources. The database will be distributed as open access to ensure exchange of the data with other databases. Variations will be reported in relation to reference sequences on three levels (DNA, RNA, and protein) whenever applicable. In the first phase, the database will concentrate on genetic variations, including both somatic and germline variations for susceptibility genes. Additional types of information can be integrated at a later stage. © 2016 New York Academy of Sciences.
Biomass Burning Organic Aerosol as a Modulator of Droplet Number in the Southern Atlantic
NASA Astrophysics Data System (ADS)
Kacarab, M.; Howell, S. G.; Small Griswold, J. D.; Thornhill, K. L., II; Wood, R.; Redemann, J.; Nenes, A.
2017-12-01
Aerosols play a significant yet highly variable role in local and global air quality and climate. They act as cloud condensation nuclei (CCN) and both scatter and absorb radiation, lending a large source of uncertainty to climate predictions. Biomass burning organic aerosol (BBOA) can drastically elevate CCN concentrations, but the response in cloud droplet number may be suppressed or even reversed due to low supersaturations that develop from strong competition for water vapor. Constraining the droplet response to BBOA is a key factor in understanding aerosol-cloud interactions. The southeastern Atlantic (SEA) cloud deck off the west coast of central Africa is a prime opportunity to study these cloud-BBOA interactions for marine stratocumulus, since during the southern-hemisphere winter the SEA cloud deck is overlain by a large, optically thick BBOA plume. The NASA ObseRvations of Aerosols above Clouds and their intEractionS (ORACLES) study focuses on increasing the understanding of how this BBOA affects the SEA cloud deck. Measurements of CCN concentration, aerosol size distribution and composition, updraft velocities, and cloud droplet number in and around the SEA cloud deck and the associated BBOA plume were taken aboard the NASA P-3 aircraft during the first two years of the ORACLES campaign, in September 2016 and August 2017. Here we evaluate the predicted and observed droplet number sensitivity to aerosol fluctuations and quantify, using the data, the drivers of droplet number variability (vertical velocity or aerosol properties) as a function of biomass burning plume characteristics. Over the course of the campaign, different levels of BBOA influence in the marine boundary layer (MBL) were observed, allowing for comparison of cloud droplet number, the hygroscopicity parameter (κ), and maximum in-cloud supersaturation over a range of "clean" and "dirty" conditions. Droplet number sensitivities to aerosol concentration, κ, and vertical updraft velocity are also evaluated. Generally, an increase in BBOA led to increased droplet number along with decreased κ and maximum in-cloud supersaturation (reflecting increased competition for water vapor). This work seeks to contribute to an increased understanding of how CCN and aerosol properties affect the radiative and hydrological properties of the cloud.
ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.
Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y
2008-08-12
New systems biology studies require researchers to understand how interplay among myriads of biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform, driven by database management systems and able to perform bi-directional data processing-to-visualization with declarative querying capabilities, is needed. We developed ProteoLens as a Java-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Language (DDL) and Data Manipulation Language (DML) may be specified. The robust query languages embedded directly within the visualization software help users to bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network data in standard Graph Modeling Language (GML) format, which enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens decouples complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms, descriptions, etc.; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges according to associated data values. We demonstrated the advantages of these new capabilities through three biological network visualization case studies: a human disease association network, a drug-target interaction network and a protein-peptide mapping network. The architectural design of ProteoLens makes it suitable for bioinformatics expert data analysts who are experienced with relational database management to perform large-scale integrated network visual explorations. ProteoLens is a promising visual analytic platform that will facilitate knowledge discoveries in future network and systems biology studies.
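The two-phase flow described above (define association rules, then apply them while rendering) is easy to sketch with plain JDBC against one of the two supported backends. Everything below is illustrative: the table name, columns, and connection details are hypothetical, since the paper's actual schema is not reproduced here, and the SQL targets PostgreSQL syntax (an Oracle variant would differ slightly).

```java
import java.sql.*;

/** Illustrative two-phase flow in the spirit of ProteoLens:
 *  (1) DDL creates a node-attribute table (the source of an "association rule"),
 *  (2) DML loads an attribute value, (3) a query joins node IDs to attributes
 *  so a renderer could, e.g., color nodes by expression level.
 *  All identifiers here are hypothetical. */
public class NetworkAnnotation {
    public static void main(String[] args) throws SQLException {
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/proteolens", "demo", "demo");
             Statement st = c.createStatement()) {

            // Phase 1: define where node attributes live (DDL).
            st.execute("CREATE TABLE IF NOT EXISTS node_expression ("
                     + "node_id VARCHAR(64) PRIMARY KEY, expr_level DOUBLE PRECISION)");

            // Load a sample attribute value (DML).
            st.execute("INSERT INTO node_expression VALUES ('TP53', 2.31) "
                     + "ON CONFLICT (node_id) DO NOTHING");

            // Phase 2: apply the rule - map node IDs to attribute values.
            try (ResultSet rs = st.executeQuery(
                    "SELECT node_id, expr_level FROM node_expression")) {
                while (rs.next()) {
                    System.out.printf("%s -> %.2f%n", rs.getString(1), rs.getDouble(2));
                }
            }
        }
    }
}
```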
Pan, Deyun; Sun, Ning; Cheung, Kei-Hoi; Guan, Zhong; Ma, Ligeng; Holford, Matthew; Deng, Xingwang; Zhao, Hongyu
2003-11-07
To date, many genomic and pathway-related tools and databases have been developed to analyze microarray data. In published web-based applications, however, complex pathways have been displayed as static image files that may not be up-to-date or are time-consuming to rebuild. In addition, gene expression analyses focus on individual probes and genes with little or no consideration of pathways. These approaches reveal little information about the pathways that are key to a full understanding of the building blocks of biological systems. Therefore, there is a need for tools that can generate pathways without manually building images and that allow gene expression data to be integrated and analyzed at the pathway level for such experimental organisms as Arabidopsis. We have developed PathMAPA, a web-based application written in Java that can be easily accessed over the Internet. An Oracle database is used to store, query, and manipulate the large amounts of data that are involved. PathMAPA allows its users to (i) upload and populate microarray data into a database; (ii) integrate gene expression with enzymes of the pathways; (iii) generate pathway diagrams without building image files manually; (iv) visualize gene expression for each pathway at enzyme, locus, and probe levels; and (v) perform statistical tests at pathway, enzyme, and gene levels. PathMAPA can be used to examine Arabidopsis thaliana gene expression patterns associated with metabolic pathways. PathMAPA provides two unique features for the gene expression analysis of Arabidopsis thaliana: (i) automatic generation of pathways associated with gene expression and (ii) statistical tests at the pathway level. The first feature allows for the periodic updating of genomic data for pathways, while the second can provide insight into how treatments affect relevant pathways for the selected experiment(s).
Comparative Analysis of Data Structures for Storing Massive Tins in a Dbms
NASA Astrophysics Data System (ADS)
Kumar, K.; Ledoux, H.; Stoter, J.
2016-06-01
Point cloud data are an important source of 3D geoinformation. Modern 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for an area of just a few square kilometers. With the size of point clouds exceeding the billion mark for even a small area, there is a need for their efficient storage and management. These point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible, as confirmed by the initial implementations in Oracle Spatial SDO_PC and the PostgreSQL Point Cloud extension. But to analyse and extract useful information from point clouds, we need more than just points, i.e. we require the surface defined by these points in space. There are different ways to represent surfaces in GIS, including grids, TINs, boundary representations, etc. In this study, we investigate database solutions for the storage and management of massive TINs. The classical (face- and edge-based) and compact (star-based) data structures are discussed at length with reference to their structure, advantages and limitations in handling massive triangulations, and are compared with the current solution of the PostGIS Simple Feature. The main test dataset is the TIN generated from the third national elevation model of the Netherlands (AHN3), with a point density of over 10 points/m2. The PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying the limitations of existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.
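To make the star-based idea concrete: the whole triangulation can live in a single table where each row holds one vertex and the ordered list of its incident vertices (its star), with triangles left implicit as consecutive pairs within a star. The DDL below is a hedged sketch of that structure for PostgreSQL/PostGIS issued over JDBC, not the paper's exact schema.

```java
import java.sql.*;

/** Hedged sketch of a star-based TIN table: one row per vertex with its
 *  ordered star (ids of adjacent vertices). Schema is illustrative only;
 *  targets PostgreSQL with the PostGIS extension installed. */
public class StarTin {
    public static void main(String[] args) throws SQLException {
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/tins", "demo", "demo");
             Statement st = c.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS tin_star ("
                     + " vid  BIGINT PRIMARY KEY,"   // vertex id
                     + " geom GEOMETRY(POINTZ),"     // vertex position (PostGIS type)
                     + " star BIGINT[])");           // ordered ids of adjacent vertices
            // A triangle (v1, v2, v3) exists iff v2 and v3 appear consecutively
            // in v1's star, so no separate face table is needed.
            System.out.println("star-based TIN table ready");
        }
    }
}
```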
Federal Emergency Management Information System (FEMIS) system administration guide, version 1.4.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Burnett, R.A.; Carter, R.J.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides the information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment. The UNIX server provides Oracle relational database management system (RDBMS) services, ARC/INFO GIS (optional) capabilities, and basic file management services. PNNL-developed utilities that reside on the server include the Notification Service, the Command Service that executes the evacuation model, and AutoRecovery. To operate FEMIS, the Application Software must have access to a site-specific FEMIS emergency management database. Data that pertain to an individual EOC's jurisdiction are stored on the EOC's local server. Information that needs to be accessible to all EOCs is automatically distributed by the FEMIS database to the other EOCs at the site.
Tasneem, Asba; Aberle, Laura; Ananth, Hari; Chakraborty, Swati; Chiswell, Karen; McCourt, Brian J.; Pietrobon, Ricardo
2012-01-01
Background The ClinicalTrials.gov registry provides information regarding characteristics of past, current, and planned clinical studies to patients, clinicians, and researchers; in addition, registry data are available for bulk download. However, issues related to data structure, nomenclature, and changes in data collection over time present challenges to the aggregate analysis and interpretation of these data in general and to the analysis of trials according to clinical specialty in particular. Improving usability of these data could enhance the utility of ClinicalTrials.gov as a research resource. Methods/Principal Results The purpose of our project was twofold. First, we sought to extend the usability of ClinicalTrials.gov for research purposes by developing a database for aggregate analysis of ClinicalTrials.gov (AACT) that contains data from the 96,346 clinical trials registered as of September 27, 2010. Second, we developed and validated a methodology for annotating studies by clinical specialty, using a custom taxonomy employing Medical Subject Heading (MeSH) terms applied by an NLM algorithm, as well as MeSH terms and other disease condition terms provided by study sponsors. Clinical specialists reviewed and annotated MeSH and non-MeSH disease condition terms, and an algorithm was created to classify studies into clinical specialties based on both MeSH and non-MeSH annotations. False positives and false negatives were evaluated by comparing algorithmic classification with manual classification for three specialties. Conclusions/Significance The resulting AACT database features study design attributes parsed into discrete fields, integrated metadata, and an integrated MeSH thesaurus, and is available for download as Oracle extracts (.dmp file and text format). This publicly-accessible dataset will facilitate analysis of studies and permit detailed characterization and analysis of the U.S. clinical trials enterprise as a whole. In addition, the methodology we present for creating specialty datasets may facilitate other efforts to analyze studies by specialty groups. PMID:22438982
[Research on Zhejiang blood information network and management system].
Yan, Li-Xing; Xu, Yan; Meng, Zhong-Hua; Kong, Chang-Hong; Wang, Jian-Min; Jin, Zhen-Liang; Wu, Shi-Ding; Chen, Chang-Shui; Luo, Ling-Fei
2007-02-01
This research aimed to develop the first province-level centralized blood information database and real-time communication network in China. Multiple technologies were used, including separate operation of local area network databases, a real-time data concentration and distribution mechanism, off-site (allopatric) backup, and an optical fiber virtual private network (VPN). As a result, the centralized blood information database and management system were successfully constructed, covering all of Zhejiang province, and real-time exchange of blood data was realised. In conclusion, its implementation promotes voluntary blood donation and ensures blood safety in Zhejiang, and in particular strengthens the rapid response to public health emergencies. This project lays the foundation for centralized testing and allocation among blood banks in Zhejiang and can serve as a reference for contemporary blood bank information systems in China.
Preparing College Students To Search Full-Text Databases: Is Instruction Necessary?
ERIC Educational Resources Information Center
Riley, Cheryl; Wales, Barbara
Full-text databases allow Central Missouri State University's clients to access some of the serials that libraries have had to cancel due to escalating subscription costs; EbscoHost, the subject of this study, is one such database. The database is available free to all Missouri residents. A survey was designed consisting of 21 questions intended…
NASA Astrophysics Data System (ADS)
Castagnoli, Giuseppe
2018-03-01
The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete. We complete it in three steps: (i) extending the representation to the process of setting the problem, (ii) relativizing the extended representation to the problem solver, from whom the problem setting must be concealed, and (iii) symmetrizing the relativized representation for time reversal to represent the reversibility of the underlying physical process. The third step projects the input state of the representation, in which the problem solver is completely ignorant of the setting and thus of the solution of the problem, onto one in which she knows half of the solution (half of the information specifying it when the solution is an unstructured bit string). Completing the physical representation shows that the number of computation steps (oracle queries) required to solve any oracle problem in an optimal quantum way should be that of a classical algorithm endowed with advance knowledge of half of the solution.
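A worked instance in the standard unstructured-search setting (an illustration consistent with the abstract's claim, not taken from the paper): for a hidden n-bit string there are N = 2^n candidates; a classical solver that already knows half of the bits faces 2^(n/2) remaining candidates, which matches Grover's well-known quantum query count.

```latex
\[
  \underbrace{O\!\bigl(2^{\,n/2}\bigr)}_{\text{classical, half the solution known}}
  \;=\; O\!\bigl(\sqrt{2^{\,n}}\bigr)
  \;=\; \underbrace{O\!\bigl(\sqrt{N}\bigr)}_{\text{Grover's algorithm}}
\]
```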
Gunay, Osman; Toreyin, Behçet Ugur; Kose, Kivanc; Cetin, A Enis
2012-05-01
In this paper, an entropy-functional-based online adaptive decision fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several subalgorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular subalgorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing subalgorithms. It is assumed that there is an oracle, who is usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm. In this case, image data arrive sequentially, and the oracle is the security guard of the forest lookout tower, verifying the decision of the combined algorithm. The simulation results are presented.
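In outline, the fusion rule can be sketched as follows; the notation is assumed for illustration and the paper's exact update is not reproduced here. Each subalgorithm i emits a confidence d_i(t) centered around zero, the compound decision is their weighted sum, and after oracle feedback y(t) the weight vector is corrected by an entropic projection onto a convex set C_i(t) associated with subalgorithm i.

```latex
% Fused decision from M subalgorithm confidences d_i(t) with weights w_i(t):
\[
  \hat{y}(t) \;=\; \sum_{i=1}^{M} w_i(t)\, d_i(t)
\]
% Online correction after oracle feedback y(t): entropic projection of the
% weight vector w(t) onto the convex constraint set C_i(t):
\[
  w(t+1) \;=\; \Pi_{C_i(t)}\bigl(w(t)\bigr)
\]
```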
Variable Selection for Support Vector Machines in Moderately High Dimensions
Zhang, Xiang; Wu, Yichao; Wang, Lan; Li, Runze
2015-01-01
The support vector machine (SVM) is a powerful binary classification tool with high accuracy and great flexibility. It has achieved great success, but its performance can be seriously impaired if many redundant covariates are included. Some efforts have been devoted to studying variable selection for SVMs, but asymptotic properties, such as variable selection consistency, are largely unknown when the number of predictors diverges to infinity. In this work, we establish a unified theory for a general class of nonconvex penalized SVMs. We first prove that in ultra-high dimensions, there exists one local minimizer to the objective function of nonconvex penalized SVMs possessing the desired oracle property. We further address the problem of nonunique local minimizers by showing that the local linear approximation algorithm is guaranteed to converge to the oracle estimator even in the ultra-high dimensional setting if an appropriate initial estimator is available. This condition on the initial estimator is verified to be automatically valid as long as the dimensions are moderately high. Numerical examples provide supportive evidence. PMID:26778916
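In standard notation (a sketch; the paper's precise regularity conditions and penalty class are omitted), the estimators under study minimize hinge loss plus a nonconvex penalty:

```latex
\[
  \hat{\beta} \;=\; \arg\min_{\beta}\;
  \frac{1}{n}\sum_{i=1}^{n}\bigl[\,1 - y_i\, x_i^{\top}\beta\,\bigr]_{+}
  \;+\; \sum_{j=1}^{p} p_{\lambda}\!\bigl(|\beta_j|\bigr)
\]
% [u]_+ = max(u, 0) is the hinge loss; p_\lambda is a nonconvex penalty
% (e.g., SCAD or MCP) chosen so that the oracle property can hold.
```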
Fusion set selection with surrogate metric in multi-atlas based image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Tingting; Ruan, Dan
2016-02-01
Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.
A bittersweet story: the true nature of the laurel of the Oracle of Delphi.
Harissis, Haralampos V
2014-01-01
It is known from ancient sources that "laurel," identified with sweet bay, was used at the ancient Greek oracle of Delphi. The Pythia, the priestess who spoke the prophecies, purportedly used laurel as a means to inspire her divine frenzy. However, the clinical symptoms of the Pythia, as described in ancient sources, cannot be attributed to the use of sweet bay, which is harmless. A review of contemporary toxicological literature indicates that it is oleander that causes symptoms similar to those of the Pythia, while a closer examination of ancient literary texts indicates that oleander was often included under the generic term laurel. It is therefore likely that it was oleander, not sweet bay, that the Pythia used before the oracular procedure. This explanation could also shed light on other ancient accounts regarding the alleged spirit and chasm of Delphi, accounts that have been the subject of intense debate and interdisciplinary research for the last hundred years.
Reliability of programs specified with equational specifications
NASA Astrophysics Data System (ADS)
Nikolik, Borislav
Ultrareliability is desirable (and sometimes a demand of regulatory authorities) for safety-critical applications, such as commercial flight-control programs, medical applications, nuclear reactor control programs, etc. A method is proposed, called the Term Redundancy Method (TRM), for obtaining ultrareliable programs through specification-based testing. Current specification-based testing schemes need a prohibitively large number of testcases for estimating ultrareliability. They assume availability of an accurate program-usage distribution prior to testing, and they assume the availability of a test oracle. It is shown how to obtain ultrareliable programs (probability of failure near zero) with a practical number of testcases, without accurate usage distribution, and without a test oracle. TRM applies to the class of decision Abstract Data Type (ADT) programs specified with unconditional equational specifications. TRM is restricted to programs that do not exceed certain efficiency constraints in generating testcases. The effectiveness of TRM in failure detection and recovery is demonstrated on formulas from the aircraft collision avoidance system TCAS.
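To give a flavor of testing against an equational specification (a generic illustration only; TRM's term-redundancy machinery goes beyond this), the stack axiom pop(push(s, x)) = s can serve directly as the pass/fail criterion, so no separate test oracle is required:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Random;

/** Generic specification-based check of one stack axiom:
 *  pop(push(s, x)) == s. Illustration only; the Term Redundancy
 *  Method adds redundancy machinery beyond this simple check. */
public class StackAxiomTest {
    public static void main(String[] args) {
        Random rnd = new Random(7);
        for (int trial = 0; trial < 1_000; trial++) {
            Deque<Integer> s = new ArrayDeque<>();
            int depth = rnd.nextInt(10);
            for (int i = 0; i < depth; i++) s.push(rnd.nextInt());
            Deque<Integer> before = new ArrayDeque<>(s);  // snapshot of s
            s.push(rnd.nextInt());
            s.pop();
            // The equation itself decides pass/fail - no external oracle.
            // (ArrayDeque does not override equals, so compare contents.)
            if (!s.toString().equals(before.toString())) {
                throw new AssertionError("axiom pop(push(s,x)) = s violated");
            }
        }
        System.out.println("1000 randomized trials passed");
    }
}
```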
Dennis M. May
1998-01-01
Discusses a regional composite approach to managing timber product output data in a relational database. Describes the development and structure of the regional composite database and demonstrates its use in addressing everyday timber product output information needs.
Geochronology Database for Central Colorado
Klein, T.L.; Evans, K.V.; deWitt, E.H.
2010-01-01
This database is a compilation of published and some unpublished isotopic and fission track age determinations in central Colorado. The compiled area extends from the southern Wyoming border to the northern New Mexico border and from approximately the longitude of Denver on the east to Gunnison on the west. Data for the tephrochronology of Pleistocene volcanic ash, carbon-14, Pb-alpha, common-lead, and U-Pb determinations on uranium ore minerals have been excluded.
Virtual Queue in a Centralized Database Environment
NASA Astrophysics Data System (ADS)
Kar, Amitava; Pal, Dibyendu Kumar
2010-10-01
Today is the era of the Internet: whether gathering knowledge, planning a holiday, or booking a ticket, almost everything can be done online. This paper calculates various queuing measures for bookings or purchases made over the Internet, subject to limits on the number of tickets or seats. Such transactions involve many database reads and writes. The paper treats the time of a service request as the arrival and the time needed to return the required information as the service, and from these estimates the arrival and service distributions and the various queuing measures. For simplicity, the database is treated as a centralized database, since the alternative of a distributed database would complicate the calculation.
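As a concrete instance of the queuing measures involved, if the fitted model turns out to be M/M/1 (Poisson arrivals of booking requests at rate λ, exponential service of the database round trip at rate μ > λ; an assumption for illustration, since the paper estimates the distributions from data), the standard closed forms are:

```latex
\[
  \rho = \frac{\lambda}{\mu}, \qquad
  L = \frac{\rho}{1-\rho}, \qquad
  W = \frac{1}{\mu - \lambda}, \qquad
  W_q = \frac{\lambda}{\mu\,(\mu - \lambda)}
\]
% \rho: server utilization; L: mean number in system;
% W: mean time in system; W_q: mean wait before service.
```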
Adopting a corporate perspective on databases. Improving support for research and decision making.
Meistrell, M; Schlehuber, C
1996-03-01
The Veterans Health Administration (VHA) is at the forefront of designing and managing health care information systems that accommodate the needs of clinicians, researchers, and administrators at all levels. Rather than using one single-site, centralized corporate database, VHA has constructed several large databases with different configurations to meet the needs of users with different perspectives. The largest VHA database is the Decentralized Hospital Computer Program (DHCP), a multisite, distributed data system that uses decoupled hospital databases. The centralization of DHCP policy has promoted data coherence, whereas the decentralization of DHCP management has permitted system development to be done with maximum relevance to the users' local practices. A more recently developed VHA data system, the Event Driven Reporting system (EDR), uses multiple, highly coupled databases to provide workload data at facility, regional, and national levels. The EDR automatically posts a subset of DHCP data to local and national VHA management. The development of the EDR illustrates how adoption of a corporate perspective can offer significant database improvements at reasonable cost and with modest impact on the legacy system.
77 FR 39687 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-05
..., Form, and OMB Number: Defense Sexual Assault Incident Database (DSAID); OMB Control Number 0704-0482... sexual assault data collected by the Military Services. This database shall be a centralized, case-level database for the uniform collection of data regarding incidence of sexual assaults involving persons...
Code of Federal Regulations, 2010 CFR
2010-10-01
... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...
Code of Federal Regulations, 2014 CFR
2014-10-01
... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...
Code of Federal Regulations, 2012 CFR
2012-10-01
... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...
Code of Federal Regulations, 2012 CFR
2012-10-01
... information as recorded in the Central Contractor Registration (CCR) database at the time of award. (2) When... record is active in the CCR database; and (ii) The contractor's Data Universal Numbering System (DUNS... database, the contracting officer shall process a novation or change-of-name agreement, or an address...
Code of Federal Regulations, 2013 CFR
2013-10-01
... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...
48 CFR 52.204-7 - Central Contractor Registration.
Code of Federal Regulations, 2012 CFR
2012-10-01
... (CCR) database means the primary Government repository for Contractor information required for the...) for the same concern. Registered in the CCR database means that— (1) The Contractor has entered all... Federal Funding Accountability and Transparency Act of 2006 (see subpart 4.14), into the CCR database; and...
Code of Federal Regulations, 2010 CFR
2010-10-01
... information as recorded in the Central Contractor Registration (CCR) database at the time of award. (2) When... record is active in the CCR database; and (ii) The contractor's Data Universal Numbering System (DUNS... database, the contracting officer shall process a novation or change-of-name agreement, or an address...
Code of Federal Regulations, 2011 CFR
2011-10-01
... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...
Code of Federal Regulations, 2011 CFR
2011-10-01
... information as recorded in the Central Contractor Registration (CCR) database at the time of award. (2) When... record is active in the CCR database; and (ii) The contractor's Data Universal Numbering System (DUNS... database, the contracting officer shall process a novation or change-of-name agreement, or an address...
The Chemical Aquatic Fate and Effects (CAFE) database, developed by NOAA’s Emergency Response Division (ERD), is a centralized data repository that allows for unrestricted access to fate and effects data. While this database was originally designed to help support decisions...
Multi-Resource Fair Queueing for Packet Processing
2012-06-19
[Abstract not available in this excerpt; the indexed text consists only of funding acknowledgments naming Huawei, Intel, MarkLogic, Microsoft, NetApp, Oracle, Quanta, Splunk, VMware, DARPA (contract #FA8650-11-C-7136), a Google PhD Fellowship, and gifts from Amazon Web Services, Google, SAP, Blue Goji, Cisco, Cloudera, Ericsson, General Electric, and Hewlett Packard.]
Computer Aided Management for Information Processing Projects.
ERIC Educational Resources Information Center
Akman, Ibrahim; Kocamustafaogullari, Kemal
1995-01-01
Outlines the nature of information processing projects and discusses some project management programming packages. Describes an in-house interface program developed to utilize a selected project management package (TIMELINE) by using Oracle Data Base Management System tools and Pascal programming language for the management of information system…
Analytical Applications of NMR: Summer Symposium on Analytical Chemistry.
ERIC Educational Resources Information Center
Borman, Stuart A.
1982-01-01
Highlights a symposium on analytical applications of nuclear magnetic resonance spectroscopy (NMR), discussing pulse Fourier transformation technique, two-dimensional NMR, solid state NMR, and multinuclear NMR. Includes description of ORACLE, an NMR data processing system at Syracuse University using real-time color graphics, and algorithms for…
Apollo: Changing the Way We Work.
ERIC Educational Resources Information Center
Schroeder, John R.; Bleed, Ron
In January 1994, Arizona's Maricopa Community College District issued a request for proposals to develop new administrative software applications to solve problems related to high maintenance costs for existing systems and difficulties in updating software. The result was the Apollo Project, in which the District contracted with Oracle Corporation…
A resolution congratulating Oracle Team USA for winning the 34th America's Cup.
Sen. Feinstein, Dianne [D-CA
2013-11-05
Senate - 11/05/2013 Submitted in the Senate, considered, and agreed to without amendment and with a preamble by Unanimous Consent.
Leon, Antonette E; Fabricio, Aline S C; Benvegnù, Fabio; Michilin, Silvia; Secco, Annamaria; Spangaro, Omar; Meo, Sabrina; Gion, Massimo
2011-01-01
The Nanosized Cancer Polymarker Biochip Project (RBLA03S4SP) funded by an Italian MIUR-FIRB grant (Italian Ministry of University and Research - Investment Funds for Basic Research) has led to the creation of a free-access dynamic website, available at the web address https://serviziweb.ulss12.ve.it/firbabo, and of a centralized database with password-restricted access. The project network is composed of 9 research units (RUs) and has been active since 2005. The aim of the FIRB project was the design, production and validation of optoelectronic and chemoelectronic biosensors for the simultaneous detection of a novel class of cancer biomarkers associated with immunoglobulins of the M class (IgM) for early diagnosis of cancer. Biomarker immune complexes (BM-ICs) were assessed on samples of clinical cases and matched controls for breast, colorectal, liver, ovarian and prostate malignancies. This article describes in detail the architecture of the project website, the central database application, and the biobank developed for the FIRB Nanosized Cancer Polymarker Biochip Project. The article also illustrates many unique aspects that should be considered when developing a database within a multidisciplinary scenario. The main deliverables of the project were numerous, including the development of an online database which archived 1400 case report forms (700 cases and 700 matched controls) and more than 2700 experimental results relative to the BM-ICs assayed. The database also allowed for the traceability and retrieval of 21,000 aliquots archived in the centralized bank and stored as backup in the RUs, and for the development of a centralized biological bank in the coordinating unit with 6300 aliquots of serum. The constitution of the website and biobank database enabled optimal coordination of the RUs involved, highlighting the importance of sharing samples and scientific data in a multicenter setting for the achievement of the project goals.
Kenyon, S; Pike, K; Jones, D R; Brocklehurst, P; Marlow, N; Salt, A; Taylor, D J
2008-10-11
The ORACLE II trial compared the use of erythromycin and/or amoxicillin-clavulanate (co-amoxiclav) with that of placebo for women in spontaneous preterm labour and intact membranes, without overt signs of clinical infection, by use of a factorial randomised design. The aim of the present study--the ORACLE Children Study II--was to determine the long-term effects on children after exposure to antibiotics in this clinical situation. We assessed children at age 7 years born to the 4221 women who had completed the ORACLE II study and who were eligible for follow-up with a structured parental questionnaire to assess the child's health status. Functional impairment was defined as the presence of any level of functional impairment (severe, moderate, or mild) derived from the mark III Multi-Attribute Health Status classification system. Educational outcomes were assessed with national curriculum test results for children resident in England. Outcome was determined for 3196 (71%) eligible children. Overall, a greater proportion of children whose mothers had been prescribed erythromycin, with or without co-amoxiclav, had any functional impairment than did those whose mothers had received no erythromycin (658 [42.3%] of 1554 children vs 574 [38.3%] of 1498; odds ratio 1.18, 95% CI 1.02-1.37). Co-amoxiclav (with or without erythromycin) had no effect on the proportion of children with any functional impairment, compared with receipt of no co-amoxiclav (624 [40.7%] of 1523 vs 608 [40.0%] of 1520; 1.03, 0.89-1.19). No effects were seen with either antibiotic on the number of deaths, other medical conditions, behavioural patterns, or educational attainment. However, more children whose mothers had received erythromycin or co-amoxiclav developed cerebral palsy than did those born to mothers who received no erythromycin or no co-amoxiclav, respectively (erythromycin: 53 [3.3%] of 1611 vs 27 [1.7%] of 1562, 1.93, 1.21-3.09; co-amoxiclav: 50 [3.2%] of 1587 vs 30 [1.9%] of 1586, 1.69, 1.07-2.67). The number needed to harm with erythromycin was 64 (95% CI 37-209) and with co-amoxiclav 79 (42-591). The prescription of erythromycin for women in spontaneous preterm labour with intact membranes was associated with an increase in functional impairment among their children at 7 years of age. The risk of cerebral palsy was increased by either antibiotic, although the overall risk of this condition was low. UK Medical Research Council.
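As a worked check of the arithmetic reported above (a sketch using only the counts given in the abstract), the odds ratios and numbers needed to harm follow directly from the two-by-two tables: the number needed to harm is the reciprocal of the absolute risk increase.

```python
# Reproducing the cerebral-palsy figures from the reported counts.

def odds_ratio(a, n1, b, n2):
    """a/n1 events in the exposed arm, b/n2 events in the control arm."""
    return (a / (n1 - a)) / (b / (n2 - b))

def nnh(a, n1, b, n2):
    """Number needed to harm = 1 / (risk_exposed - risk_control)."""
    return 1 / (a / n1 - b / n2)

# Erythromycin vs no erythromycin (cerebral palsy at 7 years):
print(round(odds_ratio(53, 1611, 27, 1562), 2))  # ~1.93, as reported
print(round(nnh(53, 1611, 27, 1562)))            # ~64, as reported

# Co-amoxiclav vs no co-amoxiclav:
print(round(odds_ratio(50, 1587, 30, 1586), 2))  # ~1.69, as reported
print(round(nnh(50, 1587, 30, 1586)))            # ~79, as reported
```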
The relational clinical database: a possible solution to the star wars in registry systems.
Michels, D K; Zamieroski, M
1990-12-01
In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.
48 CFR 52.204-7 - Central Contractor Registration.
Code of Federal Regulations, 2011 CFR
2011-10-01
... (CCR) database means the primary Government repository for Contractor information required for the...) for the same concern. Registered in the CCR database means that— (1) The Contractor has entered all mandatory information, including the DUNS number or the DUNS+4 number, into the CCR database; and (2) The...
PubDNA Finder: a web database linking full-text articles to sequences of nucleic acids.
García-Remesal, Miguel; Cuevas, Alejandro; Pérez-Rey, David; Martín, Luis; Anguita, Alberto; de la Iglesia, Diana; de la Calle, Guillermo; Crespo, José; Maojo, Víctor
2010-11-01
PubDNA Finder is an online repository that we have created to link PubMed Central manuscripts to the sequences of nucleic acids appearing in them. It extends the search capabilities provided by PubMed Central by enabling researchers to perform advanced searches involving sequences of nucleic acids. This includes, among other features (i) searching for papers mentioning one or more specific sequences of nucleic acids and (ii) retrieving the genetic sequences appearing in different articles. These additional query capabilities are provided by a searchable index that we created by using the full text of the 176 672 papers available at PubMed Central at the time of writing and the sequences of nucleic acids appearing in them. To automatically extract the genetic sequences occurring in each paper, we used an original method we have developed. The database is updated monthly by automatically connecting to the PubMed Central FTP site to retrieve and index new manuscripts. Users can query the database via the web interface provided. PubDNA Finder can be freely accessed at http://servet.dia.fi.upm.es:8080/pubdnafinder
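The paper's own extraction method is not described in this abstract, so the following is only a naive baseline sketch of the task it performs: scan full text for long runs of nucleotide letters (tolerating the spaces and position numbers common in sequence listings) and normalize the hits.

```python
import re

# Naive candidate detector: a run of >= 20 A/C/G/T/U bases, possibly
# interleaved with whitespace or digits from formatted sequence listings.
CANDIDATE = re.compile(r"[ACGTUacgtu](?:[\s\d]*[ACGTUacgtu]){19,}")

def extract_sequences(fulltext: str) -> list[str]:
    """Return cleaned candidate nucleic-acid sequences found in full text."""
    hits = []
    for m in CANDIDATE.finditer(fulltext):
        cleaned = re.sub(r"[\s\d]", "", m.group()).upper()
        hits.append(cleaned)
    return hits

sample = "primer 5'-ACGT ACGT ACGT ACGT ACGT-3' was used in all reactions"
print(extract_sequences(sample))   # ['ACGTACGTACGTACGTACGT']
```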
[Plug-in Based Centralized Control System in Operating Rooms].
Wang, Yunlong
2017-05-30
Centralized equipment control in an operating room (OR) is crucial to an efficient OR workflow. To achieve centralized control, an integrative OR needs a control panel design that can appropriately incorporate equipment from different manufacturers with various connecting ports and controls. Here we propose to achieve equipment integration using plug-in modules. Each OR will be equipped with a dynamic plug-in control panel containing physically removable connecting ports, with matching outlets installed on the control panels of each piece of equipment in use at any given time. This dynamic control panel will be backed by a database of plug-in modules that can connect any two types of connecting ports common among medical equipment manufacturers; the correct connecting ports will be called using reflection dynamics. The database will be updated regularly to include new connecting ports on the market, making the system easy to maintain, update, and expand, and keeping it relevant as new equipment is developed. Together, the physical panel and the database will achieve centralized equipment control in the OR that can be easily adapted to any equipment in the OR.
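A minimal sketch of this plug-in pattern (the module paths, class names, and port types below are hypothetical placeholders, not details from the paper): a registry database maps a pair of port types to an adapter implementation, which is then loaded reflectively at runtime.

```python
import importlib

# Hypothetical registry: (equipment port, panel port) -> "module:Class".
PLUGIN_DB = {
    ("RS232", "USB"): "plugins.rs232_usb:Rs232ToUsbAdapter",
    ("HDMI",  "SDI"): "plugins.hdmi_sdi:HdmiToSdiAdapter",
}

def load_adapter(src_port: str, dst_port: str):
    """Reflectively load and instantiate the adapter for this port pair."""
    try:
        spec = PLUGIN_DB[(src_port, dst_port)]
    except KeyError:
        raise LookupError(f"no plug-in registered for {src_port} -> {dst_port}")
    module_name, class_name = spec.split(":")
    module = importlib.import_module(module_name)   # the reflection step
    return getattr(module, class_name)()            # instantiate the adapter

# adapter = load_adapter("RS232", "USB")  # would import plugins.rs232_usb
```

New port pairs are supported by adding rows to the registry and shipping the corresponding plug-in module; no change to the panel code itself is needed.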
Changes in Patterns of Teacher Interaction in Primary Classrooms: 1976-96.
ERIC Educational Resources Information Center
Galton, Maurice; Hargreaves, Linda; Comber, Chris; Wall, Debbie; Pell, Tony
1999-01-01
Addresses the effectiveness of teaching methods in English primary school classrooms. Evaluates the interventions designed to change primary educators' teaching methods by replicating the Observational Research and Classroom Learning Evaluation (ORACLE) study that was originally conducted in 1976. Compares the results from the original study to…
Change and Continuity in the Primary School: The Research Evidence.
ERIC Educational Resources Information Center
Galton, Maurice
1987-01-01
This article reviews findings from a 1975 through 1980 study called ORACLE (Observational Research and Classroom Learning Evaluation). Maintains that the data showed only partial implementation of the Plowden report recommendations. Seeks to explain the reasons for inconsistencies in implementation and offers suggestions for redefining progressive…
ERIC Educational Resources Information Center
Richardson, Ian M.
1990-01-01
A possible syllabus for English for Science and Technology is suggested based upon a set of causal relations, arising from a logical description of the presuppositional rhetoric of scientific passages that underlie most semantic functions. An empirical study is reported of the semantic functions present in 52 randomly selected passages.…
Oracle or Monacle: Research Concerning Attitudes Toward Feminism.
ERIC Educational Resources Information Center
Prescott, Suzanne; Schmid, Margaret
Both popular studies and more serious empirical studies of attitudes toward feminism are reviewed beginning with Clifford Kirkpatrick's early empirical work and including the more recent empirical studies completed since 1970. The review examines the contents of items used to measure feminism, and the methodology and sampling used in studies, as…
ERIC Educational Resources Information Center
McLester, Susan
2005-01-01
For many teachers and students, it was ThinkQuest that gave them their first experiences with the Internet. A collaborative competition, created by Advanced Network & Services (now managed by the Oracle Education Foundation), ThinkQuest asked students to form teams and create Web sites that were designed to help other students learn something. In…
Computer-Assisted Learning in Language Arts
ERIC Educational Resources Information Center
Serwer, Blanche L.; Stolurow, Lawrence M.
1970-01-01
A description of computer program segments in the feasibility and development phase of Operationally Relevant Activities for Children's Language Experience (Project ORACLE); original form of this paper was prepared by Serwer for presentation to annual meeting of New England Research Association (1st, Boston College, June 5-6, 1969). (Authors/RD)
Oracle Academy: Four Success Stories Model the Competitive Edge of CTE
ERIC Educational Resources Information Center
Techniques: Connecting Education and Careers, 2004
2004-01-01
"Vocational Education's" transformation to "Career and Technical Education" (CTE) is clearly underway as evidenced by the increased number of advanced topics and knowledge depth being taught in courses today. Technology advances across all industry sectors impact all CTE departments--from Agricultural Science to Business and Marketing. Building…
Teletext--Prestel's Big Brother.
ERIC Educational Resources Information Center
Hughes, Geoffrey
Prestel, Oracle, and Ceefax are telephone based video text systems currently in use in Great Britain. Rather than being considered as competitors, they should be viewed as complementary media with separate functions based on their differences. All use home television sets to receive information in print, and all broadcast on spare TV lines in the…
Antibiotics in agroecosystems: state of the science “ARASOS” workshop summary
USDA-ARS?s Scientific Manuscript database
In August 2014, the Biosphere2 Conference Center in Oracle, Arizona was the site of a sunny three-day workshop on “Antibiotics in Agroecosystems: State of the Science.” Attended by 42 people, participants included scientists and others from academia, government agencies (including EPA and USDA), an...
A Pioneer in Online Education Tries a MOOC
ERIC Educational Resources Information Center
Kirschner, Ann
2012-01-01
Surely "massive open online course" (MOOC) has one of the ugliest acronyms of recent years, lacking the deliberate playfulness of Yahoo (Yet Another Hierarchical Officious Oracle) or the droll shoulder shrug suggested by the word "snafu" (Situation Normal, All Fouled Up). The author is not a complete neophyte to online…
America’s Heroes -- Preparing Our Vets for Civilian Employment
2012-04-16
... business environment. Oracle's Injured Veteran Job and Training Program offers on-the-job training for soldiers wounded in the Afghanistan and Iraq wars... [remainder of excerpt is footnote-link residue]
Risk and the Internet of Things: Damocles, Pythia, or Pandora?
Showell, Chris
2016-01-01
The Internet of Things holds great promise for healthcare, but also embodies a number of risks. This analysis suggests that the risks are as yet poorly delineated (having features in common with the oracle Pythia, and with Pandora and her box), and that adopting the precautionary principle is appropriate.
NASA Astrophysics Data System (ADS)
Liu, Kelly H.; Elsheikh, Ahmed; Lemnifi, Awad; Purevsuren, Uranbaigal; Ray, Melissa; Refayee, Hesham; Yang, Bin B.; Yu, Youqiang; Gao, Stephen S.
2014-05-01
We present a shear wave splitting (SWS) database for the western and central United States as part of an ongoing effort to build a uniform SWS database for all of North America. The SWS measurements were obtained by minimizing the energy on the transverse component of the PKS, SKKS, and SKS phases. Each individual measurement was visually checked to ensure quality. This version of the database contains 16,105 pairs of splitting parameters. The data used to generate the parameters were recorded by 1774 digital broadband seismic stations over the period 1989-2012 and represent all the available data from both permanent and portable seismic networks archived at the Incorporated Research Institutions for Seismology Data Management Center in the area 26.00°N to 50.00°N and 125.00°W to 90.00°W. About 10,000 pairs of the measurements were from the 1092 USArray Transportable Array stations. The results show that approximately 2/3 of the fast orientations are within 30° of the absolute plate motion (APM) direction of the North American plate, and most of the largest departures from the APM are located along the eastern boundary of the western US orogenic zone and in the central Great Basins. The splitting times observed in the western US are larger than, and those in the central US are comparable with, the global average of 1.0 s. The uniform database has an unprecedented spatial coverage and can be used for various investigations of the structure and dynamics of the Earth.
Centralized Data Management in a Multicountry, Multisite Population-based Study.
Rahman, Qazi Sadeq-ur; Islam, Mohammad Shahidul; Hossain, Belal; Hossain, Tanvir; Connor, Nicholas E; Jaman, Md Jahiduj; Rahman, Md Mahmudur; Ahmed, A S M Nawshad Uddin; Ahmed, Imran; Ali, Murtaza; Moin, Syed Mamun Ibne; Mullany, Luke; Saha, Samir K; El Arifeen, Shams
2016-05-01
A centralized data management system was developed for data collection and processing for the Aetiology of Neonatal Infection in South Asia (ANISA) study. ANISA is a longitudinal cohort study involving neonatal infection surveillance and etiology detection in multiple sites in South Asia. The primary goal of designing such a system was to collect and store data from different sites in a standardized way to pool the data for analysis. We designed the data management system centrally and implemented it to enable data entry at individual sites. This system uses validation rules and audit that reduce errors. The study sites employ a dual data entry method to minimize keystroke errors. They upload collected data weekly to a central server via internet to create a pooled central database. Any inconsistent data identified in the central database are flagged and corrected after discussion with the relevant site. The ANISA Data Coordination Centre in Dhaka provides technical support for operations, maintenance and updating the data management system centrally. Password-protected login identifications and audit trails are maintained for the management system to ensure the integrity and safety of stored data. Centralized management of the ANISA database helps to use common data capture forms (DCFs), adapted to site-specific contextual requirements. DCFs and data entry interfaces allow on-site data entry. This reduces the workload as DCFs do not need to be shipped to a single location for entry. It also improves data quality as all collected data from ANISA goes through the same quality check and cleaning process.
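As a sketch of the dual data entry check described above (the field names are illustrative, not ANISA's actual DCF schema), the two independent keyings of a form are compared field by field and any disagreement is flagged for resolution with the site:

```python
# Minimal dual-entry comparison: flag fields where two keyings disagree.

def compare_entries(entry1: dict, entry2: dict) -> list[str]:
    """Return the fields where the two independent keyings disagree."""
    mismatches = []
    for field in entry1.keys() | entry2.keys():
        if entry1.get(field) != entry2.get(field):
            mismatches.append(field)
    return sorted(mismatches)

keyer_a = {"form_id": "DCF-001", "birth_weight_g": 2850, "sex": "F"}
keyer_b = {"form_id": "DCF-001", "birth_weight_g": 2580, "sex": "F"}

flags = compare_entries(keyer_a, keyer_b)
print(flags or "entries match")   # ['birth_weight_g'] -> resolve with the site
```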
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-29
... Federal Acquisition Regulation; Updates to Contract Reporting and Central Contractor Registration AGENCIES... Procurement Data System (FPDS). Additionally, changes are proposed for the clauses requiring contractor registration in the Central Contractor Registration (CCR) database and DUNS number reporting. DATES: Interested...
Blending Education and Polymer Science: Semiautomated Creation of a Thermodynamic Property Database
ERIC Educational Resources Information Center
Tchoua, Roselyne B.; Qin, Jian; Audus, Debra J.; Chard, Kyle; Foster, Ian T.; de Pablo, Juan
2016-01-01
Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The…
Acupuncture for treating sciatica: a systematic review protocol
Qin, Zongshi; Liu, Xiaoxu; Yao, Qin; Zhai, Yanbing; Liu, Zhishun
2015-01-01
Introduction This systematic review aims to assess the effectiveness and safety of acupuncture for treating sciatica. Methods The following nine databases will be searched from their inception to 30 October 2014: MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials (CENTRAL), the Chinese Biomedical Literature Database (CBM), the Chinese Medical Current Content (CMCC), the Chinese Scientific Journal Database (VIP database), the Wan-Fang Database, the China National Knowledge Infrastructure (CNKI) and Citation Information by National Institute of Informatics (CiNii). Randomised controlled trials (RCTs) of acupuncture for sciatica in English, Chinese or Japanese without restriction of publication status will be included. Two researchers will independently undertake study selection, extraction of data and assessment of study quality. Meta-analysis will be conducted after screening of studies. Data will be analysed using risk ratio for dichotomous data, and standardised mean difference or weighted mean difference for continuous data. Dissemination This systematic review will be disseminated electronically through a peer-reviewed publication or conference presentations. Trial registration number PROSPERO CRD42014015001. PMID:25922105
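For reference, the two effect measures named in the protocol have simple closed forms (illustrative implementations only; the review itself would rely on standard meta-analysis tooling):

```python
import math

def risk_ratio(events_t, n_t, events_c, n_c):
    """RR = risk in treatment arm / risk in control arm (dichotomous data)."""
    return (events_t / n_t) / (events_c / n_c)

def standardized_mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """SMD = (mean_t - mean_c) / pooled SD (continuous data)."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

print(risk_ratio(30, 100, 45, 100))                              # 0.667
print(standardized_mean_difference(3.1, 1.2, 50, 4.0, 1.3, 50))  # about -0.72
```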
Development of spatial scaling technique of forest health sample point information
NASA Astrophysics Data System (ADS)
Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.
2017-12-01
Most forest health assessments are limited to monitoring sampling sites. Forest health monitoring in Britain was carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with the data held in a database built with the Oracle database program. The Forest Health Assessment in GreatBay in the United States was conducted to identify the characteristics of the ecosystem populations of each area, based on evaluation of forest health by tree species, diameter at breast height, water pipe and density in the summer and fall of 200. In the case of Korea, the first evaluation report on forest health vitality placed 1000 sample points in forests on a systematic 4 km × 4 km grid and surveyed 29 items in four categories: tree health, vegetation, soil, and atmosphere. Existing research has thus been conducted through monitoring of these survey sample points, which makes it difficult to collect the information needed to support policies customized to regional sites. Special forests such as urban forests and major forests require policy and management appropriate to their characteristics, so the set of survey points for diagnosing and evaluating customized forest health needs to be expanded. For this reason, we constructed a spatial scaling method using spatial interpolation according to the characteristics of each of the 29 indices in the four categories of the first forest health vitality diagnosis and evaluation report; PCA and correlation analysis are conducted to construct significant indicators, weights are then selected for each index, and forest health is evaluated through statistical grading.
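A minimal sketch of the interpolation step (the abstract does not fix a particular interpolator, so inverse-distance weighting is our illustrative assumption): index values observed at the 4 km grid sample points are used to estimate values at unsampled locations.

```python
import numpy as np

def idw(xy_samples: np.ndarray, values: np.ndarray,
        xy_query: np.ndarray, power: float = 2.0) -> np.ndarray:
    """Estimate values at xy_query from (x, y) sample points via
    inverse-distance weighting."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_samples[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)            # avoid division by zero at sample points
    w = 1.0 / d**power
    return (w @ values) / w.sum(axis=1)

# Four sample points of one health index (e.g. a tree-health score), 4 km apart:
pts   = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], dtype=float)
vals  = np.array([0.8, 0.6, 0.7, 0.5])
query = np.array([[2.0, 2.0], [1.0, 3.0]])
print(idw(pts, vals, query))           # interpolated index values
```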
XML: James Webb Space Telescope Database Issues, Lessons, and Status
NASA Technical Reports Server (NTRS)
Detter, Ryan; Mooney, Michael; Fatig, Curtis
2003-01-01
This paper will present the current concept of using eXtensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. Testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will therefore be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of JWST development, 24 distinct, geographically dispersed laboratories will have local database tools with an XML database. Each laboratory's database tools will be used for exporting and importing data both locally and to a central database system, inputting data to the database certification process, and producing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI) in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow individual items to be upgraded, added, or changed without affecting the entire ground system. Using XML should also allow the import and export formats needed by the various elements to be altered, the verification/validation of each database item to be tracked, many organizations to provide database inputs, and the many existing database processes to be merged into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology; this often leads to greater reliance on Commercial-Off-The-Shelf (COTS) software, which can be limiting. In our review of the database requirements and the COTS software available, only very expensive COTS software would meet 90% of the requirements. Even with the high projected initial cost of COTS, the cost of developing and supporting custom code over the 19-year mission period was forecast to be higher than the total licensing costs. A group also looked at reusing existing database tools and formats. If the JWST database were already in a mature state, reuse would make sense, but with the database still needing to handle the addition of new types of command and telemetry structures, the definition of new spacecraft systems, and input from and export to systems that have not yet been defined, XML provided the desired flexibility. It remains to be determined whether the XML database will reduce the overall cost of the JWST mission.
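As an illustration of the approach (the element and attribute names below are hypothetical, not the actual JWST schema), command and telemetry definitions expressed in XML can be parsed by any laboratory's tool, and elements a tool does not recognize can simply be ignored:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML command/telemetry definitions in the spirit described
# above: each item type is its own element, so new types can be added
# without breaking tools that predate them.
SAMPLE = """
<database version="0.1">
  <telemetry mnemonic="TEMP_SENSOR_1" type="float" units="K">
    <limits red_low="80" red_high="320"/>
  </telemetry>
  <command mnemonic="HEATER_ON" opcode="0x1A2B">
    <param name="zone" type="uint8"/>
  </command>
</database>
"""

root = ET.fromstring(SAMPLE)
for tlm in root.iter("telemetry"):
    lim = tlm.find("limits")
    print(tlm.get("mnemonic"), tlm.get("units"),
          lim.get("red_low"), lim.get("red_high"))
```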
ERIC Educational Resources Information Center
Rosenthal, Raymond, Ed.
Twenty-one critical essays on the ideas and works of Marshall McLuhan are offered in this review. In the course of the essays, McLuhan is characterized alternately as a genius, an extrapolator, and an oracle; a defender of the choice of choicelessness; a generalizer who rearranges, misinterprets, and misreads the facts to support his theses; an…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hang Bae
Reliability testing was performed for the software of the Shutdown System (SDS) computers for Wolsong Nuclear Power Plant Units 2, 3 and 4. Test profiles were applied to the SDS computers and the outputs were compared with the predicted results generated by the oracle. Test software was written to execute the tests automatically, and random test profiles were generated using an analysis code. 11 refs., 1 fig.
PROVIDE: A Pedagogical Reference Oracle for Virtual IntegrateD E-ducation
ERIC Educational Resources Information Center
Narasimhan, V. Lakshmi; Zhao, Shuxin; Liang, Hailong; Zhang, Shuangyi
2006-01-01
This paper presents an interactive educational environment for use over both "in situ" and distance-based modalities of teaching. Several technological issues relating to the design and development of the distributed virtual learning environment have also been raised. The PROVIDE framework proposed in this paper is a seamless distributed…
From the Oracles of the Temple of Janus: "Chicago Teachers Union v. Hudson."
ERIC Educational Resources Information Center
Vieira, Edwin, Jr.
1986-01-01
Examines "Chicago Teachers Union v. Hudson," a United States Supreme Court decision guaranteeing non-union government workers specific protections of procedural due process that certain educational and teacher unions had failed to recognize. Decries the "Hudson" decision for separating labor law from laws governing the rest of…
Medicine and the Silent Oracle: An Exercise in Uncertainty
ERIC Educational Resources Information Center
Belling, Catherine
2006-01-01
This article describes a simple in-class exercise in reading and writing that, by asking participants to write their own endings for a short narrative taken from the "Journal of the American Medical Association," prompts them to reflect on the problem of uncertainty in medicine and to apply the literary-critical techniques of close…
Pedagogy, Performance and Politics: Issues in the Study of Primary Education.
ERIC Educational Resources Information Center
Campbell, R. J.; Kyriakides, L.
2000-01-01
Presents an essay review of Inside the Primary Classroom, 20 Years On, which reports on a 1976 study of teacher-student interaction in English elementary classrooms (the ORACLE project) and discusses the next 20 years of elementary education in England, examining curriculum reform, educational policy, educational research, and related politics.…
Practical Considerations for Conducting Delphi Studies: The Oracle Enters a New Age.
ERIC Educational Resources Information Center
Eggers, Renee M.; Jones, Charles M.
1998-01-01
In addition to giving an overview of Delphi methodology and describing the methodology used by the researchers in two Delphi studies, the authors provide information about electronic communication in Delphi studies. Also provided are suggestions that can be used in a Delphi study involving any form of communication. (SLD)
Fundamental problems in provable security and cryptography.
Dent, Alexander W
2006-12-15
This paper examines methods for formally proving the security of cryptographic schemes. We show that, despite many years of active research and dozens of significant results, there are fundamental problems which have yet to be solved. We also present a new approach to one of the more controversial aspects of provable security, the random oracle model.
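For readers unfamiliar with the term, the random oracle model idealizes a hash function as a truly random function; a standard way to simulate one (a textbook illustration, not a construction from the paper) is lazy sampling, where each fresh query gets a uniformly random answer that is then fixed for all repeat queries.

```python
import os

class RandomOracle:
    """Lazily sampled random oracle: fresh uniform digest per new query,
    stored answers replayed on repeated queries."""

    def __init__(self, digest_bytes: int = 32):
        self.digest_bytes = digest_bytes
        self.table: dict[bytes, bytes] = {}   # query -> fixed random answer

    def query(self, message: bytes) -> bytes:
        if message not in self.table:
            self.table[message] = os.urandom(self.digest_bytes)
        return self.table[message]

ro = RandomOracle()
assert ro.query(b"hello") == ro.query(b"hello")  # consistent on repeats
assert ro.query(b"hello") != ro.query(b"world")  # distinct (w.h.p.)
```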
Oracle Bones and Mandarin Tones: Demystifying the Chinese Language.
ERIC Educational Resources Information Center
Michigan Univ., Ann Arbor. Project on East Asian Studies in Education.
This publication will provide secondary level students with a basic understanding of the development and structure of Chinese (guo yu) language characters. The authors believe that demystifying the language helps break many cultural barriers. The written language is a good place to begin because its pictographic nature is appealing and inspires…
Early Development of Demonstratives in Pre-Qin Chinese
ERIC Educational Resources Information Center
Deng, Lin
2011-01-01
This dissertation offers a new dynamic account of the evolution of the demonstrative system in pre-Qin Chinese based on a comprehensive linguistic analysis of the phonological, morphological, syntactic, semantic, and pragmatic aspects of demonstratives attested in two corpora of excavated texts, i.e. the oracle-bone inscriptions dated to the late…