Sample records for SQL electronic resource

  1. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database approaches for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature compare them directly with relational databases when building the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity executed against them. Comparable results available in the literature were also considered. Both relational and NoSQL database systems show almost linear query execution time; however, their linear slopes differ greatly, the relational slope being much steeper than the other two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. NoSQL databases seem more appropriate than standard relational SQL databases when database size is extremely large (secondary use, research applications). Document-based NoSQL databases generally perform better than native XML NoSQL databases. EHR extract visualization and editing are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on the particular situation and specific problem.
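    The slope comparison described above can be sketched numerically: fit a least-squares line to mean response time versus database size for each back end and compare slopes. The database sizes below follow the study design, but the timings are hypothetical illustrative numbers, not the paper's measurements.

```python
# Sketch: compare how query response time scales with database size for a
# relational and a document-based NoSQL back end. Timings are invented.

def linear_slope(sizes, times):
    """Least-squares slope of response time vs. database size."""
    n = len(sizes)
    mean_x = sum(sizes) / n
    mean_y = sum(times) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, times))
    den = sum((x - mean_x) ** 2 for x in sizes)
    return num / den

sizes = [5000, 10000, 20000]           # number of stored EHR extracts
relational_ms = [120.0, 260.0, 510.0]  # hypothetical mean query times (ms)
document_ms = [40.0, 75.0, 150.0]      # hypothetical mean query times (ms)

rel_slope = linear_slope(sizes, relational_ms)
doc_slope = linear_slope(sizes, document_ms)
print(rel_slope > doc_slope)  # the relational slope is steeper
```

    Both series are close to linear, so a single slope per back end is a reasonable summary; the steeper slope marks the system that degrades faster as the database grows.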

  2. Improved Information Retrieval Performance on SQL Database Using Data Adapter

    NASA Astrophysics Data System (ADS)

    Husni, M.; Djanali, S.; Ciptaningtyas, H. T.; Wicaksana, I. G. N. A.

    2018-02-01

    NoSQL databases, short for Not Only SQL, are increasingly used as the number of big data applications grows. Most systems still use relational databases (RDBs), but as data volumes increase each year, systems turn to NoSQL databases to analyze and access big data more quickly. NoSQL emerged as a result of the exponential growth of the internet and the development of web applications. Query syntax in a NoSQL database differs from that of an SQL database, normally requiring code changes in the application. A data adapter allows applications to keep their SQL query syntax unchanged: it provides methods to synchronize the SQL database with the NoSQL database, along with an interface through which applications can run SQL queries. This research applied a data adapter system to synchronize data between a MySQL database and Apache HBase using a direct access query approach, in which the system continues to accept application queries while the synchronization process is in progress. Tests showed that the data adapter can synchronize the SQL database (MySQL) with the NoSQL database (Apache HBase). The system's memory usage ranged from 40% to 60%, and its processor usage varied from 10% to 90%. The tests also showed the NoSQL database performing better than the SQL database.
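    The adapter idea can be sketched as a write-through layer that records each change and replays it to the secondary store later, so the application keeps issuing queries even before a sync completes. Both stores below are in-memory stand-ins, and the class and method names are illustrative, not from the paper's implementation.

```python
# Minimal sketch of a "data adapter": the application writes to the primary
# (relational-style) store immediately, while changes are queued and later
# replayed to the secondary (NoSQL-style) store.
import queue

class DataAdapter:
    def __init__(self):
        self.primary = {}            # stands in for the MySQL side
        self.secondary = {}          # stands in for the HBase side
        self.pending = queue.Queue() # changes awaiting synchronization

    def put(self, key, row):
        """Write through the adapter: primary now, secondary later."""
        self.primary[key] = row
        self.pending.put((key, row))

    def sync(self):
        """Drain queued changes into the secondary store."""
        while not self.pending.empty():
            key, row = self.pending.get()
            self.secondary[key] = row

adapter = DataAdapter()
adapter.put("patient:1", {"name": "A"})
adapter.put("patient:2", {"name": "B"})      # accepted before any sync runs
adapter.sync()
print(adapter.secondary == adapter.primary)  # True
```

    A real adapter would translate SQL statements into the NoSQL store's API during replay; the queue is what lets writes keep flowing while synchronization is in progress.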

  3. Computing and Communications Infrastructure for Network-Centric Warfare: Exploiting COTS, Assuring Performance

    DTIC Science & Technology

    2004-06-01

    remote databases, has seen little vendor acceptance. Each database (Oracle, DB2, MySQL, etc.) has its own client-server protocol. Therefore each... existing standards – SQL, X.500/LDAP, FTP, etc. • View information dissemination as selective replication – state-oriented vs. message-oriented... allowing the application to start. The resource management system would serve as a broker to the resources, making sure that resources are not

  4. DOMe: A deduplication optimization method for the NewSQL database backups

    PubMed Central

    Wang, Longxiang; Zhu, Zhengdong; Zhang, Xingjun; Wang, Yinfeng

    2017-01-01

    Reducing duplicated data in database backups is an important application scenario for data deduplication technology. NewSQL is an emerging class of database system that is being used more and more widely. NewSQL systems need to improve data reliability by periodically backing up in-memory data, which produces a large amount of duplicated data. Traditional deduplication methods are not optimized for NewSQL server systems and cannot take full advantage of hardware resources to optimize deduplication performance. Recent research has pointed out that future NewSQL servers will have thousands of CPU cores, large DRAM and huge NVRAM, so how to utilize these hardware resources to optimize deduplication performance is an important issue. To solve this problem, we propose a deduplication optimization method (DOMe) for NewSQL system backup. To exploit the large number of CPU cores in the NewSQL server, DOMe parallelizes the deduplication method based on the fork-join framework. The fingerprint index, the key data structure in the deduplication process, is implemented as a pure in-memory hash table, which makes full use of the large DRAM in the NewSQL system and eliminates the fingerprint-index performance bottleneck of traditional deduplication methods. H-Store is used as a typical NewSQL database system to implement the DOMe method, and DOMe is evaluated experimentally with two representative backup datasets. The results show that: 1) DOMe can reduce duplicated NewSQL backup data; 2) DOMe significantly improves deduplication performance by parallelizing the CDC algorithm (where the theoretical speedup ratio of the server is 20.8, DOMe achieves a speedup of up to 18); 3) DOMe improves deduplication throughput by 1.5 times through the pure in-memory index optimization. PMID:29049307
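    The core pipeline described above can be sketched as: split the backup stream into chunks, fingerprint the chunks in parallel (the fork-join step), and look each fingerprint up in a pure in-memory hash table. Fixed-size chunking is used here as a simple stand-in for the CDC algorithm in the paper.

```python
# Sketch of parallel deduplication with an in-memory fingerprint index.
# Fixed-size chunking substitutes for content-defined chunking (CDC).
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4096

def fingerprint(chunk: bytes) -> str:
    return hashlib.sha1(chunk).hexdigest()

def deduplicate(data: bytes):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor() as pool:  # fork-join style parallel hashing
        digests = list(pool.map(fingerprint, chunks))
    index = {}                          # pure in-memory fingerprint index
    unique = []
    for digest, chunk in zip(digests, chunks):
        if digest not in index:         # store only first-seen chunks
            index[digest] = len(unique)
            unique.append(chunk)
    return unique, digests

backup = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # duplicated content
unique, digests = deduplicate(backup)
print(len(digests), len(unique))  # 4 chunks stored as 2 unique chunks
```

    Only the hashing is parallelized here; the index lookups stay sequential, which mirrors why the fingerprint index must be fast enough (in-memory) not to become the new bottleneck.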

  5. An improved resource management model based on MDS

    NASA Astrophysics Data System (ADS)

    Yuan, Man; Sun, Changying; Li, Pengfei; Sun, Yongdong; He, Rui

    2005-11-01

    GRID technology provides a convenient method for managing GRID resources through the so-called Monitoring and Discovery Service (MDS), proposed by the Globus Alliance. In this GRID environment, all kinds of resources, such as computational resources, storage resources and others, can be organized according to MDS specifications. However, MDS is a theoretical framework, and in a small intranet with limited resources it has its own limitations. Based on MDS, an improved lightweight method for managing corporate computational and storage resources in an intranet (IMDS) is proposed. Firstly, in MDS, all resource description information is stored in LDAP. Although LDAP is a lightweight directory access protocol, in practice programmers rarely master how to access and store resource information in an LDAP store, which limits the use of MDS. In an intranet, this resource description information can instead be stored in an RDBMS, where programmers and users can access it with standard SQL. Secondly, in MDS, how the various resources are monitored is not transparent to programmers and users, which limits its application scope. Resource monitoring based on SNMP is widely employed in intranets, so an SNMP-based resource monitoring method is integrated into MDS. Finally, all resources in the intranet can be described in XML, their description information stored in an RDBMS such as MySQL and retrieved with standard SQL, and dynamic resource information sent to the resource store via SNMP. A prototype for resource description and monitoring is designed and implemented in an intranet.

  6. Executing Complexity-Increasing Queries in Relational (MySQL) and NoSQL (MongoDB and EXist) Size-Growing ISO/EN 13606 Standardized EHR Databases

    PubMed Central

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2018-01-01

    This research shows a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (Not Only Structured Query Language)) standardized electronic health record (EHR) medical information database systems. It uses a set of three doubling-sized databases, i.e., databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL with object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed linear behavior in the NoSQL cases. Within the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate for maintaining standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results for improved relational systems such as archetype relational mapping (ARM) on the same data. However, interpolating the doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, editing and visualization, or situations where the aim is not only to query medical information but also to restore the EHR in exactly its original form. PMID:29608174

  7. Executing Complexity-Increasing Queries in Relational (MySQL) and NoSQL (MongoDB and EXist) Size-Growing ISO/EN 13606 Standardized EHR Databases.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2018-03-19

    This research shows a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (Not Only Structured Query Language)) standardized electronic health record (EHR) medical information database systems. It uses a set of three doubling-sized databases, i.e., databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL with object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed linear behavior in the NoSQL cases. Within the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate for maintaining standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results for improved relational systems such as archetype relational mapping (ARM) on the same data. However, interpolating the doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, editing and visualization, or situations where the aim is not only to query medical information but also to restore the EHR in exactly its original form.

  8. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    PubMed

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in developing the databases. The results show that the NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to conventional relational databases, both demonstrate potential to become key database technologies for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  9. Development and validation of a structured query language implementation of the Elixhauser comorbidity index.

    PubMed

    Epstein, Richard H; Dexter, Franklin

    2017-07-01

    Comorbidity adjustment is often performed during outcomes and health care resource utilization research. Our goal was to develop an efficient algorithm in structured query language (SQL) to determine the Elixhauser comorbidity index. We wrote an SQL algorithm to calculate the Elixhauser comorbidities from Diagnosis Related Group and International Classification of Diseases (ICD) codes. Validation was by comparison to expected comorbidities from combinations of these codes and to the 2013 Nationwide Readmissions Database (NRD). The SQL algorithm matched perfectly with expected comorbidities for all combinations of ICD-9 or ICD-10, and Diagnosis Related Groups. Of 13 585 859 evaluable NRD records, the algorithm matched 100% of the listed comorbidities. Processing time was ∼0.05 ms/record. The SQL Elixhauser code was efficient and computationally identical to the SAS algorithm used for the NRD. This algorithm may be useful where preprocessing of large datasets in a relational database environment and comorbidity determination is desired before statistical analysis. A validated SQL procedure to calculate Elixhauser comorbidities and the van Walraven index from ICD-9 or ICD-10 discharge diagnosis codes has been published. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
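    The flavor of such an SQL comorbidity algorithm can be sketched as a single CASE-based pass over the diagnosis table, emitting one 0/1 flag column per comorbidity. The two ICD-9 prefixes below only illustrate the mapping style; the real Elixhauser index uses the full published code lists, and this is not the authors' actual code.

```python
# Sketch: derive comorbidity flags from discharge diagnosis codes in one
# SQL pass, using SQLite as a stand-in relational database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE diagnoses (record_id INTEGER, icd9 TEXT);
INSERT INTO diagnoses VALUES (1, '4280'), (1, '25000'), (2, '25000');
""")

# One row per record; MAX(CASE ...) collapses multiple diagnoses per record
# into a single 0/1 flag for each comorbidity category.
rows = conn.execute("""
SELECT record_id,
       MAX(CASE WHEN icd9 LIKE '428%' THEN 1 ELSE 0 END) AS chf,
       MAX(CASE WHEN icd9 LIKE '250%' THEN 1 ELSE 0 END) AS diabetes
FROM diagnoses
GROUP BY record_id
ORDER BY record_id
""").fetchall()
print(rows)  # [(1, 1, 1), (2, 0, 1)]
```

    Doing the flagging inside the database, as the abstract suggests, avoids exporting millions of rows before statistical analysis; the set-based GROUP BY formulation is also what makes per-record processing fast.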

  10. ExplorEnz: a MySQL database of the IUBMB enzyme nomenclature

    PubMed Central

    McDonald, Andrew G; Boyce, Sinéad; Moss, Gerard P; Dixon, Henry BF; Tipton, Keith F

    2007-01-01

    Background We describe the database ExplorEnz, which is the primary repository for EC numbers and enzyme data that are being curated on behalf of the IUBMB. The enzyme nomenclature is incorporated into many other resources, including the ExPASy-ENZYME, BRENDA and KEGG bioinformatics databases. Description The data, which are stored in a MySQL database, preserve the formatting of chemical and enzyme names. A simple, easy-to-use, web-based query interface is provided, along with an advanced search engine for more complex queries. The database is publicly available at http://www.enzyme-database.org. The data are available for download as SQL and XML files via FTP. Conclusion ExplorEnz has powerful and flexible search capabilities and provides the scientific community with the most up-to-date version of the IUBMB Enzyme List. PMID:17662133

  11. ExplorEnz: a MySQL database of the IUBMB enzyme nomenclature.

    PubMed

    McDonald, Andrew G; Boyce, Sinéad; Moss, Gerard P; Dixon, Henry B F; Tipton, Keith F

    2007-07-27

    We describe the database ExplorEnz, which is the primary repository for EC numbers and enzyme data that are being curated on behalf of the IUBMB. The enzyme nomenclature is incorporated into many other resources, including the ExPASy-ENZYME, BRENDA and KEGG bioinformatics databases. The data, which are stored in a MySQL database, preserve the formatting of chemical and enzyme names. A simple, easy to use, web-based query interface is provided, along with an advanced search engine for more complex queries. The database is publicly available at http://www.enzyme-database.org. The data are available for download as SQL and XML files via FTP. ExplorEnz has powerful and flexible search capabilities and provides the scientific community with the most up-to-date version of the IUBMB Enzyme List.

  12. The Primary Care Electronic Library: RSS feeds using SNOMED-CT indexing for dynamic content delivery.

    PubMed

    Robinson, Judas; de Lusignan, Simon; Kostkova, Patty; Madge, Bruce; Marsh, A; Biniaris, C

    2006-01-01

    Rich Site Summary (RSS) feeds are a method for disseminating and syndicating the contents of a website using extensible mark-up language (XML). The Primary Care Electronic Library (PCEL) distributes recent additions to the site in the form of an RSS feed. When new resources are added to PCEL, they are manually assigned medical subject headings (MeSH terms), which are then automatically mapped to SNOMED-CT terms using the Unified Medical Language System (UMLS) Metathesaurus. The library is thus searchable using MeSH or SNOMED-CT. Our syndicate partner wished to have remote access to PCEL coronary heart disease (CHD) information resources based on SNOMED-CT search terms. To pilot the supply of relevant information resources in response to clinically coded requests, using RSS syndication for transmission between web servers. Our syndicate partner provided a list of CHD SNOMED-CT terms to its end-users, a list which was coded according to UMLS specifications. When the end-user requested relevant information resources, this request was relayed from our syndicate partner's web server to the PCEL web server. The relevant resources were retrieved from the PCEL MySQL database. This database is accessed using a server side scripting language (PHP), which enables the production of dynamic RSS feeds on the basis of Source Asserted Identifiers (CODEs) contained in UMLS. Retrieving resources using SNOMED-CT terms using syndication can be used to build a functioning application. The process from request to display of syndicated resources took less than one second. The results of the pilot illustrate that it is possible to exchange data between servers using RSS syndication. This method could be utilised dynamically to supply digital library resources to a clinical system with SNOMED-CT data used as the standard of reference.
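    The dynamic-feed step described above can be sketched as rendering the rows retrieved for a coded request into a minimal RSS 2.0 document. The resource titles and links below are invented placeholders, and PCEL used PHP with MySQL rather than this Python stub.

```python
# Sketch: render database rows for a clinical-code request as an RSS feed.
import xml.etree.ElementTree as ET

def build_rss(title, items):
    """Build a minimal RSS 2.0 document from (title, link) pairs."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for item_title, link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

# Rows as they might come back from the resource database for a CHD request
resources = [("Example CHD guideline", "https://example.org/chd")]
feed = build_rss("CHD resources", resources)
print('<rss version="2.0">' in feed)  # True
```

    Because the feed is generated per request, the syndicate partner always receives resources matching the SNOMED-CT term it asked for, rather than a static file.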

  13. Comparing the Performance of NoSQL Approaches for Managing Archetype-Based Electronic Health Record Data

    PubMed Central

    Freire, Sergio Miranda; Teodoro, Douglas; Wei-Kleiner, Fang; Sundvall, Erik; Karlsson, Daniel; Lambrix, Patrick

    2016-01-01

    This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. 
Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use cases are of interest. PMID:26958859
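    The MapReduce-style population query this study evaluates can be illustrated in miniature: a map phase emits a key per document and a reduce phase aggregates the counts. The documents and field names are invented; in Couchbase the same map/reduce logic is expressed as a JavaScript view.

```python
# Illustration of a population-level MapReduce query: count records per
# diagnosis across a document collection.
from collections import Counter

docs = [
    {"ehr_id": 1, "diagnosis": "hypertension"},
    {"ehr_id": 2, "diagnosis": "diabetes"},
    {"ehr_id": 3, "diagnosis": "hypertension"},
]

def map_phase(doc):
    # Emit one (key, 1) pair per document
    yield (doc["diagnosis"], 1)

def reduce_phase(pairs):
    # Sum the emitted values per key
    counts = Counter()
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

pairs = [kv for doc in docs for kv in map_phase(doc)]
print(reduce_phase(pairs))  # {'hypertension': 2, 'diabetes': 1}
```

    Each differently shaped map function corresponds to a separate index in Couchbase, which is why the study found indexing time, not just query time, to be a significant cost.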

  14. Comparing the Performance of NoSQL Approaches for Managing Archetype-Based Electronic Health Record Data.

    PubMed

    Freire, Sergio Miranda; Teodoro, Douglas; Wei-Kleiner, Fang; Sundvall, Erik; Karlsson, Daniel; Lambrix, Patrick

    2016-01-01

    This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. 
Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use cases are of interest.

  15. Alternatives to relational databases in precision medicine: Comparison of NoSQL approaches for big data storage using supercomputers

    NASA Astrophysics Data System (ADS)

    Velazquez, Enrique Israel

    Improvements in medical and genomic technologies have dramatically increased the production of electronic data over the last decade. As a result, data management is rapidly becoming a major determinant, and urgent challenge, for the development of Precision Medicine. Although successful data management is achievable using Relational Database Management Systems (RDBMS), exponential data growth is a significant contributor to failure scenarios. Growing amounts of data can also be observed in other sectors, such as economics and business, which suggests that alternative database approaches (NoSQL) may soon be required for the efficient storage and management of big databases. However, this hypothesis has been difficult to test in the Precision Medicine field, since alternative database architectures are complex to assess and means to integrate heterogeneous electronic health records (EHR) with dynamic genomic data are not easily available. In this dissertation, we present a novel set of experiments for identifying NoSQL database approaches that enable effective data storage and management in Precision Medicine, using patients' clinical and genomic information from The Cancer Genome Atlas (TCGA). The first experiment draws on performance and scalability from biologically meaningful queries with differing complexity and database sizes. The second experiment measures performance and scalability in database updates without schema changes. The third experiment assesses performance and scalability in database updates with schema modifications due to dynamic data. We have identified two NoSQL approaches, based on Cassandra and Redis, which seem to be ideal database management systems for our precision medicine queries in terms of performance and scalability. We present these NoSQL approaches and show how they can be used to manage clinical and genomic big data. Our research is relevant to public health because we focus on one of the main challenges to the development of Precision Medicine and investigate a potential solution to the progressively increasing demands on health care.

  16. Monitoring of IaaS and scientific applications on the Cloud using the Elasticsearch ecosystem

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-05-01

    The private Cloud at the Torino INFN computing centre offers IaaS services to different scientific computing applications. The infrastructure is managed with the OpenNebula cloud controller. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BES-III collaboration, plus an increasing number of other small tenants. Besides keeping track of the usage, the automation of dynamic allocation of resources to tenants requires detailed monitoring and accounting of the resource usage. As a first investigation towards this, we set up a monitoring system to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the Elasticsearch, Logstash and Kibana stack. In the current implementation, the heterogeneous accounting information is fed to different MySQL databases and sent to Elasticsearch via a custom Logstash plugin. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through an ad-hoc developed RESTful web service, which is also used for other accounting purposes. Concerning the application level, we used the ROOT plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BES-III virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. Each of these three cases is indexed separately in Elasticsearch. We are now starting to consider dismissing the intermediate level provided by the SQL database and evaluating a NoSQL option as a unique central database for all the monitoring information. We set up a collection of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools.

  17. An Improved Model for Operational Specification of the Electron Density Structure up to Geosynchronous Heights

    DTIC Science & Technology

    2010-07-01

    http://www.iono.noa.gr/ElectronDensity/EDProfile.php The web service has been developed with the following open source tools: a) PHP, for the... MySQL for the database, which was based on the enhancement of the DIAS database. Below we present some screen shots to demonstrate the functionality

  18. FCDD: A Database for Fruit Crops Diseases.

    PubMed

    Chauhan, Rupal; Jasrai, Yogesh; Pandya, Himanshu; Chaudhari, Suman; Samota, Chand Mal

    2014-01-01

    The Fruit Crops Diseases Database (FCDD) draws on a number of biotechnology and bioinformatics tools. The FCDD is a unique bioinformatics resource that compiles details on 162 fruit crop diseases, including disease type, causal organism, images, symptoms and their control. The FCDD also contains 171 phytochemicals from 25 fruits, their 2D images and their 20 possible sequences. This information has been manually extracted and verified from numerous sources, including other electronic databases, textbooks and scientific journals. FCDD is fully searchable and supports extensive text search. The main focus of the FCDD is on providing possible information on fruit crop diseases, which will help in the discovery of potential drugs from one of the common bioresources: fruits. The database was developed using MySQL. The database interface is developed in PHP, HTML and Java. FCDD is freely available at http://www.fruitcropsdd.com/

  19. Real-Time Electronic Dashboard Technology and Its Use to Improve Pediatric Radiology Workflow.

    PubMed

    Shailam, Randheer; Botwin, Ariel; Stout, Markus; Gee, Michael S

    The purpose of our study was to create a real-time electronic dashboard in the pediatric radiology reading room providing a visual display of updated information regarding scheduled and in-progress radiology examinations that could help radiologists to improve clinical workflow and efficiency. To accomplish this, a script was set up to automatically send real-time HL7 messages from the radiology information system (Epic Systems, Verona, WI) to an Iguana Interface engine, with relevant data regarding examinations stored in an SQL Server database for visual display on the dashboard. Implementation of an electronic dashboard in the reading room of a pediatric radiology academic practice has led to several improvements in clinical workflow, including decreasing the time interval for radiologist protocol entry for computed tomography or magnetic resonance imaging examinations as well as fewer telephone calls related to unprotocoled examinations. Other advantages include enhanced ability of radiologists to anticipate and attend to examinations requiring radiologist monitoring or scanning, as well as to work with technologists and operations managers to optimize scheduling in radiology resources. We foresee increased utilization of electronic dashboard technology in the future as a method to improve radiology workflow and quality of patient care. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Improved oilfield GHG accounting using a global oilfield database

    NASA Astrophysics Data System (ADS)

    Roberts, S.; Brandt, A. R.; Masnadi, M.

    2016-12-01

    The definition of oil is shifting in considerable ways. Conventional oil resources are declining as oil sands, heavy oils, and other unconventional resources emerge, and technological advances have made these hydrocarbons viable. Meanwhile, scientific evidence is mounting that climate change is occurring. The oil sector is responsible for 35% of global greenhouse gas (GHG) emissions, but the climate impacts of these new unconventional oils are not well understood. The Oil Climate Index (OCI) project is therefore an international effort to evaluate the total life-cycle GHG emissions of different oil fields globally. Over the first and second phases of the project, 30 and 75 global oil fields were investigated, respectively; the 75 fields account for about 25% of global oil production. The third phase aims to expand the OCI to cover close to 100% of global oil production, leading to the analysis of roughly 8000 fields. To accomplish this, a robust database system is required to handle and manipulate the data, so the data were migrated into a database managed with SQL (Structured Query Language). SQL allows users to process the data more efficiently than was possible with the previously used tool (Microsoft Excel). A graphical user interface (GUI) was then implemented in C# to make the data interactive, enabling people to update the database without prior knowledge of SQL.
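    To illustrate why a SQL database outperforms spreadsheet workflows at this scale, a single set-based query can aggregate across all fields at once. The schema and numbers below are invented for illustration and are not OCI data; SQLite stands in for the project's database:

```python
import sqlite3

# Hypothetical oilfield table; columns are assumptions, not the OCI schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE fields (
    name TEXT, country TEXT, production REAL, ghg_intensity REAL)""")
conn.executemany("INSERT INTO fields VALUES (?, ?, ?, ?)", [
    ("Field A", "US", 500.0, 10.2),
    ("Field B", "CA", 300.0, 14.8),
    ("Field C", "SA", 1200.0, 8.1)])

# One aggregation replaces row-by-row spreadsheet arithmetic, even at
# thousands of fields.
(total,) = conn.execute("SELECT SUM(production) FROM fields").fetchone()
share = conn.execute("""
    SELECT name, ROUND(100.0 * production / ?, 1) FROM fields
    ORDER BY production DESC""", (total,)).fetchall()
print(share)  # [('Field C', 60.0), ('Field A', 25.0), ('Field B', 15.0)]
```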

  1. A Medical Image Backup Architecture Based on a NoSQL Database and Cloud Computing Services.

    PubMed

    Santos Simões de Almeida, Luan Henrique; Costa Oliveira, Marcelo

    2015-01-01

    The use of digital systems for storing medical images generates a huge volume of data. Digital images are commonly stored and managed on a Picture Archiving and Communication System (PACS), under the DICOM standard. However, PACS is limited because it is strongly dependent on the server's physical space. Alternatively, cloud computing offers an extensive, low-cost, and reconfigurable resource. However, medical images contain patient information that cannot be made available in a public cloud, so a mechanism to anonymize these images is needed. This poster presents a solution for this issue by taking digital images from PACS, converting the information contained in each image file to a NoSQL database, and using cloud computing to store the digital images.

  2. Operationalizing Semantic Medline for meeting the information needs at point of care.

    PubMed

    Rastegar-Mojarad, Majid; Li, Dingcheng; Liu, Hongfang

    2015-01-01

    Scientific literature is one of the popular resources for providing decision support at point of care. It is highly desirable to bring the most relevant literature to support the evidence-based clinical decision making process. Motivated by the recent advance in semantically enhanced information retrieval, we have developed a system, which aims to bring semantically enriched literature, Semantic Medline, to meet the information needs at point of care. This study reports our work towards operationalizing the system for real time use. We demonstrate that the migration of a relational database implementation to a NoSQL (Not only SQL) implementation significantly improves the performance and makes the use of Semantic Medline at point of care decision support possible.

  3. Operationalizing Semantic Medline for meeting the information needs at point of care

    PubMed Central

    Rastegar-Mojarad, Majid; Li, Dingcheng; Liu, Hongfang

    2015-01-01

    Scientific literature is one of the popular resources for providing decision support at point of care. It is highly desirable to bring the most relevant literature to support the evidence-based clinical decision making process. Motivated by the recent advance in semantically enhanced information retrieval, we have developed a system, which aims to bring semantically enriched literature, Semantic Medline, to meet the information needs at point of care. This study reports our work towards operationalizing the system for real time use. We demonstrate that the migration of a relational database implementation to a NoSQL (Not only SQL) implementation significantly improves the performance and makes the use of Semantic Medline at point of care decision support possible. PMID:26306259

  4. Mission and Assets Database

    NASA Technical Reports Server (NTRS)

    Baldwin, John; Zendejas, Silvino; Gutheinz, Sandy; Borden, Chester; Wang, Yeou-Fang

    2009-01-01

    Mission and Assets Database (MADB) Version 1.0 is an SQL database system with a Web user interface to centralize information. The database stores flight project support resource requirements, view periods, antenna information, schedule, and forecast results for use in mid-range and long-term planning of Deep Space Network (DSN) assets.

  5. Research on high availability architecture of SQL and NoSQL

    NASA Astrophysics Data System (ADS)

    Wang, Zhiguo; Wei, Zhiqiang; Liu, Hao

    2017-03-01

    With the advent of the era of big data, the amount and importance of data have increased dramatically. SQL databases continue to develop in performance and scalability, but more and more companies tend to adopt NoSQL databases, because NoSQL databases have a simpler data model and stronger extension capacity than SQL databases. Almost all database designers, for SQL and NoSQL databases alike, aim to improve performance and ensure availability through reasonable architecture that reduces the effects of software and hardware failures, so that they can provide better experiences for their customers. In this paper, we mainly discuss the architectures of MySQL, MongoDB, and Redis, which are highly available and have been deployed in practical application environments, and design a hybrid architecture.

  6. Research on the technology of detecting the SQL injection attack and non-intrusive prevention in WEB system

    NASA Astrophysics Data System (ADS)

    Hu, Haibin

    2017-05-01

    Among numerous web security issues, SQL injection is the most notable and dangerous. In this study, the characteristics and procedures of SQL injection are analyzed, and a method for detecting SQL injection attacks is illustrated. A defense and remediation model for SQL injection attacks is established from the perspective of non-intrusive attack and defense. Moreover, the server's ability to resist SQL injection attacks has been comprehensively improved through security strategies for the operating system, IIS, the database, etc. The corresponding code has been implemented, and the method has been applied successfully in actual projects.
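    A minimal sketch of the core defense (not the paper's actual code): parameterized queries keep attacker-supplied input from being interpreted as SQL, illustrated here with Python's sqlite3 module and an invented users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

payload = "' OR '1'='1"  # a classic injection string

# Vulnerable: string concatenation lets the payload rewrite the WHERE clause.
unsafe = f"SELECT role FROM users WHERE name = '{payload}'"
leaked = conn.execute(unsafe).fetchall()
print(leaked)   # [('admin',)] -- the filter is bypassed

# Defense: a parameterized query treats the payload as plain data.
blocked = conn.execute(
    "SELECT role FROM users WHERE name = ?", (payload,)).fetchall()
print(blocked)  # [] -- no user has that literal name
```

    The same parameter-binding idea applies in any server-side language; it is non-intrusive in the sense that only the data-access code changes, not the application's behavior.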

  7. Maintaining Multimedia Data in a Geospatial Database

    DTIC Science & Technology

    2012-09-01

    Recoverable fragments of this report indicate that it examined PostgreSQL and MySQL as spatial databases for multimedia data, benchmarking each database on result sets ranging from zero to 100,000 rows and on how each performed given multiple query conditions.

  8. New Directions in the NOAO Observing Proposal System

    NASA Astrophysics Data System (ADS)

    Gasson, David; Bell, Dave

    For the past eight years NOAO has been refining its on-line observing proposal system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form, or via the Gemini Phase I Tool. NOAO staff can use the system to do administrative tasks, scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available on-line, including the proposals themselves (in HTML, PDF and PostScript) and technical comments. Grades and TAC comments are entered and edited through web forms, and can be sorted and filtered according to specified criteria. Current developments include a move away from proprietary solutions, toward open standards such as SQL (in the form of the MySQL relational database system), Perl, PHP and XML.

  9. Development of a Web-based Glaucoma Registry at King Khaled Eye Specialist Hospital, Saudi Arabia: A Cost-Effective Methodology

    PubMed Central

    Zaman, Babar; Khandekar, Rajiv; Al Shahwan, Sami; Song, Jonathan; Al Jadaan, Ibrahim; Al Jiasim, Leyla; Owaydha, Ohood; Asghar, Nasira; Hijazi, Amar; Edward, Deepak P.

    2014-01-01

    In this brief communication, we present the steps used to establish a web-based congenital glaucoma registry at our institution. The contents of a case report form (CRF) were developed by a group of glaucoma subspecialists. Information technology (IT) specialists used LimeSurvey™ software to create an electronic CRF. A MySQL server running on a virtual machine was used as the database. Two ophthalmologists and 2 IT specialists worked for 7 hours, and a biostatistician and a data registrar worked for 24 hours each, to establish the electronic CRF. Using the CRF transferred to the LimeSurvey tool and the MySQL server application, data could be stored directly, exported to programs including Microsoft Excel, SPSS, and R, and queried in real time. In a pilot test, clinical data from 80 patients with congenital glaucoma were entered into the registry, and descriptive analysis and data entry validation were performed successfully. A web-based disease registry was established in a short period of time in a cost-efficient manner using available resources and a team-based approach. PMID:24791112

  10. Development of a web-based glaucoma registry at King Khaled Eye Specialist Hospital, Saudi Arabia: a cost-effective methodology.

    PubMed

    Zaman, Babar; Khandekar, Rajiv; Al Shahwan, Sami; Song, Jonathan; Al Jadaan, Ibrahim; Al Jiasim, Leyla; Owaydha, Ohood; Asghar, Nasira; Hijazi, Amar; Edward, Deepak P

    2014-01-01

    In this brief communication, we present the steps used to establish a web-based congenital glaucoma registry at our institution. The contents of a case report form (CRF) were developed by a group of glaucoma subspecialists. Information technology (IT) specialists used LimeSurvey™ software to create an electronic CRF. A MySQL server running on a virtual machine was used as the database. Two ophthalmologists and 2 IT specialists worked for 7 hours, and a biostatistician and a data registrar worked for 24 hours each, to establish the electronic CRF. Using the CRF transferred to the LimeSurvey tool and the MySQL server application, data could be stored directly, exported to programs including Microsoft Excel, SPSS, and R, and queried in real time. In a pilot test, clinical data from 80 patients with congenital glaucoma were entered into the registry, and descriptive analysis and data entry validation were performed successfully. A web-based disease registry was established in a short period of time in a cost-efficient manner using available resources and a team-based approach.

  11. Operational Exercise Integration Recommendations for DoD Cyber Ranges

    DTIC Science & Technology

    2015-08-05

    Recoverable fragments of this report discuss measuring the precision and recall of a security information and event management (SIEM) system's notifications of unauthorized access to a directory; an attack taxonomy pairing resource depletion with TCP flooding and memory-leak exploitation, injection with cross-site scripting and SQL injection, and reconnaissance with network traffic capture and port scanning; and requirements for personnel development, for tactics, techniques, and procedures (TTPs) development, and for mission rehearsals.

  12. Looking toward the Future: A Case Study of Open Source Software in the Humanities

    ERIC Educational Resources Information Center

    Quamen, Harvey

    2006-01-01

    In this article Harvey Quamen examines how the philosophy of open source software might be of particular benefit to humanities scholars in the near future--particularly for academic journals with limited financial resources. To this end he provides a case study in which he describes his use of open source technology (MySQL database software and…

  13. Survey Software Evaluation

    DTIC Science & Technology

    2009-01-01

    Recoverable fragments of this report list the evaluated packages' supported database servers (Oracle 9i/10g, MySQL, MS SQL Server), operating systems (Windows 2003 Server, Windows 2000 Server), and web servers (WebStar on Mac OS X, SunOne, Internet Information Services (IIS)). A stated challenge of web-based surveys is identifying the best Commercial Off the Shelf (COTS) web-based survey packages to serve the particular need.

  14. An Optimization of the Basic School Military Occupational Skill Assignment Process

    DTIC Science & Technology

    2003-06-01

    Recoverable fragments of this report note that the Navy Marine Corps Intranet (NMCI) supports the system, and that the authors evaluated Microsoft's SQL Server but dismissed it after learning that TBS did not possess a SQL Server license or a qualified SQL Server administrator. SQL Server would have provided additional security measures not available in MS Access; although not as powerful as SQL Server, MS Access can handle the multi-user environment necessary for this system.

  15. Fall 2014 Data-Intensive Systems

    DTIC Science & Technology

    2014-10-29

    Recoverable fragments of this October 2014 Carnegie Mellon University briefing on big data systems note that NoSQL and horizontal scaling are changing architecture principles; that LEAP4BD is ready to pilot; that the QuABase prototype is complete, covering 8 NoSQL/NewSQL implementations and undergoing validation testing; and that machine learning is being applied to automate population of the knowledge base, with an initial focus on the NoSQL/NewSQL technology domain before extending to knowledge bases in other areas.

  16. Photo-z-SQL: Photometric redshift estimation framework

    NASA Astrophysics Data System (ADS)

    Beck, Róbert; Dobos, László; Budavári, Tamás; Szalay, Alexander S.; Csabai, István

    2017-04-01

    Photo-z-SQL is a flexible template-based photometric redshift estimation framework that can be seamlessly integrated into a SQL database (or DB) server and executed on demand in SQL. The DB integration eliminates the need to move large photometric datasets outside a database for redshift estimation, and uses the computational capabilities of DB hardware. Photo-z-SQL performs both maximum likelihood and Bayesian estimation and handles inputs of variable photometric filter sets and corresponding broad-band magnitudes.

  17. Quantifying Uncertainty in Expert Judgment: Initial Results

    DTIC Science & Technology

    2013-03-01

    Recoverable fragments of this report include a code-composition breakdown for one project (C++ 32%; JavaScript 29%; XML 15%; C 7%; CSS 7%; Java 5%; other 5%; LOC = 927,266), a question on total project effort in person-years, a description of MySQL as the most popular open source SQL database, and a project supporting backends such as MySQL, Oracle, PostgreSQL, MS SQL Server, ODBC, or Interbase, with features including email reminders, iCal/vCal import/export, and remote subscriptions.

  18. Development of a Prototype Detailing Management System for the Civil Engineer Corps

    DTIC Science & Technology

    2002-09-01

    Recoverable fragments of this report include Figure 30, "Associate Members To Billets SQL Statement"; an acronym list including ASCII (American Standard Code for Information Interchange), EMPRS (Electronic Military Personnel Record System), VBA (Visual Basic for Applications), and SDLC; and a recommendation that VBA be used in place of macros for capturing keystrokes or carrying out a series of actions when opening an Access database.

  19. STINGRAY: system for integrated genomic resources and analysis.

    PubMed

    Wagner, Glauber; Jardim, Rodrigo; Tschoeke, Diogo A; Loureiro, Daniel R; Ocaña, Kary A C S; Ribeiro, Antonio C B; Emmel, Vanessa E; Probst, Christian M; Pitaluga, André N; Grisard, Edmundo C; Cavalcanti, Maria C; Campos, Maria L M; Mattoso, Marta; Dávila, Alberto M R

    2014-03-07

    The STINGRAY system has been conceived to ease the tasks of integrating, analyzing, annotating and presenting genomic and expression data from Sanger and Next Generation Sequencing (NGS) platforms. STINGRAY includes: (a) a complete and integrated workflow (more than 20 bioinformatics tools) ranging from functional annotation to phylogeny; (b) a MySQL database schema, suitable for data integration and user access control; and (c) a user-friendly graphical web-based interface that makes the system intuitive, facilitating the tasks of data analysis and annotation. STINGRAY proved to be an easy-to-use and complete system for analyzing sequencing data. While both Sanger and NGS platforms are supported, the system may be faster with Sanger data, since large NGS datasets could potentially slow down MySQL database usage. STINGRAY is available at http://stingray.biowebdb.org and the open source code at http://sourceforge.net/projects/stingray-biowebdb/.

  20. STINGRAY: system for integrated genomic resources and analysis

    PubMed Central

    2014-01-01

    Background The STINGRAY system has been conceived to ease the tasks of integrating, analyzing, annotating and presenting genomic and expression data from Sanger and Next Generation Sequencing (NGS) platforms. Findings STINGRAY includes: (a) a complete and integrated workflow (more than 20 bioinformatics tools) ranging from functional annotation to phylogeny; (b) a MySQL database schema, suitable for data integration and user access control; and (c) a user-friendly graphical web-based interface that makes the system intuitive, facilitating the tasks of data analysis and annotation. Conclusion STINGRAY proved to be an easy-to-use and complete system for analyzing sequencing data. While both Sanger and NGS platforms are supported, the system may be faster with Sanger data, since large NGS datasets could potentially slow down MySQL database usage. STINGRAY is available at http://stingray.biowebdb.org and the open source code at http://sourceforge.net/projects/stingray-biowebdb/. PMID:24606808

  1. Comprehensive Routing Security Development and Deployment for the Internet

    DTIC Science & Technology

    2015-02-01

    Recoverable fragments of this report state that RPSTIR depends on several other open source packages: MySQL, a widely used and popular open source database package chosen for the local RPKI database cache and for general database support, with ongoing feature enhancement and bug fixes; OpenSSL, used for the cryptographic libraries handling X.509 certificates; and the ODBC MySQL Connector, where ODBC (Open Database Connectivity) is a standard programming interface (API) for database access.

  2. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases.

    PubMed

    Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan

    2010-01-01

    To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an Internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. The system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases, considering the individual clinical context. It can therefore make an important contribution to efficient validation of outcome assessment in drug safety database studies.

  3. TabSQL: a MySQL tool to facilitate mapping user data to public databases.

    PubMed

    Xia, Xiao-Qin; McClelland, Michael; Wang, Yipeng

    2010-06-23

    With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. We developed TabSQL, a MySQL-based application tool, for viewing, filtering and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphic interface or command line. TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data.
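    The kind of cross-query TabSQL enables, a user's data table joined against a downloaded annotation table, can be sketched in plain SQL. The table names, columns, gene IDs, and GO terms below are illustrative assumptions, not TabSQL's actual schema, and SQLite stands in for MySQL:

```python
import sqlite3

# Hypothetical user expression data and a downloaded GO annotation table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_data (gene_id TEXT, fold_change REAL)")
conn.execute("CREATE TABLE go_annot (gene_id TEXT, go_term TEXT)")
conn.executemany("INSERT INTO user_data VALUES (?, ?)",
                 [("g1", 2.5), ("g2", 0.4)])
conn.executemany("INSERT INTO go_annot VALUES (?, ?)",
                 [("g1", "GO:0006915"), ("g3", "GO:0008150")])

# Annotating the user's genes is a single join across the two tables.
annotated = conn.execute("""
    SELECT u.gene_id, u.fold_change, a.go_term
    FROM user_data u JOIN go_annot a USING (gene_id)""").fetchall()
print(annotated)  # [('g1', 2.5, 'GO:0006915')]
```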

  4. TabSQL: a MySQL tool to facilitate mapping user data to public databases

    PubMed Central

    2010-01-01

    Background With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. Results We developed TabSQL, a MySQL-based application tool, for viewing, filtering and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphic interface or command line. Conclusions TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data. PMID:20573251

  5. Analysis of Cloud-Based Database Systems

    DTIC Science & Technology

    2015-06-01

    Recoverable fragments of this thesis note jurisdictional concerns about access to European Union (EU) citizens' data under the Patriot Act, and that unforeseen virtualization bugs have caused wide-reaching outages, leaving customers helpless. Performance data were collected from SQL Server Profiler traces on a cloud test bed; after collecting, parsing, and organizing the trace data, the authors analyzed the results captured both before and after increasing system resources.

  6. Experimental observation of spatial quantum noise reduction below the standard quantum limit with bright twin beams of light

    NASA Astrophysics Data System (ADS)

    Kumar, Ashok; Nunley, Hayden; Marino, Alberto

    2016-05-01

    Quantum noise reduction (QNR) below the standard quantum limit (SQL) has been a subject of interest for the past two to three decades due to its wide range of applications in quantum metrology and quantum information processing. To date, most of the attention has focused on the study of QNR in the temporal domain. However, many areas in quantum optics, particularly quantum imaging, could benefit from QNR in the spatial as well as the temporal domain. Using a high-quantum-efficiency electron-multiplying charge-coupled-device (EMCCD) camera, we have observed spatial QNR below the SQL in bright narrowband twin light beams generated through a four-wave mixing (FWM) process in hot rubidium atoms. Owing to momentum conservation in this process, the twin beams are momentum correlated, which leads to spatial quantum correlations and spatial QNR. Our preliminary results show a spatial QNR of over 2 dB with respect to the SQL. Unlike previous demonstrations of spatial QNR with faint, broadband photon pairs from parametric down conversion (PDC), we demonstrate spatial QNR with spectrally and spatially narrowband bright light beams. The results will be useful for quantum protocols based on atom-light interaction and for quantum imaging. Work supported by the W.M. Keck Foundation.

  7. Implementation of an interactive database interface utilizing HTML, PHP, JavaScript, and MySQL in support of water quality assessments in the Northeastern North Carolina Pasquotank Watershed

    NASA Astrophysics Data System (ADS)

    Guion, A., Jr.; Hodgkins, H.

    2015-12-01

    The Center of Excellence in Remote Sensing Education and Research (CERSER) has implemented three research projects during the summer Research Experience for Undergraduates (REU) program gathering water quality data for local waterways. Previously, the data were compiled manually with pen and paper and then entered into a spreadsheet. With the spread of electronic devices capable of interacting with databases, this project pursued an electronic method of entering and manipulating the water quality data. The project focused on the development of an interactive database to gather, display, and analyze data collected from local waterways. The database and entry form were built in MySQL on a PHP server, allowing participants to enter data from anywhere Internet access is available. The project then researched overlaying this data on Google Maps to provide labeling and information to users. The NIA server at http://nia.ecsu.edu is used to host the application for download and for storage of the databases. Water Quality Database team members included the authors plus Derek Morris Jr., Kathryne Burton and Mr. Jeff Wood as mentor.
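    An entry workflow like the one described above typically pairs the insert with server-side validation so bad field readings never reach the table. The schema, column names, site labels, and pH bounds below are hypothetical, and SQLite stands in for the MySQL backend:

```python
import sqlite3

# Hypothetical readings table; a CHECK constraint enforces a sane pH range
# at the database layer, backing up any checks the web form performs.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE readings (
    site TEXT, ph REAL CHECK (ph BETWEEN 0 AND 14), turbidity REAL)""")

def add_reading(site, ph, turbidity):
    conn.execute("INSERT INTO readings VALUES (?, ?, ?)",
                 (site, ph, turbidity))

add_reading("Site-1", 7.2, 3.5)        # valid reading is stored
rejected = False
try:
    add_reading("Site-2", 19.0, 1.0)   # impossible pH
except sqlite3.IntegrityError:
    rejected = True                    # constraint rejects the row

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(rejected, count)  # True 1
```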

  8. Migration from relational to NoSQL database

    NASA Astrophysics Data System (ADS)

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

    Data generated by real-time applications, social networking sites and sensor devices is huge in volume and unstructured, which makes it difficult for relational database management systems to handle. Data is a precious component of any application and needs to be analysed after being arranged in some structure. Relational databases can only deal with structured data, hence the need for NoSQL database management systems that can also handle semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL grows it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed previously that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow unstructured data to be stored in NoSQL databases. This paper provides a literature review of some recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers have proposed mechanisms for the co-existence of NoSQL and relational databases. The paper summarises mechanisms for mapping data stored in relational databases to NoSQL databases, along with various techniques for data transformation and middle-layer solutions.
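    One common mapping surveyed in such work turns each relational row, plus its one-to-many children, into a single nested document ready for a document store. This sketch uses an invented authors/posts schema and plain JSON output rather than any particular NoSQL client:

```python
import json
import sqlite3

# Hypothetical relational source: authors with a one-to-many posts table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE posts (author_id INTEGER, title TEXT)")
conn.execute("INSERT INTO authors VALUES (1, 'Ada')")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(1, "First"), (1, "Second")])

# Denormalize: embed each author's posts inside one document, the shape
# a document database (e.g. MongoDB) would store directly.
docs = []
for author_id, name in conn.execute("SELECT id, name FROM authors"):
    titles = [t for (t,) in conn.execute(
        "SELECT title FROM posts WHERE author_id = ?", (author_id,))]
    docs.append({"_id": author_id, "name": name, "posts": titles})

print(json.dumps(docs))
# [{"_id": 1, "name": "Ada", "posts": ["First", "Second"]}]
```

    The join that the relational schema performs at query time is paid once here, at migration time, which is the trade-off most of the surveyed frameworks make.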

  9. NoSQL Data Store Technologies

    DTIC Science & Technology

    2014-09-01

    Recoverable fragments of this report, "NoSQL Data Store Technologies" by John Klein, Patrick Donohoe, and Neil Ernst of the Software Engineering Institute, indicate that it evaluates NoSQL data stores along dimensions including how a database distributes data and how data replication determines the way a NoSQL database facilitates reliable, high-performance replication.

  10. Flexible network reconstruction from relational databases with Cytoscape and CytoSQL

    PubMed Central

    2010-01-01

    Background Molecular interaction networks can be efficiently studied using network visualization software such as Cytoscape. The relevant nodes, edges and their attributes can be imported into Cytoscape in various file formats, or directly from external databases through specialized third-party plugins. However, molecular data are often stored in relational databases with their own specific structure, for which dedicated plugins do not exist. Therefore, a more generic solution is presented. Results A new Cytoscape plugin, 'CytoSQL', has been developed to connect Cytoscape to any relational database. It allows SQL ('Structured Query Language') queries to be launched from within Cytoscape, with the option to inject node or edge features of an existing network as SQL arguments, and to convert the retrieved data to Cytoscape network components. Supported by a set of case studies, we demonstrate the flexibility and power of the CytoSQL plugin in converting specific data subsets into meaningful network representations. Conclusions CytoSQL offers a unified approach to let Cytoscape interact with relational databases. Thanks to the power of the SQL syntax, this tool can rapidly generate and enrich networks according to very complex criteria. The plugin is available at http://www.ptools.ua.ac.be/CytoSQL. PMID:20594316

  11. Flexible network reconstruction from relational databases with Cytoscape and CytoSQL.

    PubMed

    Laukens, Kris; Hollunder, Jens; Dang, Thanh Hai; De Jaeger, Geert; Kuiper, Martin; Witters, Erwin; Verschoren, Alain; Van Leemput, Koenraad

    2010-07-01

    Molecular interaction networks can be efficiently studied using network visualization software such as Cytoscape. The relevant nodes, edges and their attributes can be imported into Cytoscape in various file formats, or directly from external databases through specialized third-party plugins. However, molecular data are often stored in relational databases with their own specific structure, for which dedicated plugins do not exist. Therefore, a more generic solution is presented. A new Cytoscape plugin, 'CytoSQL', has been developed to connect Cytoscape to any relational database. It allows SQL ('Structured Query Language') queries to be launched from within Cytoscape, with the option to inject node or edge features of an existing network as SQL arguments, and to convert the retrieved data to Cytoscape network components. Supported by a set of case studies, we demonstrate the flexibility and power of the CytoSQL plugin in converting specific data subsets into meaningful network representations. CytoSQL offers a unified approach to let Cytoscape interact with relational databases. Thanks to the power of the SQL syntax, this tool can rapidly generate and enrich networks according to very complex criteria. The plugin is available at http://www.ptools.ua.ac.be/CytoSQL.

  12. SQL-RAMS

    NASA Technical Reports Server (NTRS)

    Alfaro, Victor O.; Casey, Nancy J.

    2005-01-01

    SQL-RAMS (where "SQL" signifies Structured Query Language and "RAMS" signifies Rocketdyne Automated Management System) is a successor to the legacy version of RAMS -- a computer program used to manage all work, nonconformance, corrective action, and configuration management on rocket engines and ground support equipment at Stennis Space Center. The legacy version resided in the FileMaker Pro software system and was constructed in modules that could act as standalone programs. There was little or no integration among modules. Because of limitations on file-management capabilities in FileMaker Pro, and because of the difficulty of integrating FileMaker Pro with other software systems for exchange of data using such industry standards as SQL, the legacy version of RAMS proved to be limited, and working to circumvent its limitations too time-consuming. In contrast, SQL-RAMS is an integrated SQL-server-based program that supports all data-exchange software industry standards. Whereas in the legacy version it was necessary to access individual modules to gain insight into a particular work-status document, SQL-RAMS provides access through a single-screen presentation of core modules. In addition, SQL-RAMS enables rapid and efficient filtering of displayed statuses by predefined categories and test numbers. SQL-RAMS is rich in functionality and encompasses significant improvements over the legacy system. It gives users the ability to perform many tasks which in the past required administrator intervention. Additionally, many of the design limitations have been corrected, allowing for a robust application that is user-centric.

  14. Design and Analysis of a Model Reconfigurable Cyber-Exercise Laboratory (RCEL) for Information Assurance Education

    DTIC Science & Technology

    2004-03-01

    with MySQL. This choice was made because MySQL is open source. Any significant database engine such as Oracle or MS-SQL or even MS Access can be used... Figure 6. The DoD vs. Commercial Life Cycle...necessarily be interested in SCADA network security 13. MySQL (Database server) – This station represents a typical data server for a web page

  15. Analysis and Development of a Web-Enabled Planning and Scheduling Database Application

    DTIC Science & Technology

    2013-09-01

    establishes an entity-relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for...development, develop, design, process, re-engineering, reengineering, MySQL, structured query language, SQL, myPHPadmin. 15. NUMBER OF PAGES 107 16...relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for the population of

  16. Petaminer: Using ROOT for efficient data storage in MySQL database

    NASA Astrophysics Data System (ADS)

    Cranshaw, J.; Malon, D.; Vaniachine, A.; Fine, V.; Lauret, J.; Hamill, P.

    2010-04-01

    High Energy and Nuclear Physics (HENP) experiments store Petabytes of event data and Terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project is addressing the problem of efficient navigation to PetaBytes of HENP experimental data described with event-level TAG metadata, which is required by data intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events, where improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. As ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. Leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.
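
    The performance claim above rests on column orientation: a selection on a single TAG attribute touches only that attribute's values, not whole rows. A toy Python sketch of the layout difference (plain lists standing in for ROOT TTree branches; the attribute names and values are made up):

```python
# Row-oriented layout: each record is stored as a unit.
rows = [{"run": 1, "nTracks": 12, "pt": 3.4},
        {"run": 1, "nTracks": 50, "pt": 9.1},
        {"run": 2, "nTracks": 7,  "pt": 1.2}]

# Column-oriented layout: one contiguous sequence per attribute,
# analogous to a TTree branch.
columns = {k: [r[k] for r in rows] for k in rows[0]}

# A metadata selection on one attribute reads only that column's data.
selected = [i for i, n in enumerate(columns["nTracks"]) if n > 10]
```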

  17. Evaluation of relational and NoSQL database architectures to manage genomic annotations.

    PubMed

    Schulz, Wade L; Nelson, Brent G; Felker, Donn K; Durant, Thomas J S; Torres, Richard

    2016-12-01

    While the adoption of next generation sequencing has rapidly expanded, the informatics infrastructure used to manage the data generated by this technology has not kept pace. Historically, relational databases have provided much of the framework for data storage and retrieval. Newer technologies based on NoSQL architectures may provide significant advantages in storage and query efficiency, thereby reducing the cost of data management. But their relative advantage when applied to biomedical data sets, such as genetic data, has not been characterized. To this end, we compared the storage, indexing, and query efficiency of a common relational database (MySQL), a document-oriented NoSQL database (MongoDB), and a relational database with NoSQL support (PostgreSQL). When used to store genomic annotations from the dbSNP database, we found the NoSQL architectures to outperform traditional, relational models for speed of data storage, indexing, and query retrieval in nearly every operation. These findings strongly support the use of novel database technologies to improve the efficiency of data management within the biological sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
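
    A minimal sketch of the kind of storage/indexing/query measurement the study describes, using Python's built-in sqlite3 as a stand-in relational engine (the table layout, row count, and rsID values are illustrative, not the study's actual dbSNP setup): the same lookup is timed before and after an index is built.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE snp (rsid TEXT, chrom TEXT, pos INTEGER)")
conn.executemany("INSERT INTO snp VALUES (?, ?, ?)",
                 (("rs%d" % i, "chr1", i) for i in range(100000)))

def lookup():
    # Point query on the annotation identifier.
    return conn.execute("SELECT pos FROM snp WHERE rsid = 'rs99999'").fetchone()

t0 = time.perf_counter(); before = lookup(); t_scan = time.perf_counter() - t0

conn.execute("CREATE INDEX idx_rsid ON snp(rsid)")
t0 = time.perf_counter(); after = lookup(); t_index = time.perf_counter() - t0
```

    The same pattern (bulk load, index build, point query) is what differs in cost across MySQL, MongoDB, and PostgreSQL in the study.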

  18. Knowledge Data Base for Amorphous Metals

    DTIC Science & Technology

    2007-07-26

    not programmatic, updates. Over 100 custom SQL statements that maintain the domain specific data are attached to the workflow entries in a generic...for the form by populating the SQL and run generation tables. Application data may be prepared in different ways for two steps that invoke the same form...run generation mode). There is a single table of SQL commands. Each record has a user-definable ID, the SQL code, and a comment. The run generation

  19. Analysis and comparison of NoSQL databases with an introduction to consistent references in big data storage systems

    NASA Astrophysics Data System (ADS)

    Dziedzic, Adam; Mulawka, Jan

    2014-11-01

    NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when and how to use them appropriately. In this submission, descriptions of selected NoSQL databases are presented. Each of the databases is analysed with a primary focus on its data model, data access, architecture and practical usage in real applications. Furthermore, the NoSQL databases are compared with respect to data references: relational databases offer foreign keys, whereas NoSQL databases provide only limited references. An intermediate model between graph theory and relational algebra could be created to address this problem. Finally, a proposal of a new approach to the problem of inconsistent references in Big Data storage systems is introduced.
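
    The reference problem described above can be made concrete with a small sketch: a relational engine enforces foreign keys, while a document store (here just a list of dicts standing in for one) silently accepts dangling references. Assumes Python's sqlite3; the table and field names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforcement is per-connection in SQLite
conn.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE post (id INTEGER PRIMARY KEY, "
             "author_id INTEGER REFERENCES author(id))")
conn.execute("INSERT INTO author VALUES (1, 'Ada')")
conn.execute("INSERT INTO post VALUES (10, 1)")       # valid reference
try:
    conn.execute("INSERT INTO post VALUES (11, 99)")  # dangling reference
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the relational engine refuses the inconsistent row

# A document store offers no such check: the dangling reference is stored
# silently and must be detected by application code.
docs = [{"_id": 10, "author_id": 1}, {"_id": 11, "author_id": 99}]
known_authors = {1}
dangling = [d["_id"] for d in docs if d["author_id"] not in known_authors]
```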

  20. Resource Public Key Infrastructure Extension

    DTIC Science & Technology

    2012-01-01

    tests for checking compliance with the RFC 3779 extensions that are used in the RPKI. These tests also were used to identify an error in the OPENSSL ...rsync, OpenSSL , Cryptlib, and MySQL/ODBC. We assume that the adversaries can exploit any publicly known vulnerability in this software. • Server...NULL, set FLAG_NOCHAIN in Ctemp, defer verification. T = P Use OpenSSL to verify certificate chain S using trust anchor T, checking signature and

  1. An OAIS-Based Hospital Information System on the Cloud: Analysis of a NoSQL Column-Oriented Approach.

    PubMed

    Celesti, Antonio; Fazio, Maria; Romano, Agata; Bramanti, Alessia; Bramanti, Placido; Villari, Massimo

    2018-05-01

    The Open Archival Information System (OAIS) is a reference model for organizing people and resources in a system, and it is already adopted in care centers and medical systems to efficiently manage clinical data, medical personnel, and patients. Archival storage systems are typically implemented using traditional relational database systems, but relation-oriented technology strongly limits efficiency in the management of huge amounts of patients' clinical data, especially in emerging cloud-based environments, which are distributed. In this paper, we present an OAIS healthcare architecture able to manage a huge amount of HL7 clinical documents in a scalable way. Specifically, it is based on a NoSQL column-oriented Data Base Management System deployed in the cloud, so as to benefit from big tables and wide rows available over a virtual distributed infrastructure. We developed a prototype of the proposed architecture at the IRCCS, and we evaluated its efficiency in a real case study.

  2. StreptomycesInforSys: A web-enabled information repository

    PubMed Central

    Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P

    2012-01-01

    Members of Streptomyces produce 70% of natural bioactive products. A considerable amount of information, based on a polyphasic approach to the classification of Streptomyces, is available. This information, covering phenotypic, genotypic and bioactive-component production profiles, is crucial for pharmacological screening programmes, but it is scattered across various journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information, with combinations of search options to aid in efficient screening of new isolates; this will help in their preliminary categorization into appropriate groups. It is a free relational database compatible with existing operating systems. A cross-platform technology with the XAMPP web server has been used to develop and manage the database and to handle user queries effectively. Employment of PHP, a platform-independent scripting language embedded in HTML, and the database management software MySQL facilitates dynamic information storage and retrieval. The user-friendly, open and flexible freeware stack (PHP, MySQL and Apache) is foreseen to reduce running and maintenance costs. Availability www.sis.biowaves.org PMID:23275736

  4. Risk Assessment of the Naval Postgraduate School Gigabit Network

    DTIC Science & Technology

    2004-09-01

    Management Server (1) • Ras Server (1) • Remedy Server (1) • Samba Server(2) • SQL Servers (3) • Web Servers (3) • WINS Server (1) • Library...Server Bob Sharp INCA Windows 2000 Advanced Server NPGS Landesk SQL 2000 Alan Pires eagle Microsoft Windows 2000 Advanced Server EWS NPGS Landesk...Advanced Server Special Projects NPGS SQL Alan Pires MC01BDB Microsoft Windows 2000 Advanced Server Special Projects NPGS SQL 2000 Alan Pires

  5. Blind Seer: A Scalable Private DBMS

    DTIC Science & Technology

    2014-05-01

    searchable index terms per DB row, in time comparable to (insecure) MySQL (many practical queries can be privately executed with work 1.2-3 times slower...than MySQL, although some queries are costlier). We support a rich query set, including searching on arbitrary boolean formulas on keywords and ranges...index terms per DB row, in time comparable to (insecure) MySQL (many practical queries can be privately executed with work 1.2-3 times slower than MySQL

  6. Performance Evaluation of NoSQL Databases: A Case Study

    DTIC Science & Technology

    2015-02-01

    a centralized relational database. The customer decided to consider NoSQL technologies for two specific uses, namely: the primary data store for...The choice of a particular NoSQL database imposes a specific distributed software architecture and data model, and is a major determinant of the

  7. The designing and implementation of PE teaching information resource database based on broadband network

    NASA Astrophysics Data System (ADS)

    Wang, Jian

    2017-01-01

    In order to change the traditional PE teaching mode and realize the interconnection, interworking and sharing of PE teaching resources, a distance PE teaching platform based on a broadband network is designed and a PE teaching information resource database is set up. The PE teaching information resource database uses Windows NT 4/2000 Server as the operating system platform and Microsoft SQL Server 7.0 as the RDBMS, with NAS technology for data storage and flow technology for the video service. The analysis of system design and implementation shows that a dynamic PE teaching information resource sharing platform based on Web Service can realize loosely coupled collaboration and dynamic, active integration, and has good openness and encapsulation. The distance PE teaching platform based on Web Service and the design scheme of the PE teaching information resource database can effectively realize the interconnection, interworking and sharing of PE teaching resources and adapt to the demands of informatization in PE teaching.

  8. Automatically Preparing Safe SQL Queries

    NASA Astrophysics Data System (ADS)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
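
    The transformation's goal, replacing string-concatenated SQL with prepared, parameterized statements, can be illustrated with a small sketch. This uses Python's sqlite3 purely for demonstration (the paper's tool rewrites legacy web-application source code, not Python); the schema and payload are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # classic injection input

# Unsafe: attacker-controlled input is concatenated into the query text,
# so the payload is parsed as SQL and the WHERE clause always succeeds.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'").fetchall()

# Safe: the same query as a prepared/parameterized statement; the payload
# is bound as a literal value and can never change the query structure.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)).fetchall()
```

    The unsafe form leaks every row; the parameterized form matches nothing, because no user is literally named `' OR '1'='1`.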

  9. Using Distinct Sectors in Media Sampling and Full Media Analysis to Detect Presence of Documents from a Corpus

    DTIC Science & Technology

    2012-09-01

    relative performance of several conventional SQL and NoSQL databases with a set of one billion file block hashes. Digital Forensics, Sector Hashing, Full... NoSQL databases with a set of one billion file block hashes. v THIS PAGE INTENTIONALLY LEFT BLANK vi Table of Contents List of Acronyms and...Operating System NOOP No Operation assembly instruction NoSQL “Not only SQL” model for non-relational database management NSRL National Software

  10. Global ISR: Toward a Comprehensive Defense Against Unauthorized Code Execution

    DTIC Science & Technology

    2010-10-01

    implementation using two of the most popular open-source servers: the Apache web server, and the MySQL database server. For Apache, we measure the effect that...utility ab. Fig. 3 (total time in sec for Native, Null, ISR, ISR-MP). The MySQL test-insert benchmark measures...various SQL operations. The figure draws total execution time as reported by the benchmark utility. Finally, we benchmarked a MySQL database server using

  11. SQL/NF Translator for the Triton Nested Relational Database System

    DTIC Science & Technology

    1990-12-01

    AFIT/GCE/ENG/90D-05 SQL/NF TRANSLATOR FOR THE TRITON NESTED RELATIONAL DATABASE SYSTEM THESIS Craig William Schnepf Captain...FOR THE TRITON NESTED RELATIONAL DATABASE SYSTEM THESIS Presented to the Faculty of the School of Engineering of the Air Force Institute of Technology... systems. The SQL/NF query language used for the nested relational model is an extension of the popular relational model query language SQL. The query

  12. Flexible Decision Support in Device-Saturated Environments

    DTIC Science & Technology

    2003-10-01

    also output tuples to a remote MySQL or Postgres database. 3.3 GUI The GUI allows the user to pose queries using SQL and to display query...DatabaseConnection.java – handles connections to an external database (such as MySQL or Postgres ). • Debug.java – contains the code for printing out Debug messages...also provided. It is possible to output the results of queries to a MySQL or Postgres database for archival and the GUI can query those results

  13. Evaluating a NoSQL Alternative for Chilean Virtual Observatory Services

    NASA Astrophysics Data System (ADS)

    Antognini, J.; Araya, M.; Solar, M.; Valenzuela, C.; Lira, F.

    2015-09-01

    Currently, the standards and protocols for data access in the Virtual Observatory architecture (DAL) are generally implemented with relational databases based on SQL. In particular, the Astronomical Data Query Language (ADQL), the language used by the IVOA to represent queries to VO services, was created to satisfy the different data access protocols, such as Simple Cone Search. ADQL is based on SQL92, with extra functionality implemented using PgSphere. An emergent alternative to SQL are the so-called NoSQL databases, which can be classified in several categories such as Column, Document, Key-Value, Graph, Object, etc., each one recommended for different scenarios. Notable characteristics include schema-free design, easy replication support, simple APIs, Big Data support, etc. The Chilean Virtual Observatory (ChiVO) is developing a functional prototype based on the IVOA architecture, considering the following relevant factors: performance, scalability, flexibility, complexity, and functionality. Currently, it is very difficult to compare these factors, due to a lack of alternatives. The objective of this paper is to compare NoSQL alternatives with SQL through the implementation of a REST web API that satisfies ChiVO's needs: a SESAME-style name resolver for the data from ALMA. Therefore, we propose a test scenario by configuring a NoSQL database with data from different sources and evaluating the feasibility of creating a Simple Cone Search service and its performance. This comparison will help pave the way for the application of Big Data databases in the Virtual Observatory.
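
    A Simple Cone Search reduces to filtering a catalog by great-circle separation from a given sky position. A toy Python sketch of that core operation (the catalog entries and coordinates are made up; a real ChiVO service would query ALMA data through the DAL protocols):

```python
import math

# Made-up catalog of (name, RA, Dec) in degrees.
catalog = [("src1", 10.0, -30.0), ("src2", 10.2, -30.1), ("src3", 50.0, 20.0)]

def sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (spherical law of cosines)."""
    r = math.radians
    cos_d = (math.sin(r(dec1)) * math.sin(r(dec2)) +
             math.cos(r(dec1)) * math.cos(r(dec2)) * math.cos(r(ra1 - ra2)))
    return math.degrees(math.acos(min(1.0, max(-1.0, cos_d))))

def cone_search(ra, dec, radius):
    """Return names of catalog sources within `radius` degrees of (ra, dec)."""
    return [name for name, r0, d0 in catalog if sep_deg(ra, dec, r0, d0) <= radius]

hits = cone_search(10.0, -30.0, 0.5)
```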

  14. An Integrated Intranet and Dynamic Database Application for the Security Manager at Naval Postgraduate School

    DTIC Science & Technology

    2002-09-01

    Basic for Applications (VBA) 6.0 as macros may not be supported in future versions of Access. Access 2000 offers Internet-related features for...security features from Microsoft’s SQL Server. [1] 3. System Requirements Access 2000 is a resource-intensive application as are all Office 2000...1] • Modules – Functions and procedures written in the Visual Basic for Applications (VBA) programming language. The capabilities of modules

  15. ViralEpi v1.0: a high-throughput spectrum of viral epigenomic methylation profiles from diverse diseases.

    PubMed

    Khan, Mohd Shoaib; Gupta, Amit Kumar; Kumar, Manoj

    2016-01-01

    To develop a computational resource for viral epigenomic methylation profiles from diverse diseases. Methylation patterns of Epstein-Barr virus and hepatitis B virus genomic regions are provided as a web platform developed using the open-source Linux-Apache-MySQL-PHP (LAMP) bundle together with programming and scripting languages, that is, HTML, JavaScript and Perl. A comprehensive and integrated web resource, ViralEpi v1.0, is developed, providing a well-organized compendium of methylation events and statistical analyses associated with several diseases. Additionally, it also offers a 'Viral EpiGenome Browser' for a user-friendly browsing experience, built on the JavaScript-based JBrowse. This web resource would be helpful for the research community engaged in studying epigenetic biomarkers for appropriate prognosis and diagnosis of diseases and their various stages.

  16. Urgent Virtual Machine Eviction with Enlightened Post-Copy

    DTIC Science & Technology

    2015-12-01

    memory is in use, almost all of which is by Memcached. MySQL: The VMs run MySQL 5.6, and the clients execute OLTPBenchmark [3] using the Twitter...workload with scale factor of 960. The VMs are each allocated 16 cores and 30 GB of memory, and MySQL is configured with a 16 GB buffer pool in memory. The...operation mix for 5 minutes as a warm-up. At the time of migration, MySQL uses approximately 17 GB of memory, and almost all of the 30 GB memory is

  17. How to maintain blood supply during computer network breakdown: a manual backup system.

    PubMed

    Zeiler, T; Slonka, J; Bürgi, H R; Kretschmer, V

    2000-12-01

    Electronic data management systems using computer network systems and client/server architecture are increasingly used in laboratories and transfusion services. Severe problems arise if there is no network access to the database server and critical functions are not available. We describe a manual backup system (MBS) developed to maintain the delivery of blood products to patients in a hospital transfusion service in case of a computer network breakdown. All data are kept on a central SQL database connected to peripheral workstations in a local area network (LAN). Request entry from wards is performed via machine-readable request forms containing self-adhesive specimen labels with barcodes for test tubes. Data entry occurs on-line by bidirectional automated systems or off-line manually. One of the workstations in the laboratory contains a second SQL database which is frequently and incrementally updated. This workstation is run as a stand-alone, read-only database if the central SQL database is not available. In case of a network breakdown, the time-graded MBS is launched. Patient data, requesting ward and ordered tests/requests are photocopied through a template from the request forms onto special MBS worksheets serving as a laboratory journal for manual processing and result reporting (a copy is left in the laboratory). As soon as the network is running again, the data from the off-line period are entered into the primary SQL server. The MBS was successfully used on several occasions. The documentation of a 90-min breakdown period is presented in detail. Additional work resulted from the copying and the belated manual data entry after restoration of the system. There was no delay in the issue of blood products or result reporting. The backup system described has proven to be simple, quick and safe for maintaining urgent blood supply and distribution of laboratory results in case of unexpected network breakdown.
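
    The frequently, incrementally updated read-only replica at the heart of this design can be sketched as follows (Python/sqlite3 stand-ins for the central SQL server and the backup workstation; the schema and column names are illustrative, not the transfusion service's actual tables):

```python
import sqlite3

primary = sqlite3.connect(":memory:")   # central SQL database
replica = sqlite3.connect(":memory:")   # stand-alone, read-only backup copy
for db in (primary, replica):
    db.execute("CREATE TABLE results (id INTEGER PRIMARY KEY, patient TEXT, value REAL)")

def incremental_update():
    """Copy only rows the replica has not seen yet -- a toy stand-in for
    the frequent incremental refresh of the backup database."""
    last = replica.execute("SELECT COALESCE(MAX(id), 0) FROM results").fetchone()[0]
    new_rows = primary.execute(
        "SELECT id, patient, value FROM results WHERE id > ?", (last,)).fetchall()
    replica.executemany("INSERT INTO results VALUES (?, ?, ?)", new_rows)
    return len(new_rows)

primary.executemany("INSERT INTO results VALUES (?, ?, ?)",
                    [(1, "A", 5.1), (2, "B", 7.3)])
copied_first = incremental_update()
primary.execute("INSERT INTO results VALUES (3, 'C', 4.4)")
copied_second = incremental_update()
replica_count = replica.execute("SELECT COUNT(*) FROM results").fetchone()[0]
```

    If the primary becomes unreachable, the replica still answers read queries up to the last refresh, which is exactly the window the paper's manual worksheets cover.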

  18. NoSQL: collection document and cloud by using a dynamic web query form

    NASA Astrophysics Data System (ADS)

    Abdalla, Hemn B.; Lin, Jinzhao; Li, Guoquan

    2015-07-01

    Mongo-DB (from "humongous") is an open-source document database and the leading NoSQL database. NoSQL ("Not Only SQL") refers to a new generation of non-relational, open-source, horizontally scalable databases that provide a mechanism for the storage and retrieval of documents. Previously, we stored and retrieved data using SQL queries. Here, we use MongoDB instead: we do not use MySQL or SQL queries. Documents are imported directly into our drives and retrieved from them without SQL, using an IO BufferReader and BufferWriter -- the BufferReader for importing document files into a folder (drive), and the BufferWriter for retrieving the document files from that particular folder or drive. We also provide security for the stored files: if documents were kept in a plain local folder, anyone could view or modify them. To prevent this, the original document files are converted to another format -- in this paper, a binary format -- before being stored directly in a folder, and the storage space provides a private key for accessing each file. Whenever a user tries to open a document file, its data appear in binary format; only the document file's owner can view the original format, using a personal secret key received from the cloud.
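
    The storage flow described, converting a text document to a binary representation before storing it and decoding it again on retrieval, can be sketched in a few lines of Python (io.BytesIO stands in for the storage drive; no key management or cloud interaction is shown):

```python
import io

# A "document" to be stored (content is illustrative).
document = "Patient report: all values nominal."

# Store: encode the text document to its binary representation
# and write it to the storage area.
store = io.BytesIO()
store.write(document.encode("utf-8"))

# A naive read of the storage area yields only bytes, not readable text.
raw = store.getvalue()

# Retrieve: only a reader that decodes the bytes recovers the original.
store.seek(0)
restored = store.read().decode("utf-8")
```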

  19. Recommender System for Learning SQL Using Hints

    ERIC Educational Resources Information Center

    Lavbic, Dejan; Matek, Tadej; Zrnec, Aljaž

    2017-01-01

    Today's software industry requires individuals who are proficient in as many programming languages as possible. Structured query language (SQL), as an adopted standard, is no exception, as it is the most widely used query language to retrieve and manipulate data. However, the process of learning SQL turns out to be challenging. The need for a…

  20. Life in extra dimensions of database world or penetration of NoSQL in HEP community

    NASA Astrophysics Data System (ADS)

    Kuznetsov, V.; Evans, D.; Metson, S.

    2012-12-01

    The recent buzzword in the IT world is NoSQL. Major players such as Facebook, Yahoo, Google, etc. have widely adopted different “NoSQL” solutions for their needs. Horizontal scalability, a flexible data model and management of big data volumes are only a few advantages of NoSQL. In the CMS experiment we use several of them in the production environment. Here, we present CMS projects based on NoSQL solutions, their strengths and weaknesses, as well as our experience with those tools and their coexistence with standard RDBMS solutions in our applications.

  1. A Database Design for the Brazilian Air Force Military Personnel Control System.

    DTIC Science & Technology

    1987-06-01

    GIVEN A RECNum GET MOVING HISTORICAL". 77 SEL4 PlC X(70) VALUE ". 4. GIVEN A RECNUM GET NOMINATION HISTORICAL". 77 SEL5 PIC X(70) VALUE it 5. GIVEN A...WHERE - "°RECNUM = :RECNUM". 77 SQL-SEL3-LENGTH PIC S9999 VALUE 150 COMP. 77 SQL- SEL4 PIC X(150) VALUE "SELECT ABBREV,DTNOM,DTEXO,SITN FROM...NOMINATION WHERE RECNUM 77 SQL- SEL4 -LENGTH PIC S9999 VALUE 150 COMP. 77 SQL-SEL5 PIC X(150) VALUE "SELECT ABBREVDTDES,DTWAIVER,SITD FROM DESIG WHERE RECNUM It

  2. Database Migration for Command and Control

    DTIC Science & Technology

    2002-11-01

    Sql - proprietary JDP Private Area Air defense data Defended asset list Oracle 7.3.2 - Automated process (OLTP...TADIL warnings Oracle 7.3.2 Flat File - Discrete transaction with data upds - NRT response required Pull mission data Std SQL ...level execution data Oracle 7.3 User update External interfaces Auto/manual backup Messaging Proprietary replication (internally) SQL Web server

  3. Network Configuration of Oracle and Database Programming Using SQL

    NASA Technical Reports Server (NTRS)

    Davis, Melton; Abdurrashid, Jibril; Diaz, Philip; Harris, W. C.

    2000-01-01

    A database can be defined as a collection of information organized in such a way that it can be retrieved and used. A database management system (DBMS) can further be defined as the tool that enables us to manage and interact with the database. The Oracle 8 Server is a state-of-the-art information management environment. It is a repository for very large amounts of data, and gives users rapid access to that data. The Oracle 8 Server allows for sharing of data between applications; the information is stored in one place and used by many systems. My research will focus primarily on SQL (Structured Query Language) programming. SQL is the way you define and manipulate data in Oracle's relational database. SQL is the industry standard adopted by all database vendors. When programming with SQL, you work on sets of data (i.e., information is not processed one record at a time).
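
    The set-at-a-time point above can be made concrete: one declarative statement updates every qualifying row, with no record-at-a-time loop in the application code. A sketch using Python's sqlite3 rather than Oracle 8 (the table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pay (name TEXT, salary REAL)")
conn.executemany("INSERT INTO pay VALUES (?, ?)",
                 [("a", 100.0), ("b", 200.0), ("c", 300.0)])

# One declarative statement operates on the whole qualifying set at once:
# every row with salary >= 200 gets a 10% raise, no explicit loop needed.
conn.execute("UPDATE pay SET salary = salary * 1.10 WHERE salary >= 200")

salaries = [round(s, 2) for (s,) in
            conn.execute("SELECT salary FROM pay ORDER BY name")]
```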

  4. Teaching Case: Introduction to NoSQL in a Traditional Database Course

    ERIC Educational Resources Information Center

    Fowler, Brad; Godin, Joy; Geddy, Margaret

    2016-01-01

    Many organizations are dealing with the increasing demands of big data, so they are turning to NoSQL databases as their preferred system for handling the unique problems of capturing and storing massive amounts of data. Therefore, it is likely that employees in all sizes of organizations will encounter NoSQL databases. Thus, to be more job-ready,…

  5. Zero tolerance for incorrect data: Best practices in SQL transaction programming

    NASA Astrophysics Data System (ADS)

    Laiho, M.; Skourlas, C.; Dervos, D. A.

    2015-02-01

    DBMS products differ in the way they support even the basic SQL transaction services. In this paper, a framework of best practices in SQL transaction programming is given and discussed. The SQL developers are advised to experiment with and verify the services supported by the DBMS product used. The framework has been developed by DBTechNet, a European network of teachers, trainers and ICT professionals. A course module on SQL transactions, offered by the LLP "DBTech VET Teachers" programme, is also presented and discussed. Aims and objectives of the programme include the introduction of the topics and content of SQL transactions and concurrency control to HE/VET curricula and addressing the need for initial and continuous training on these topics to in-company trainers, VET teachers, and Higher Education students. An overview of the course module, its learning outcomes, the education and training (E&T) content, virtual database labs with hands-on self-practicing exercises, plus instructions for the teacher/trainer on the pedagogy and the usage of the course modules' content are briefly described. The main principle adopted is to "Learn by verifying in practice" and the transactions course motto is: "Zero Tolerance for Incorrect Data".
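
    The "zero tolerance for incorrect data" principle is, at bottom, the all-or-nothing SQL transaction: either every statement in the unit of work commits, or none does. A small Python/sqlite3 sketch of the pattern (not taken from the course material; the account schema and amounts are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100.0), (2, 50.0)])
conn.commit()

def transfer(src, dst, amount):
    """All-or-nothing transfer: any failure rolls the whole unit back,
    so the database never holds a half-applied (incorrect) state."""
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        bal = conn.execute("SELECT balance FROM account WHERE id = ?",
                           (src,)).fetchone()[0]
        if bal < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
        conn.commit()
        return True
    except Exception:
        conn.rollback()
        return False

ok = transfer(1, 2, 30.0)      # succeeds and commits
bad = transfer(1, 2, 1000.0)   # fails; the debit is rolled back
balances = [b for (b,) in conn.execute("SELECT balance FROM account ORDER BY id")]
```

    Verifying this behaviour in practice against one's own DBMS is exactly the "learn by verifying" exercise the course module advocates, since products differ in their default autocommit and isolation settings.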

  6. A Fast Healthcare Interoperability Resources (FHIR) layer implemented over i2b2.

    PubMed

    Boussadi, Abdelali; Zapletal, Eric

    2017-08-14

    Standards and technical specifications have been developed to define how the information contained in Electronic Health Records (EHRs) should be structured, semantically described, and communicated. Current trends rely on differentiating the representation of data instances from the definition of clinical information models. The dual model approach, which combines a reference model (RM) and a clinical information model (CIM), sets in practice this software design pattern. The most recent initiative, proposed by HL7, is called Fast Health Interoperability Resources (FHIR). The aim of our study was to investigate the feasibility of applying the FHIR standard to modeling and exposing EHR data of the Georges Pompidou European Hospital (HEGP) integrating biology and the bedside (i2b2) clinical data warehouse (CDW). We implemented a FHIR server over i2b2 to expose EHR data in relation with five FHIR resources: DiagnosisReport, MedicationOrder, Patient, Encounter, and Medication. The architecture of the server combines a Data Access Object design pattern and FHIR resource providers, implemented using the Java HAPI FHIR API. Two types of queries were tested: query type #1 requests the server to display DiagnosticReport resources, for which the diagnosis code is equal to a given ICD-10 code. A total of 80 DiagnosticReport resources, corresponding to 36 patients, were displayed. Query type #2, requests the server to display MedicationOrder, for which the FHIR Medication identification code is equal to a given code expressed in a French coding system. A total of 503 MedicationOrder resources, corresponding to 290 patients, were displayed. Results were validated by manually comparing the results of each request to the results displayed by an ad-hoc SQL query. We showed the feasibility of implementing a Java layer over the i2b2 database model to expose data of the CDW as a set of FHIR resources. 
An important part of this work was the structural and semantic mapping between the i2b2 model and the FHIR RM. To accomplish this, developers must manually browse the specifications of the FHIR standard. Our source code is freely available and can be adapted for use in other i2b2 sites.
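    The two query types above map naturally onto FHIR REST search requests. A minimal sketch of how such requests could be composed follows; the base URL, the example code values, and the helper function are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of composing FHIR search URLs like the two query
# types described in the abstract. The server base URL, the ICD-10 code,
# and the medication code below are invented for illustration.
from urllib.parse import urlencode

def fhir_search_url(base, resource, **params):
    """Build a FHIR REST search URL, e.g. GET [base]/DiagnosticReport?code=system|code."""
    return f"{base}/{resource}?{urlencode(params)}"

# Query type #1: DiagnosticReport resources filtered by an ICD-10 code.
url1 = fhir_search_url(
    "http://example.org/fhir", "DiagnosticReport",
    code="http://hl7.org/fhir/sid/icd-10|I21.0")

# Query type #2: MedicationOrder resources filtered by a medication code
# (here a made-up code standing in for the French coding system).
url2 = fhir_search_url(
    "http://example.org/fhir", "MedicationOrder",
    medication="http://example.org/fr-codes|3400891234567")

print(url1)
```

A HAPI FHIR resource provider, as described in the abstract, would answer such requests by translating the search parameters into queries against the i2b2 star schema.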

  7. [Development of a medical equipment support information system based on PDF portable document].

    PubMed

    Cheng, Jiangbo; Wang, Weidong

    2010-07-01

    In line with the organizational structure and management system of hospital medical engineering support, the medical engineering support workflow was integrated to ensure that medical engineering data are collected effectively, accurately and comprehensively and kept in electronic archives. The workflow of medical equipment support work was analysed, and all work processes were recorded in portable electronic documents. Using XML middleware technology and a SQL Server database, the system implements process management, data calculation, submission, storage and other functions. Practical application shows that the medical equipment support information system optimizes the existing work process, making it standardized, digital, automatic, efficient, orderly and controllable. A medical equipment support information system based on portable electronic documents can effectively optimize and improve hospital medical engineering support work, improve performance, reduce costs, and provide full and accurate digital data.

  8. Methods to Secure Databases Against Vulnerabilities

    DTIC Science & Technology

    2015-12-01

    for several languages such as C, C++, PHP, Java and Python [16]. MySQL will work well with very large databases. The documentation references...using Eclipse and connected to each database management system using Python and Java drivers provided by MySQL , MongoDB, and Datastax (for Cassandra...tiers in Python and Java . Problem MySQL MongoDB Cassandra 1. Injection a. Tautologies Vulnerable Vulnerable Not Vulnerable b. Illegal query
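    The "tautologies" row in the excerpt refers to injection payloads such as ' OR '1'='1' that make a WHERE clause always true. A minimal sketch of why string-built queries are vulnerable while parameterized queries are not; the table and data are invented, and sqlite3 stands in for the MySQL, MongoDB and Cassandra drivers the study actually tested:

```python
# Tautology injection demo: splicing attacker input into SQL text vs.
# passing it as a bound parameter. Table name and rows are made up.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"

# Vulnerable: the payload becomes part of the SQL statement, so the
# WHERE clause reads password = '' OR '1'='1' and matches every row.
unsafe = db.execute(
    "SELECT name FROM users WHERE password = '" + payload + "'").fetchall()

# Safe: the driver passes the payload as data, never as SQL.
safe = db.execute(
    "SELECT name FROM users WHERE password = ?", (payload,)).fetchall()

print(unsafe)  # [('alice',)]
print(safe)    # []
```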

  9. DESPIC: Detecting Early Signatures of Persuasion in Information Cascades

    DTIC Science & Technology

    2015-08-27

    over NoSQL Databases, Proceedings of the 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2014). 26-MAY-14, . : , P...over NoSQL Databases. Proceedings of the 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2014). Chicago, IL, USA...distributed NoSQL databases including HBase and Riak, we finalized the requirements of the optimal computational architecture to support our framework

  10. Ontology-based data integration between clinical and research systems.

    PubMed

    Mate, Sebastian; Köpcke, Felix; Toddenroth, Dennis; Martin, Marcus; Prokosch, Hans-Ulrich; Bürkle, Thomas; Ganslandt, Thomas

    2015-01-01

    Data from the electronic medical record comprise numerous structured but uncoded elements, which are not linked to standard terminologies. Reuse of such data for secondary research purposes has gained in importance recently. However, the identification of relevant data elements and the creation of database jobs for extraction, transformation and loading (ETL) are challenging: With current methods such as data warehousing, it is not feasible to efficiently maintain and reuse semantically complex data extraction and transformation routines. We present an ontology-supported approach to overcome this challenge by making use of abstraction: Instead of defining ETL procedures at the database level, we use ontologies to organize and describe the medical concepts of both the source system and the target system. Instead of using unique, specifically developed SQL statements or ETL jobs, we define declarative transformation rules within ontologies and illustrate how these constructs can then be used to automatically generate SQL code to perform the desired ETL procedures. This demonstrates how a suitable level of abstraction may not only aid the interpretation of clinical data, but can also foster the reutilization of methods for unlocking it.
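    The rule-to-SQL generation described above can be sketched as follows. The rule format, table names and concept codes are invented for illustration and only mimic the general idea of declarative mapping rules driving generated ETL SQL, not the authors' ontology constructs:

```python
# Sketch: ETL declared as (source table, source column, source code,
# target concept) rules; an INSERT ... SELECT is generated per rule
# instead of being hand-written. Schema and codes are invented.
import sqlite3

rules = [
    ("lab_raw", "test_cd", "GLU",  "loinc:2345-7"),
    ("lab_raw", "test_cd", "CREA", "loinc:2160-0"),
]

def generate_etl_sql(rules, target="observation_fact"):
    """Emit one INSERT ... SELECT statement per declarative rule."""
    return [
        f"INSERT INTO {target} (patient_id, concept_cd, value) "
        f"SELECT patient_id, '{concept}', value "
        f"FROM {src_tbl} WHERE {src_col} = '{code}'"
        for src_tbl, src_col, code, concept in rules
    ]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE lab_raw (patient_id, test_cd, value)")
db.execute("CREATE TABLE observation_fact (patient_id, concept_cd, value)")
db.executemany("INSERT INTO lab_raw VALUES (?,?,?)",
               [(1, "GLU", 5.4), (1, "CREA", 88), (2, "GLU", 6.1)])
for stmt in generate_etl_sql(rules):
    db.execute(stmt)
rows = db.execute("SELECT concept_cd, COUNT(*) FROM observation_fact "
                  "GROUP BY concept_cd ORDER BY concept_cd").fetchall()
print(rows)  # [('loinc:2160-0', 1), ('loinc:2345-7', 2)]
```

Changing the mapping then only means editing the rules, not the SQL, which is the maintainability gain the abstract argues for.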

  11. NOAO observing proposal processing system

    NASA Astrophysics Data System (ADS)

    Bell, David J.; Gasson, David; Hartman, Mia

    2002-12-01

    Since going electronic in 1994, NOAO has continued to refine and enhance its observing proposal handling system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form or via Gemini's downloadable Phase-I Tool. NOAO staff can use online interfaces for administrative tasks, technical reviews, telescope scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available online. The system, now known as ANDES, is designed as a thin-client architecture (web pages are now used for almost all database functions) built using open source tools (FreeBSD, Apache, MySQL, Perl, PHP) to process descriptively-marked (LaTeX, XML) proposal documents.

  12. The Development and Preliminary Application of Plant Quarantine Remote Teaching System in China

    NASA Astrophysics Data System (ADS)

    Wu, Zhigang; Li, Zhihong; Yang, Ding; Zhang, Guozhen

    With the development of modern information technology, the traditional teaching mode increasingly falls short of the requirements of modern education. Plant Quarantine has been accepted as a common course at agricultural universities in China since the country's entry into the WTO, but teaching resources for this course are scarce, especially at universities with a weak resource base. E-learning is regarded as one way to address this shortage of teaching resources. PQRTS (Plant Quarantine Remote Teaching System) was designed and developed with JSP (Java Server Pages), MySQL and Tomcat in this study. The system includes many kinds of plant quarantine teaching resources, such as an international glossary, regulations and standards, multimedia information on quarantine processes and pests, teaching slides, and training exercises. The system prototype implements the functions of remote learning, querying, management, examination and remote discussion. It could serve as a tool for teaching, teaching assistance and online learning.

  13. Development of a web application for water resources based on open source software

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri P.

    2014-01-01

    This article presents research and development of a prototype web application for water resources using the latest advancements in Information and Communication Technologies (ICT), open source software and web GIS. The web application has three web services for: (1) managing, presenting and storing geospatial data, (2) supporting water resources modeling and (3) water resources optimization. The web application is developed using several programming languages (PHP, Ajax, JavaScript, Java), libraries (OpenLayers, JQuery) and open source software components (GeoServer, PostgreSQL, PostGIS). The presented web application has several main advantages: it is available all the time, it is accessible from everywhere, it creates a real-time multi-user collaboration platform, the programming languages and components are interoperable and designed to work in a distributed computer environment, it is flexible for adding additional components and services, and it is scalable depending on the workload. The application was successfully tested on a case study with concurrent multi-user access.

  14. Joint Battlespace Infosphere: Information Management Within a C2 Enterprise

    DTIC Science & Technology

    2005-06-01

    using. In version 1.2, we support both MySQL and Oracle as underlying implementations where the XML metadata schema is mapped into relational tables in...Identity Servers, Role-Based Access Control, and Policy Representation – Databases: Oracle , MySQL , TigerLogic, Berkeley XML DB 15 Instrumentation Services...converted to SQL for execution. Invocations are then forwarded to the appropriate underlying IOR core components that have the responsibility of issuing

  15. Analyzing Enron Data: Bitmap Indexing Outperforms MySQL Queries by Several Orders of Magnitude

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stockinger, Kurt; Rotem, Doron; Shoshani, Arie

    2006-01-28

    FastBit is an efficient, compressed bitmap indexing technology that was developed in our group. In this report we evaluate the performance of MySQL and FastBit for analyzing the email traffic of the Enron dataset. The first finding shows that materializing the join results of several tables significantly improves the query performance. The second finding shows that FastBit outperforms MySQL by several orders of magnitude.
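    The core idea behind bitmap indexing can be illustrated in a few lines: one bit vector per distinct column value, so a conjunctive query becomes a bitwise AND rather than a table scan. This toy sketch (invented data) deliberately omits the compressed encoding that makes FastBit practical at scale:

```python
# Toy bitmap index: each distinct value maps to an integer used as a
# bit vector over row positions. Data is invented for illustration.
from collections import defaultdict

rows = [("alice", "sent"), ("bob", "recv"), ("alice", "recv"),
        ("carol", "sent"), ("alice", "sent")]

def build_bitmaps(rows, col):
    """Map each distinct value in column `col` to a row-position bitmap."""
    bitmaps = defaultdict(int)
    for i, row in enumerate(rows):
        bitmaps[row[col]] |= 1 << i
    return bitmaps

sender = build_bitmaps(rows, 0)
direction = build_bitmaps(rows, 1)

# WHERE sender = 'alice' AND direction = 'sent'  ->  one bitwise AND.
hits = sender["alice"] & direction["sent"]
matches = [i for i in range(len(rows)) if hits >> i & 1]
print(matches)  # [0, 4]
```

Because the AND touches only the bitmaps, not the rows, selective queries over large read-mostly datasets like the Enron email archive avoid scanning the table at all.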

  16. Reactive Aggregate Model Protecting Against Real-Time Threats

    DTIC Science & Technology

    2014-09-01

    on the underlying functionality of three core components. • MS SQL server 2008 backend database. • Microsoft IIS running on Windows server 2008...services. The capstone tested a Linux-based Apache web server with the following software implementations: • MySQL as a Linux-based backend server for...malicious compromise. 1. Assumptions • GINA could connect to a backend MS SQL database through proper configuration of DotNetNuke. • GINA had access

  17. Information Security Considerations for Applications Using Apache Accumulo

    DTIC Science & Technology

    2014-09-01

    Distributed File System INSCOM United States Army Intelligence and Security Command JPA Java Persistence API JSON JavaScript Object Notation MAC Mandatory... MySQL [13]. BigTable can process 20 petabytes per day [14]. High degree of scalability on commodity hardware. NoSQL databases do not rely on highly...manipulation in relational databases. NoSQL databases each have a unique programming interface that uses a lower level procedural language (e.g., Java

  18. Database Entity Persistence with Hibernate for the Network Connectivity Analysis Model

    DTIC Science & Technology

    2014-04-01

    time savings in the Java coding development process. Appendices A and B describe address setup procedures for installing the MySQL database...development environment is required: • The open source MySQL Database Management System (DBMS) from Oracle, which is a Java Database Connectivity (JDBC...compliant DBMS • MySQL JDBC Driver library that comes as a plug-in with the Netbeans distribution • The latest Java Development Kit with the latest

  19. Learning Asset Technology Integration Support Tool Design Document

    DTIC Science & Technology

    2010-05-11

    language known as Hypertext Preprocessor ( PHP ) and by MySQL – a relational database management system that can also be used for content management. It...Requirements The LATIST tool will be implemented utilizing a WordPress platform with MySQL as the database. Also the LATIST system must effectively work... MySQL . When designing the LATIST system there are several considerations which must be accounted for in the working prototype. These include: • DAU

  20. Standard Port-Visit Cost Forecasting Model for U.S. Navy Husbanding Contracts

    DTIC Science & Technology

    2009-12-01

    Protocol (HTTP) server.35 2. MySQL . An open-source database.36 3. PHP . A common scripting language used for Web development.37 E. IMPLEMENTATION OF...Inc. (2009). MySQL Community Server (Version 5.1) [Software]. Available from http://dev.mysql.com/downloads/ 37 The PHP Group (2009). PHP (Version...Logistics Services MySQL My Structured Query Language NAVSUP Navy Supply Systems Command NC Non-Contract Items NPS Naval Postgraduate

  1. Cloud-Based Distributed Control of Unmanned Systems

    DTIC Science & Technology

    2015-04-01

    during mission execution. At best, the data is saved onto hard-drives and is accessible only by the local team. Data history in a form available and...following open source technologies: GeoServer, OpenLayers, PostgreSQL , and PostGIS are chosen to implement the back-end database and server. A brief...geospatial map data. 3. PostgreSQL : An SQL-compliant object-relational database that easily scales to accommodate large amounts of data - upwards to

  2. SigmaCLIPSE = presentation management + NASA CLIPS + SQL

    NASA Technical Reports Server (NTRS)

    Weiss, Bernard P., Jr.

    1990-01-01

    SigmaCLIPSE provides an expert systems and 'intelligent' data base development program for diverse systems integration environments that require support for automated reasoning and expert systems technology, presentation management, and access to 'intelligent' SQL data bases. The SigmaCLIPSE technology and its integrated ability to access 4th generation application development and decision support tools through a portable SQL interface comprise a sophisticated software development environment for solving knowledge engineering and expert systems development problems in information-intensive commercial environments -- financial services, health care, and distributed process control -- where the expert system must be extendable -- a major architectural advantage of NASA CLIPS. SigmaCLIPSE is a research effort intended to test the viability of merging SQL data bases with expert systems technology.

  3. An anaesthesia information management system as a tool for a quality assurance program: 10 years of experience.

    PubMed

    Motamed, Cyrus; Bourgain, Jean Louis

    2016-06-01

    Anaesthesia Information Management Systems (AIMS) generate large amounts of data, which might be useful for quality assurance programs. This study was designed to highlight the multiple contributions of our AIMS system in extracting quality indicators over a period of 10 years. The study was conducted from 2002 to 2011. Two methods were used to extract anaesthesia indicators: the manual extraction of individual files for monitoring neuromuscular relaxation, and structured query language (SQL) extraction for the other indicators, which were postoperative nausea and vomiting (PONV), pain and sedation scores, pain-related medications, and postoperative hypothermia. For each indicator, a program of information/meetings and adaptation/suggestions for operating room and PACU personnel was initiated to improve quality assurance, while data were extracted each year. The study included 77,573 patients. The mean overall completeness of data for the initial years ranged from 55 to 85% and was indicator-dependent; completeness then improved to 95% for the last 5 years. The incidence of neuromuscular monitoring was initially 67% and then increased to 95% (P<0.05). The rate of pharmacological reversal remained around 53% throughout the study. Regarding SQL data, an improvement in severe postoperative pain and PONV scores was observed throughout the study, while mild postoperative hypothermia remained a challenge, despite efforts for improvement. The AIMS system permitted the follow-up of certain indicators through manual sampling and many more via SQL extraction in a sustained and non-time-consuming way across the years. However, it requires competent and especially dedicated resources to handle the database. Copyright © 2016 Société française d'anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.
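    The SQL-extraction side of such a program amounts to aggregate queries over the AIMS records. A minimal sketch, with an invented schema and invented numbers (sqlite3 standing in for the production database), of computing a yearly indicator rate such as PONV incidence:

```python
# Hedged sketch: yearly rate of a 0/1 quality indicator pulled from an
# AIMS-like table. Table name, columns and values are all invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pacu (year INTEGER, ponv INTEGER)")  # ponv: 0 or 1
db.executemany("INSERT INTO pacu VALUES (?,?)",
               [(2002, 1), (2002, 0), (2002, 0),
                (2003, 1), (2003, 1), (2003, 0), (2003, 0)])

# Percentage of cases with PONV, per year.
rates = db.execute(
    "SELECT year, ROUND(100.0 * SUM(ponv) / COUNT(*), 1) AS pct "
    "FROM pacu GROUP BY year ORDER BY year").fetchall()
print(rates)  # [(2002, 33.3), (2003, 50.0)]
```

Once written, such a query can be re-run each year with no manual chart review, which is the "sustained and non-time-consuming" property the abstract highlights.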

  4. Using PHP/MySQL to Manage Potential Mass Impacts

    NASA Technical Reports Server (NTRS)

    Hager, Benjamin I.

    2010-01-01

    This paper presents a new application using commercially available software to manage mass properties for spaceflight vehicles. PHP/MySQL (PHP: Hypertext Preprocessor; MySQL: My Structured Query Language) are a web scripting language and a database system commonly used in concert with each other. They open up new opportunities to develop cutting-edge mass properties tools, and in particular, tools for the management of potential mass impacts (threats and opportunities). The paper begins by providing an overview of the functions and capabilities of PHP/MySQL. The focus of this paper is on how PHP/MySQL are being used to develop an advanced "web accessible" database system for identifying and managing mass impacts on NASA's Ares I Upper Stage program, managed by the Marshall Space Flight Center. To fully describe this application, examples of the data, search functions, and views are provided to promote not only the function, but also the security, ease of use, simplicity, and eye-appeal of this new application. This paper concludes with an overview of other potential mass properties applications and tools that could be developed using PHP/MySQL. The premise behind this paper is that PHP/MySQL are software tools that are easy to use and readily available for the development of cutting-edge mass properties applications. These tools are capable of providing "real-time" searching and status of an active database, automated report generation, and other capabilities to streamline and enhance mass properties management applications. By using PHP/MySQL, proven existing methods for managing mass properties can be adapted to present-day information technology to accelerate mass properties data gathering, analysis, and reporting, allowing mass property management to keep pace with today's fast-paced design and development processes.
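    The original system is built on PHP/MySQL; purely as a runnable illustration, the sketch below uses Python's sqlite3 with an invented schema to show the kind of mass-impact status query such a tool would serve in "real time":

```python
# Invented threats/opportunities ledger: each row is a potential mass
# impact with a signed mass delta and an open/closed status.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE mass_impact "
           "(item TEXT, kind TEXT, delta_kg REAL, status TEXT)")
db.executemany("INSERT INTO mass_impact VALUES (?,?,?,?)", [
    ("avionics box",     "threat",      4.2, "open"),
    ("tank stiffener",   "threat",      1.8, "open"),
    ("harness reroute",  "opportunity", -2.5, "open"),
    ("bracket redesign", "threat",      0.9, "closed"),
])

# Net mass effect of all still-open threats and opportunities: the
# headline number a mass properties status view would display.
net = db.execute("SELECT ROUND(SUM(delta_kg), 1) FROM mass_impact "
                 "WHERE status = 'open'").fetchone()[0]
print(net)  # 3.5
```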

  5. Designing a Relational Database for the Basic School; Schools Command Web Enabled Officer and Enlisted Database (Sword)

    DTIC Science & Technology

    2002-06-01

    Student memo for personnel MCLLS ... 75 i. Migrate data to SQL Server...The Web Server is on the same server as the SWORD database in the current version. 4: results set 5: dynamic HTML page 6: dynamic HTML page 3: SQL ...still be supported by Access. SQL Server would be a more viable tool for a fully developed application based on the number of potential users and

  6. Multi-Resolution Playback of Network Trace Files

    DTIC Science & Technology

    2015-06-01

    a com- plete MySQL database, C++ developer tools and the libraries utilized in the development of the system (Boost and Libcrafter), and Wireshark...XE suite has a limit to the allowed size of each database. In order to be scalable, the project had to switch to the MySQL database suite. The...programs that access the database use the MySQL C++ connector, provided by Oracle, and the supplied methods and libraries. 4.4 Flow Generator Chapter 3

  7. Supporting Social Data Observatory with Customizable Index Structures on HBase - Architecture and Performance

    DTIC Science & Technology

    2013-01-01

    commercial NoSQL database system. The results show that In-dexedHBase provides a data loading speed that is 6 times faster than Riak, and is...compare it with Riak, a widely adopted commercial NoSQL database system. The results show that In- dexedHBase provides a data loading speed that is 6...events. This chapter describes our research towards building an efficient and scalable storage platform for Truthy. Many existing NoSQL databases

  8. Quality Attribute-Guided Evaluation of NoSQL Databases: A Case Study

    DTIC Science & Technology

    2015-01-16

    evaluations of NoSQL databases specifically, and big data systems in general, that have become apparent during our study. Keywords—NoSQL, distributed...technology, namely that of big data , software systems [1]. At the heart of big data systems are a collection of database technologies that are more...born organizations such as Google and Amazon [3][4], along with those of numerous other big data innovators, have created a variety of open source and

  9. Novel Method of Storing and Reconstructing Events at Fermilab E-906/SeaQuest Using a MySQL Database

    NASA Astrophysics Data System (ADS)

    Hague, Tyler

    2010-11-01

    Fermilab E-906/SeaQuest is a fixed target experiment at Fermi National Accelerator Laboratory. We are investigating the antiquark asymmetry in the nucleon sea. By examining the ratio of the Drell-Yan cross sections of proton-proton and proton-deuterium collisions we can determine the asymmetry ratio. An essential step in the development of the analysis software is updating the event reconstruction to modern software tools. We are doing this in a unique way by performing a majority of the calculations within an SQL database. Using a MySQL database allows us to take advantage of off-the-shelf software without sacrificing ROOT compatibility, and to avoid network bottlenecks through server-side data selection. Using our raw data we create stubs, or partial tracks, at each station, which are pieced together to create full tracks. Our reconstruction process uses dynamically created SQL statements to analyze the data. These SQL statements create tables that contain the final reconstructed tracks as well as intermediate values. This poster will explain the reconstruction process and how it is being implemented.
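    The dynamically created SQL statements mentioned above might look like the following much-simplified sketch, where per-station hit tables are joined into a tracks table; the schema and join condition are invented, and sqlite3 stands in for MySQL:

```python
# Sketch of reconstruction-in-the-database: hits per station live in
# tables, and a dynamically built CREATE TABLE ... AS SELECT joins
# stubs from each station into full tracks. All names are invented.
import sqlite3

db = sqlite3.connect(":memory:")
for st in (1, 2, 3):
    db.execute(f"CREATE TABLE station{st} (event INTEGER, x REAL)")
db.executemany("INSERT INTO station1 VALUES (?,?)", [(1, 0.1), (2, 0.4)])
db.executemany("INSERT INTO station2 VALUES (?,?)", [(1, 0.2), (2, 0.5)])
db.executemany("INSERT INTO station3 VALUES (?,?)", [(1, 0.3)])

stations = [1, 2, 3]
# Build the multi-station join dynamically from the station list.
cols = ", ".join(f"station{s}.x AS x{s}" for s in stations)
tables = ", ".join(f"station{s}" for s in stations)
cond = " AND ".join(f"station{s}.event = station1.event"
                    for s in stations[1:])
db.execute(f"CREATE TABLE tracks AS SELECT station1.event AS event, "
           f"{cols} FROM {tables} WHERE {cond}")

tracks = db.execute("SELECT * FROM tracks ORDER BY event").fetchall()
print(tracks)  # only event 1 has hits in all three stations
```

Only events with a stub in every station survive the join, so incomplete tracks are filtered out server-side, before any data crosses the network.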

  10. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, Dave

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  11. ClimateSpark: An in-memory distributed computing framework for big climate data analytics

    NASA Astrophysics Data System (ADS)

    Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei

    2018-06-01

    The unprecedented growth of climate data creates new opportunities for climate studies, yet big climate data pose a grand challenge to climatologists, who must manage and analyze them efficiently. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. A chunked data structure improves parallel I/O efficiency, while a spatiotemporal index is built over the chunks to avoid unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to provide a web portal that facilitates interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL queries and Scala/Python notebooks). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multi-dimensional, array-based datasets in various geoscience domains.
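    The chunk-level spatiotemporal index described above can be illustrated with a minimal sketch: per-chunk metadata (bounding box and time range) lets a query touch only the chunks that overlap its window. The chunk layout and query values are invented for illustration:

```python
# Toy spatiotemporal chunk index: each chunk carries its latitude and
# time extents; a query reads only overlapping chunks. Data is invented.

chunks = [  # (chunk_id, lat_min, lat_max, t_min, t_max)
    ("c0", -90, 0, 2000, 2005),
    ("c1",   0, 90, 2000, 2005),
    ("c2", -90, 0, 2006, 2010),
    ("c3",   0, 90, 2006, 2010),
]

def chunks_to_read(index, lat_lo, lat_hi, t_lo, t_hi):
    """Return only the chunks whose extents overlap the query window."""
    return [cid for cid, a, b, t0, t1 in index
            if a <= lat_hi and b >= lat_lo and t0 <= t_hi and t1 >= t_lo]

# Northern hemisphere, 2007-2008: a single chunk needs to be read.
print(chunks_to_read(chunks, 10, 60, 2007, 2008))  # ['c3']
```

Pruning at the chunk level is what avoids the "unnecessary data reading and preprocessing" the abstract refers to; the full data of skipped chunks is never loaded into memory.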

  12. The Mayak Worker Dosimetry System (MWDS-2013): Implementation of the Dose Calculations.

    PubMed

    Zhdanov, A; Vostrotin, V; Efimov, A; Birchall, A; Puncher, M

    2016-07-15

    The calculation of internal doses for the Mayak Worker Dosimetry System (MWDS-2013) involved extensive computational resources due to the complexity and sheer number of calculations required. The required output consisted of a set of 1000 hyper-realizations: each hyper-realization consists of a set (one for each worker) of probability distributions of organ doses. This report describes the hardware components and computational approaches required to make the calculation tractable. Together with the software, this system is referred to here as the 'PANDORA system'. It is based on a commercial SQL server database running on a series of six workstations. A complete run of the entire Mayak worker cohort entailed a huge number of calculations in PANDORA, and due to the relatively slow speed of writing the data into the SQL server, each run took about 47 days. Quality control was monitored by comparing doses calculated in PANDORA with those from a specially modified version of the commercial software 'IMBA Professional Plus'. Suggestions are also made for increasing calculation and storage efficiency for future dosimetry calculations using PANDORA. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Assembling proteomics data as a prerequisite for the analysis of large scale experiments

    PubMed Central

    Schmidt, Frank; Schmid, Monika; Thiede, Bernd; Pleißner, Klaus-Peter; Böhme, Martina; Jungblut, Peter R

    2009-01-01

    Background Despite the complete determination of the genome sequence of a huge number of bacteria, their proteomes remain relatively poorly defined. Besides new methods to increase the number of identified proteins, new database applications are necessary to store and present results of large-scale proteomics experiments. Results In the present study, a database concept has been developed to address these issues and to offer complete information via a web interface. In our concept, the Oracle-based data repository system SQL-LIMS plays the central role in the proteomics workflow and was applied to the proteomes of Mycobacterium tuberculosis, Helicobacter pylori, Salmonella typhimurium and protein complexes such as the 20S proteasome. Technical operations of our proteomics labs were used as the standard for SQL-LIMS template creation. By means of a Java-based data parser, post-processed data of different approaches, such as LC/ESI-MS, MALDI-MS and 2-D gel electrophoresis (2-DE), were stored in SQL-LIMS. A minimum set of the proteomics data were transferred into our public 2D-PAGE database using a Java-based interface (Data Transfer Tool) meeting the requirements of the PEDRo standardization. Furthermore, the stored proteomics data were extractable out of SQL-LIMS via XML. Conclusion The Oracle-based data repository system SQL-LIMS played the central role in the proteomics workflow concept. Technical operations of our proteomics labs were used as standards for SQL-LIMS templates. Using a Java-based parser, post-processed data of different approaches such as LC/ESI-MS, MALDI-MS, 1-DE and 2-DE were stored in SQL-LIMS. Thus, unique data formats of different instruments were unified and stored in SQL-LIMS tables. Moreover, a unique submission identifier allowed fast access to all experimental data. This was the main advantage compared to multi-software solutions, especially if personnel fluctuations are high.
Moreover, large scale and high-throughput experiments must be managed in a comprehensive repository system such as SQL-LIMS, to query results in a systematic manner. On the other hand, these database systems are expensive and require at least one full time administrator and specialized lab manager. Moreover, the high technical dynamics in proteomics may cause problems to adjust new data formats. To summarize, SQL-LIMS met the requirements of proteomics data handling especially in skilled processes such as gel-electrophoresis or mass spectrometry and fulfilled the PSI standardization criteria. The data transfer into a public domain via DTT facilitated validation of proteomics data. Additionally, evaluation of mass spectra by post-processing using MS-Screener improved the reliability of mass analysis and prevented storage of data junk. PMID:19166578

  14. SpliceSeq: a resource for analysis and visualization of RNA-Seq data on alternative splicing and its functional impacts.

    PubMed

    Ryan, Michael C; Cleland, James; Kim, RyangGuk; Wong, Wing Chung; Weinstein, John N

    2012-09-15

    SpliceSeq is a resource for RNA-Seq data that provides a clear view of alternative splicing and identifies potential functional changes that result from splice variation. It displays intuitive visualizations and prioritized lists of results that highlight splicing events and their biological consequences. SpliceSeq unambiguously aligns reads to gene splice graphs, facilitating accurate analysis of large, complex transcript variants that cannot be adequately represented in other formats. SpliceSeq is freely available at http://bioinformatics.mdanderson.org/main/SpliceSeq:Overview. The application is a Java program that can be launched via a browser or installed locally. Local installation requires MySQL and Bowtie. Contact: mryan@insilico.us.com. Supplementary data are available at Bioinformatics online.

  15. Software Re-Engineering of the Human Factors Analysis and Classification System - (Maintenance Extension) Using Object Oriented Methods in a Microsoft Environment

    DTIC Science & Technology

    2001-09-01

    replication) -- all from Visual Basic and VBA . In fact, we found that the SQL Server engine actually had a plethora of options, most formidable of...2002, the new SQL Server 2000 database engine, and Microsoft Visual Basic.NET. This thesis describes our use of the Spiral Development Model to...versions of Microsoft products? Specifically, the pending release of Microsoft Office 2002, the new SQL Server 2000 database engine, and Microsoft

  16. An XML-Based Knowledge Management System of Port Information for U.S. Coast Guard Cutters

    DTIC Science & Technology

    2003-03-01

    using DTDs was not chosen. XML Schema performs many of the same functions as SQL type schemas, but differ by the unique structure of XML documents...to access data from content files within the developed system. XPath is not equivalent to SQL . While XPath is very powerful at reaching into an XML...document and finding nodes or node sets, it is not a complete query language. For operations like joins, unions, intersections, etc., SQL is far

  17. Defense Information Systems Agency Technical Integration Support (DISA- TIS). MUMPS Study.

    DTIC Science & Technology

    1993-01-01

    usable in DoD, MUMPS must continue to improve in its support of DoD and OSE standards such as SQL , X-Windows, POSIX, PHIGS, etc. MUMPS and large AlSs...Language ( SQL ), X-Windows, and Graphical Kernel Services (GKS)) 2.2.2.3 FIPS Adoption by NIST The National Institute of Standards and Technology (NIST...many of the performance tuning mechanisms that must be performed explicitly with other systems. The VA looks forward to the SQL binding (1993 ANS) that

  18. User Manual for Personnel Inventory Aging and Promotion Model

    DTIC Science & Technology

    2009-06-01

    increased by 12. Now, an SQL 9 statement deletes records where [target] = NULL, and the model calculates the number of E8s that need to be promoted to...the run, the [Likelihood] and [Expected] tables are created. The first step in this process is to dy- namically build an SQL statement, based on the...This table has individual-level, longitudinal records. Next, a dy- namically built SQL statement based on the Number of Years, cre- ates a new data

  19. Reliability Information Analysis Center 1st Quarter 2007, Technical Area Task (TAT) Report

    DTIC Science & Technology

    2007-02-05

    34* Created new SQL server database for "PC Configuration" web application. Added roles for security closed 4235 and posted application to production. "e Wrote...and ran SQL Server scripts to migrate production databases to new server . "e Created backup jobs for new SQL Server databases. "* Continued...second phase of the TENA demo. Extensive tasking was established and assigned. A TENA interface to EW Server was reaffirmed after some uncertainty about

  20. Spectrum Savings from High Performance Recording and Playback Onboard the Test Article

    DTIC Science & Technology

    2013-02-20

    execute within a Windows 7 environment, and data is recorded on SSDs. The underlying database is implemented using MySQL. Figure 1 illustrates the...MySQL database. This is effectively the time at which the recorded data are available for retransmission. CPU and Memory utilization were collected...17.7% MySQL avg. 3.9% EQDR Total avg. 21.6% Table 1 CPU Utilization with 260 Mbits/sec Load The difference between the total System CPU (27.8

  1. Information Mining Technologies to Enable Discovery of Actionable Intelligence to Facilitate Maritime Situational Awareness: I-MINE

    DTIC Science & Technology

    2013-01-01

    website). Data mining tools are in-house code developed in Python, C++ and Java. • NGA The National Geospatial-Intelligence Agency (NGA) performs data...as PostgreSQL (with PostGIS), MySQL, Microsoft SQL Server, SQLite, etc. using the appropriate JDBC driver. 14 The documentation and ease to learn are...written in Java that is able to perform various types of regressions, classifications, and other data mining tasks. There is also a commercial version

  2. A Brief Assessment of LC2IEDM, MIST and Web Services for use in Naval Tactical Data Management

    DTIC Science & Technology

    2004-07-01

    server software, messaging between the client and server, and a database. The MIST database is implemented in an open source DBMS named PostgreSQL. PostgreSQL had its beginnings at the University of California, Berkeley, in 1986 [11]. The development of PostgreSQL has since evolved into a...contact history from the database.

  3. A Distributed Web-based Solution for Ionospheric Model Real-time Management, Monitoring, and Short-term Prediction

    NASA Astrophysics Data System (ADS)

    Kulchitsky, A.; Maurits, S.; Watkins, B.

    2006-12-01

    With the widespread availability of the Internet today, many people can monitor various scientific research activities. It is important to accommodate this interest by providing on-line access to dynamic and illustrative Web resources that demonstrate different aspects of ongoing research. It is especially important to explain these research activities to high school and undergraduate students, thereby providing more information for making decisions concerning their future studies. Such Web resources are also important for clarifying scientific research for the general public, in order to achieve better awareness of research progress in various fields. Particularly rewarding is the dissemination of information about ongoing projects within universities and research centers to their local communities. The benefits of this type of scientific outreach are mutual, since the development of Web-based automatic systems is a prerequisite for many research projects targeting real-time monitoring and/or modeling of natural conditions. Continuous operation of such systems also provides ongoing research opportunities for statistically massive validation of the models. We have developed a Web-based system to run the University of Alaska Fairbanks Polar Ionospheric Model in real time. This model makes use of networking and computational resources at the Arctic Region Supercomputing Center. The system was designed to be portable among various operating systems and computational resources. Its components can be installed across different computers, separating Web servers and computational engines. The core of the system is a Real-Time Management module (RMM) written in Python, which coordinates remote input data transfers, the ionospheric model runs, MySQL database filling, and PHP scripts for the Web-page preparations. The RMM downloads current geophysical inputs as soon as they become available at different on-line depositories. 
This information is processed to provide inputs for the next ionospheric model time step and then stored in a MySQL database as the first part of the time-specific record. The RMM then synchronizes the input times with the current model time, prepares a decision on initialization of the next model time step, and monitors its execution. As soon as the model completes computations for the next time step, the RMM visualizes the current model output into various short-term (about 1-2 hours) forecasting products and compares prior results with available ionospheric measurements. The RMM places the prepared images into the MySQL database, which can be located on a different computer node, and then proceeds to the next time interval, continuing the time loop. The upper-level interface of this real-time system is a PHP-based Web site (http://www.arsc.edu/SpaceWeather/new). This site provides general information about the Earth's polar and adjacent mid-latitude ionosphere, allows for monitoring of current developments and short-term forecasts, and facilitates access to the comparison archive stored in the database.
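The RMM's time loop described above (fetch inputs, store them, run a model step, store the visualized products) can be sketched schematically. This is not the actual RMM code; the function names below are invented stand-ins for the real data transfers, model runs, and MySQL calls:

```python
# Schematic sketch of one pass of the real-time management loop described
# above. The callables are hypothetical stand-ins, not the real system.
def rmm_step(fetch_inputs, run_model, store, visualize):
    """One pass of the real-time management time loop."""
    inputs = fetch_inputs()              # download current geophysical inputs
    store("inputs", inputs)              # first part of the time-specific record
    output = run_model(inputs)           # compute the next model time step
    store("images", visualize(output))   # forecast products for the Web site
    return output

# Example wiring with trivial stand-in callables and a dict as "database":
db = {}
result = rmm_step(lambda: 1, lambda x: x + 1, db.__setitem__, lambda o: f"img{o}")
```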

  4. Airport Remote Tower Sensor Systems

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Gawdiak, Yuri; Leidichj, Christopher; Papasin, Richard; Tran, Peter B.; Bass, Kevin

    2006-01-01

    Networks of video cameras, meteorological sensors, and ancillary electronic equipment are under development in collaboration among NASA Ames Research Center, the Federal Aviation Administration (FAA), and the National Oceanic and Atmospheric Administration (NOAA). These networks are to be established at and near airports to provide real-time information on local weather conditions that affect aircraft approaches and landings. The prototype network is an airport-approach-zone camera system (AAZCS), which has been deployed at San Francisco International Airport (SFO) and San Carlos Airport (SQL). The AAZCS includes remotely controlled color video cameras located on top of SFO and SQL air-traffic control towers. The cameras are controlled by the NOAA Center Weather Service Unit located at the Oakland Air Route Traffic Control Center and are accessible via a secure Web site. The AAZCS cameras can be zoomed and can be panned and tilted to cover a field of view 220° wide. The NOAA observer can see the sky condition as it is changing, thereby making possible a real-time evaluation of the conditions along the approach zones of SFO and SQL. The next-generation network, denoted a remote tower sensor system (RTSS), will soon be deployed at the Half Moon Bay Airport and a version of it will eventually be deployed at Los Angeles International Airport. In addition to remote control of video cameras via secure Web links, the RTSS offers real-time weather observations, remote sensing, portability, and a capability for deployment at remote and uninhabited sites. The RTSS can be used at airports that lack control towers, as well as at major airport hubs, to provide synthetic augmentation of vision for both local and remote operations under what would otherwise be conditions of low or even zero visibility.

  5. Decision support system for health care resources allocation

    PubMed Central

    Sebaa, Abderrazak; Nouicer, Amina; Tari, AbdelKamel; Tarik, Ramtani; Abdellah, Ouhab

    2017-01-01

    Background A study of healthcare resources can improve decisions regarding the allotment and mobilization of medical resources and better guide future investment in the health sector. Aim The aim of this work was to design and implement a decision support system to improve medical resources allocation in the Bejaia region. Methods To achieve the retrospective cohort study, we integrated existing clinical databases from different Bejaia department health sector institutions (an Algerian department) to collect information about patients from January 2015 through December 2015. Data integration was performed in a data warehouse using the multi-dimensional model and OLAP cube. During implementation, we used Microsoft SQL Server 2012 and Microsoft Excel 2010. Results A medical decision support platform was introduced and implemented during the planning stages, allowing the management of different medical orientations; it provides better apportionment and allotment of medical resources, and ensures that the allocation of health care resources has optimal effects on improving health. Conclusion In this study, we designed and implemented a decision support system to improve health care in the Bejaia department, especially to assist in selecting the optimal location of health centers and hospitals, the specialty of each health center, the medical equipment, and the medical staff. PMID:28848645

  6. Decision support system for health care resources allocation.

    PubMed

    Sebaa, Abderrazak; Nouicer, Amina; Tari, AbdelKamel; Tarik, Ramtani; Abdellah, Ouhab

    2017-06-01

    A study of healthcare resources can improve decisions regarding the allotment and mobilization of medical resources and better guide future investment in the health sector. The aim of this work was to design and implement a decision support system to improve medical resources allocation in the Bejaia region. To achieve the retrospective cohort study, we integrated existing clinical databases from different Bejaia department health sector institutions (an Algerian department) to collect information about patients from January 2015 through December 2015. Data integration was performed in a data warehouse using the multi-dimensional model and OLAP cube. During implementation, we used Microsoft SQL Server 2012 and Microsoft Excel 2010. A medical decision support platform was introduced and implemented during the planning stages, allowing the management of different medical orientations; it provides better apportionment and allotment of medical resources, and ensures that the allocation of health care resources has optimal effects on improving health. In this study, we designed and implemented a decision support system to improve health care in the Bejaia department, especially to assist in selecting the optimal location of health centers and hospitals, the specialty of each health center, the medical equipment, and the medical staff.

  7. An Automated Web Diary System for TeleHomeCare Patient Monitoring

    PubMed Central

    Ganzinger, Matthias; Demiris, George; Finkelstein, Stanley M.; Speedie, Stuart; Lundgren, Jan Marie

    2001-01-01

    The TeleHomeCare project monitors home care patients via the Internet. Each patient has a personalized homepage with an electronic diary for collecting the monitoring data with HTML forms. The web pages are generated dynamically using PHP. All data are stored in a MySQL database. Data are checked immediately by the system; if a value exceeds a predefined limit an alarm message is generated and sent automatically to the patient's case manager. Weekly graphical reports (PDF format) are also generated and sent by email to the same destination.
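The immediate range check described above can be sketched in a few lines. The actual system uses PHP and MySQL; this Python version, with invented measure names and limits, only illustrates the check-against-predefined-limit logic that triggers the alarm message:

```python
# Minimal sketch of the diary alarm logic described above, assuming
# invented measures and limit values; the real system is PHP/MySQL.
PREDEFINED_LIMITS = {
    "weight_kg": (40.0, 150.0),  # hypothetical acceptable range
    "fev1_l": (1.0, 6.0),        # hypothetical acceptable range
}

def check_entry(measure, value):
    """Return an alarm message if the submitted value exceeds its limits."""
    low, high = PREDEFINED_LIMITS[measure]
    if not (low <= value <= high):
        return f"ALARM: {measure}={value} outside [{low}, {high}]; notify case manager"
    return None  # value within limits, no alarm
```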

  8. Assessment & Commitment Tracking System (ACTS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryant, Robert A.; Childs, Teresa A.; Miller, Michael A.

    2004-12-20

    The ACTS computer code provides a centralized tool for planning and scheduling assessments, tracking and managing actions associated with assessments or that result from an event or condition, and "mining" data for reporting and analyzing information for improving performance. The ACTS application is designed to work with the MS SQL database management system. All database interfaces are written in SQL. The following software is used to develop and support the ACTS application: ColdFusion, HTML, JavaScript, Quest TOAD, Microsoft Visual SourceSafe (VSS), HTML Mailer for sending email, Microsoft SQL, and Microsoft Internet Information Server.

  9. Introduction to SQL. Ch. 1

    NASA Technical Reports Server (NTRS)

    McGlynn, T.; Santisteban, M.

    2007-01-01

    This chapter provides a very brief introduction to the Structured Query Language (SQL) for getting information from relational databases. We make no pretense that this is a complete or comprehensive discussion of SQL. There are many aspects of the language that will be completely ignored in the presentation. The goal here is to provide enough background so that users understand the basic concepts involved in building and using relational databases. We also go through the steps involved in building a particular astronomical database used in some of the other presentations in this volume.
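The basic concepts such a chapter covers (a table, a filtered SELECT, an ORDER BY) can be tried directly with Python's built-in sqlite3 module. The toy star catalog below is invented for illustration, not taken from the chapter's database:

```python
# A minimal taste of SQL via Python's built-in sqlite3 module, using an
# invented toy star catalog (not the chapter's actual database).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stars(name TEXT, ra REAL, dec REAL, vmag REAL)")
con.executemany("INSERT INTO stars VALUES (?, ?, ?, ?)", [
    ("Sirius", 101.287, -16.716, -1.46),
    ("Vega", 279.234, 38.784, 0.03),
    ("Polaris", 37.954, 89.264, 1.98),
])

# A SELECT with a WHERE filter and ORDER BY: the everyday core of SQL
bright = con.execute(
    "SELECT name FROM stars WHERE vmag < 1.0 ORDER BY vmag"
).fetchall()
```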

  10. Preventing SQL Code Injection by Combining Static and Runtime Analysis

    DTIC Science & Technology

    2008-05-01

    attacker changes the developer's intended structure of an SQL command by inserting new SQL keywords or operators. (Su and Wassermann provide a...FROM books WHERE author = '' GROUP BY rating We use a placeholder symbol for the indeterminate part of the command (in this...dialects of SQL.) In our model, we mark transitions that correspond to externally defined strings with this symbol. To illustrate, Figure 2 shows the SQL
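The attack class this record describes works because user input is spliced into the SQL string, letting it rewrite the command's structure. A standard countermeasure, shown below with an invented table (this is not the paper's static/runtime analysis technique), is a parameterized query, where the placeholder is always treated as data:

```python
# Contrast of a structure-changing injection with a parameterized query.
# Table and data are invented; this illustrates the vulnerability class,
# not the paper's own static/runtime analysis defense.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE books(author TEXT, rating INTEGER);
INSERT INTO books VALUES ('Austen', 5), ('Orwell', 4);
""")

malicious = "' OR '1'='1"

# Unsafe: the input injects new SQL operators and matches every row
unsafe = con.execute(
    "SELECT author FROM books WHERE author = '" + malicious + "'"
).fetchall()

# Safe: the ? placeholder is bound as data, never parsed as SQL
safe = con.execute(
    "SELECT author FROM books WHERE author = ?", (malicious,)
).fetchall()
```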

  11. A method to implement fine-grained access control for personal health records through standard relational database queries.

    PubMed

    Sujansky, Walter V; Faus, Sam A; Stone, Ethan; Brennan, Patricia Flatley

    2010-10-01

    Online personal health records (PHRs) enable patients to access, manage, and share certain of their own health information electronically. This capability creates the need for precise access-control mechanisms that restrict the sharing of data to that intended by the patient. The authors describe the design and implementation of an access-control mechanism for PHR repositories that is modeled on the eXtensible Access Control Markup Language (XACML) standard, but intended to reduce the cognitive and computational complexity of XACML. The authors implemented the mechanism entirely in a relational database system using ANSI-standard SQL statements. Based on a set of access-control rules encoded as relational table rows, the mechanism determines via a single SQL query whether a user who accesses patient data from a specific application is authorized to perform a requested operation on a specified data object. Testing of this query on a moderately large database has demonstrated execution times consistently below 100 ms. The authors include the details of the implementation, including algorithms, examples, and a test database as Supplementary materials. Copyright © 2010 Elsevier Inc. All rights reserved.
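The overall shape of this approach (rules as table rows, one SQL query deciding authorization) can be sketched in miniature. The schema, rule columns, and data below are invented and much simpler than the authors' actual design:

```python
# Simplified sketch of "rules as rows, one query decides": an invented
# ACL schema, not the paper's actual implementation.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE acl(user_id TEXT, app TEXT, operation TEXT, object_type TEXT);
INSERT INTO acl VALUES
  ('patient1', 'phr_web', 'read', 'medication'),
  ('doctor7',  'phr_web', 'read', 'lab_result');
""")

def authorized(user_id, app, operation, object_type):
    """Decide authorization with a single SQL query over the rules table."""
    row = con.execute("""
        SELECT EXISTS(
            SELECT 1 FROM acl
            WHERE user_id = ? AND app = ? AND operation = ? AND object_type = ?
        )""", (user_id, app, operation, object_type)).fetchone()
    return bool(row[0])
```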

  12. Ontology-Based Data Integration between Clinical and Research Systems

    PubMed Central

    Mate, Sebastian; Köpcke, Felix; Toddenroth, Dennis; Martin, Marcus; Prokosch, Hans-Ulrich

    2015-01-01

    Data from the electronic medical record comprise numerous structured but uncoded elements, which are not linked to standard terminologies. Reuse of such data for secondary research purposes has gained in importance recently. However, the identification of relevant data elements and the creation of database jobs for extraction, transformation and loading (ETL) are challenging: With current methods such as data warehousing, it is not feasible to efficiently maintain and reuse semantically complex data extraction and transformation routines. We present an ontology-supported approach to overcome this challenge by making use of abstraction: Instead of defining ETL procedures at the database level, we use ontologies to organize and describe the medical concepts of both the source system and the target system. Instead of using unique, specifically developed SQL statements or ETL jobs, we define declarative transformation rules within ontologies and illustrate how these constructs can then be used to automatically generate SQL code to perform the desired ETL procedures. This demonstrates how a suitable level of abstraction may not only aid the interpretation of clinical data, but can also foster the reutilization of methods for unlocking it. PMID:25588043
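The core idea (declarative transformation rules that generate SQL, rather than hand-written ETL statements) can be sketched minimally. Here plain dicts stand in for the ontology constructs, and all concept, table, and column names are invented:

```python
# Minimal sketch of generating ETL SQL from declarative mapping rules.
# Dicts stand in for ontology constructs; names are invented, and this is
# far simpler than the paper's ontology-based approach.
rules = [
    {"source_table": "emr_labs",   "source_col": "glu_mgdl",
     "target_table": "research_obs", "target_col": "glucose"},
    {"source_table": "emr_vitals", "source_col": "sys_bp",
     "target_table": "research_obs", "target_col": "systolic_bp"},
]

def generate_etl_sql(rule):
    """Render one declarative transformation rule as an INSERT...SELECT."""
    return (
        f"INSERT INTO {rule['target_table']} ({rule['target_col']}) "
        f"SELECT {rule['source_col']} FROM {rule['source_table']};"
    )

statements = [generate_etl_sql(r) for r in rules]
```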

  13. Integrated Array/Metadata Analytics

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2015-04-01

    Data comes in various forms and types, and integration usually presents a problem that is often simply ignored or solved with ad-hoc solutions. Multidimensional arrays are a ubiquitous data type that we find at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc.). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive, or entirely absent, in modern relational DBMSs. Recognizing this, we extended SQL with a new SQL/MDA part, seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman that already implements SQL/MDA.

  14. Processing of the WLCG monitoring data using NoSQL

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.

  15. Integrating Scientific Array Processing into Standard SQL

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

    We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and in an extension also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.

  16. Interactive DataBase of Cosmic Ray Anisotropy (DB A10)

    NASA Astrophysics Data System (ADS)

    Asipenka, A.S.; Belov, A.V.; Eroshenko, E.F.; Klepach, E.G.; Oleneva, V.A.; Yake, V.G.

    Data on the hourly means of cosmic ray density and anisotropy derived by the GSM method over 1957-2006 have been loaded into a MySQL database. This format allows access to the data both locally and over the Internet. Using a combination of the scripting language PHP and the MySQL database, an Internet project was created that gives users access to the CR anisotropy data in different formats (http://cr20.izmiran.ru/AnisotropyCR/main.htm/). The PHP/MySQL pairing provides fast data delivery even over the Internet, since each request and the subsequent data processing are carried out on the project server. Using MySQL as the store for cosmic ray variation data makes it possible to construct requests of different structures, extends the variety of data presentation, and allows the data to be matched to other systems and used in other projects.

  17. SpliceSeq: a resource for analysis and visualization of RNA-Seq data on alternative splicing and its functional impacts

    PubMed Central

    Ryan, Michael C.; Cleland, James; Kim, RyangGuk; Wong, Wing Chung; Weinstein, John N.

    2012-01-01

    Summary: SpliceSeq is a resource for RNA-Seq data that provides a clear view of alternative splicing and identifies potential functional changes that result from splice variation. It displays intuitive visualizations and prioritized lists of results that highlight splicing events and their biological consequences. SpliceSeq unambiguously aligns reads to gene splice graphs, facilitating accurate analysis of large, complex transcript variants that cannot be adequately represented in other formats. Availability and implementation: SpliceSeq is freely available at http://bioinformatics.mdanderson.org/main/SpliceSeq:Overview. The application is a Java program that can be launched via a browser or installed locally. Local installation requires MySQL and Bowtie. Contact: mryan@insilico.us.com Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:22820202

  18. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a term named as Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing as well as a method for equivalent transformation from a global geospatial query to distributed local queries at SQL (Structured Query Language) level to solve the coordinating problem among heterogeneous resources are presented. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, thus to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
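The equivalent-transformation idea at the heart of this approach (one global query rewritten as distributed local queries whose results are merged) can be sketched in miniature. The two in-memory SQLite "peers", the `roads` table, and the threshold below are all invented for illustration:

```python
# Toy sketch of a global query transformed into equivalent local queries
# run on autonomous peers, with results merged at the coordinator.
# Peers, schema, and data are invented for illustration.
import sqlite3

def make_peer(rows):
    """Create one autonomous local geospatial resource (in-memory)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE roads(name TEXT, length_km REAL)")
    con.executemany("INSERT INTO roads VALUES (?, ?)", rows)
    return con

peers = [
    make_peer([("A1", 12.0), ("B2", 3.5)]),
    make_peer([("C3", 8.1)]),
]

# The global query "roads longer than 5 km" becomes the same local SQL
# executed on every peer; the coordinator merges the partial results.
local_sql = "SELECT name FROM roads WHERE length_km > ?"
merged = sorted(
    name for peer in peers for (name,) in peer.execute(local_sql, (5.0,))
)
```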

  19. Application of SQL database to the control system of MOIRCS

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Tomohiro; Omata, Koji; Konishi, Masahiro; Ichikawa, Takashi; Suzuki, Ryuji; Tokoku, Chihiro; Uchimoto, Yuka Katsuno; Nishimura, Tetsuo

    2006-06-01

    MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru telescope. In order to perform observations of near-infrared imaging and spectroscopy with a cold slit mask, MOIRCS contains many device components, which are distributed on an Ethernet LAN. Two PCs wired to the focal plane array electronics operate the two HAWAII2 detectors, and two other PCs are used for integrated control and quick data reduction, respectively. Though most of the devices (e.g., filter and grism turrets, slit exchange mechanism for spectroscopy) are controlled via an RS232C interface, they are accessible over TCP/IP using TCP/IP-to-RS232C converters. Moreover, other devices are also connected to the Ethernet LAN. This network-distributed structure provides flexibility of hardware configuration. We have constructed an integrated control system for such network-distributed hardware, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS also has a network-distributed software design, applying TCP/IP socket communication to interprocess communication. To mediate between the device interfaces and the user interfaces, we defined three layers in T-LECS: an external layer for user interface applications, an internal layer for device interface applications, and a communication layer, which connects the two layers above. In the communication layer, we store the system's data on an SQL database server: status data, FITS header data, and also metadata such as device configuration data and FITS configuration data. We present our software system design and the database schema to manage observations of MOIRCS with Subaru.

  20. SQL Triggers Reacting on Time Events: An Extension Proposal

    NASA Astrophysics Data System (ADS)

    Behrend, Andreas; Dorau, Christian; Manthey, Rainer

    Being able to activate triggers at timepoints reached or after time intervals elapsed has been acknowledged by many authors as a valuable functionality of a DBMS. Recently, the interest in time-based triggers has been renewed in the context of data stream monitoring. However, until now SQL triggers react to data changes only, even though research proposals and prototypes have long supported several other event types, in particular time-based ones. We therefore propose a seamless extension of the SQL trigger concept by time-based triggers, focussing on semantic issues arising from such an extension.
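For contrast with the proposal, the sketch below shows what standard SQL triggers do today: fire on a data change (here, an INSERT). The table names are invented, and the time-based triggers the abstract proposes have no counterpart in this standard syntax:

```python
# A conventional data-change trigger in SQLite, for contrast with the
# time-based triggers proposed above (which standard SQL cannot express).
# Schema and data are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE readings(sensor TEXT, value REAL);
CREATE TABLE audit(note TEXT);

-- Fires only when data changes; no "AT timepoint" or "EVERY interval"
-- event type exists in standard SQL trigger syntax.
CREATE TRIGGER log_insert AFTER INSERT ON readings
BEGIN
    INSERT INTO audit VALUES ('inserted ' || NEW.sensor);
END;
""")
con.execute("INSERT INTO readings VALUES ('t1', 21.5)")
notes = [n for (n,) in con.execute("SELECT note FROM audit")]
```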

  1. A THIRD OF PATIENTS TREATED AT A TERTIARY LEVEL SURGICAL SERVICE COULD BE TREATED AT A SECONDARY LEVEL FACILITY.

    PubMed

    Van Straten, S K; Stannard, C J; Bulabula, J; Paul, K; Leong, J; Klipin, M

    2017-06-01

    South Africa has an overburdened public healthcare system. Some admissions to Charlotte Maxeke Johannesburg Academic Hospital (CMJAH) may not require tertiary care, but the numbers and details thereof are uncertain. Clinical research is limited by skills and access to data. A retrospective analysis was performed of Electronic Discharge (ED) summaries from the Department of Surgery at CMJAH between 01 April 2015 and 01 April 2016. An SQL query of the database generated a .csv file of all discharges with the fields: database reference number, length of stay, and level of care. The details and level of care of each record were verified by MBBCh 5 medical students using a defined level-of-care template with review of the full discharge summary. The data were reviewed by a senior clinician. There were 3007 discharge summaries; 97 were not classifiable, two were test records, and one was a duplicate. These 100 records were excluded. There were no primary level records. Secondary level patients represented 29% (854) of patients discharged and 19% of total bed days. Tertiary and quaternary together represented 71% of the total patients and 81% of bed days. The average length of stay was 4.31 days for secondary, 6.98 days for tertiary, and 9.77 days for quaternary level of care allocation. Almost a third (29%) of patients discharged from the CMJAH Department of Surgery were deemed suitable for secondary level care. These admissions have a shorter length of stay and comprise 19% of total bed days. Students and electronic databases are useful research resources.

  2. Advancing Future Network Science through Content Understanding

    DTIC Science & Technology

    2014-05-01

    BitTorrent, PostgreSQL, MySQL, and GRSecurity) and emerging technologies (HadoopDFS, Tokutera, Sector/Sphere, HBase, and other BigTable-like...result. • Multi-Source Network Pulse Analyzer and Correlator provides course of action planning by enhancing the understanding of the complex dynamics

  3. Streamlining the Process of Acquiring Secure Open Architecture Software Systems

    DTIC Science & Technology

    2013-10-08

    Microsoft.NET, Enterprise Java Beans, GNU Lesser General Public License (LGPL) libraries, and data communication protocols like the Hypertext Transfer...NetBeans development environments), customer relationship management (SugarCRM), database management systems (PostgreSQL, MySQL), operating

  4. Trends and New Directions in Software Architecture

    DTIC Science & Technology

    2014-10-10

    frameworks  Open source  Cloud strategies  NoSQL  Machine Learning  MDD  Incremental approaches  Dashboards  Distributed development...complexity grows  NoSQL Models are not created equal 2014 Our Current Research  Lightweight Evaluation and Architecture Prototyping for Big Data

  5. Benchmarking database performance for genomic data.

    PubMed

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, no comprehensive database built-in algorithm currently exists to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1). © 2015 Wiley Periodicals, Inc.
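The overlap test at the heart of such region-mapping queries is simple: two regions on the same chromosome overlap when each starts before the other ends. The sketch below expresses it as a SQL self-join over invented coordinates, using SQLite rather than the benchmarked PostgreSQL/MySQL, and is not the RegMap algorithm itself:

```python
# Interval-overlap as a SQL self-join: a.start < b.end AND b.start < a.end.
# Invented coordinates in plain SQLite; not the RegMap algorithm itself.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE peaks(id INTEGER, chrom TEXT, start INTEGER, end_ INTEGER);
INSERT INTO peaks VALUES
  (1, 'chr1', 100, 200),
  (2, 'chr1', 150, 250),
  (3, 'chr1', 300, 400);
""")
overlaps = con.execute("""
    SELECT a.id, b.id FROM peaks a JOIN peaks b
    ON a.chrom = b.chrom AND a.id < b.id
       AND a.start < b.end_ AND b.start < a.end_
""").fetchall()
```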

  6. Exploring Large Scale Data Analysis and Visualization for ARM Data Discovery Using NoSQL Technologies

    NASA Astrophysics Data System (ADS)

    Krishna, B.; Gustafson, W. I., Jr.; Vogelmann, A. M.; Toto, T.; Devarakonda, R.; Palanisamy, G.

    2016-12-01

    This paper presents a new way of providing ARM data discovery through data analysis and visualization services. ARM stands for Atmospheric Radiation Measurement. This program was created to study cloud formation processes and their influence on radiative transfer, and also includes additional measurements of aerosol and precipitation at various highly instrumented ground and mobile stations. The total volume of ARM data is roughly 900 TB. The current search for ARM data is performed using its metadata, such as the site name, instrument name, date, etc. NoSQL technologies were explored to improve the capabilities of data searching, not only by metadata, but also by using the measurement values. Two technologies currently being implemented for testing are Apache Cassandra (a NoSQL database) and Apache Spark (a NoSQL-based analytics framework). Both of these technologies were developed to work in a distributed environment and hence can handle large data for storage and analytics. D3.js is a JavaScript library that can generate interactive data visualizations in web browsers by making use of the commonly used SVG, HTML5, and CSS standards. To test the performance of NoSQL for ARM data, we will be using ARM's popular measurements to locate the data based on their values. Recently, NoSQL technology has been applied to a pilot project called LASSO, which stands for LES ARM Symbiotic Simulation and Observation Workflow. LASSO will be packaging LES output and observations in "data bundles", and analyses will require the ability for users to analyze both observations and LES model output, either individually or together, across multiple time periods. The LASSO implementation strategy suggests that enormous data storage is required to store the above-mentioned quantities. Thus NoSQL was used to provide a powerful means to store portions of the data, giving users search capabilities on each simulation's traits through a web application. 
Based on the user selection, plots are created dynamically along with ancillary information that enables the user to locate and download data that fulfilled their required traits.
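The value-based search described above (locating data by measurement values rather than by metadata alone) can be sketched in plain Python; the record layout and field names below are invented for illustration and do not reflect ARM's actual schema.

```python
# Hypothetical sketch: locating ARM-style records by a measurement value
# rather than by metadata alone. Fields and values are illustrative only.

records = [
    {"site": "SGP", "instrument": "MET", "time": "2016-06-01T00:00Z", "precip_mm": 0.0},
    {"site": "SGP", "instrument": "MET", "time": "2016-06-01T01:00Z", "precip_mm": 7.4},
    {"site": "NSA", "instrument": "MET", "time": "2016-06-01T01:00Z", "precip_mm": 1.2},
]

def search_by_value(rows, field, low, high):
    """Return rows whose measurement value falls inside [low, high]."""
    return [r for r in rows if low <= r[field] <= high]

hits = search_by_value(records, "precip_mm", 1.0, 10.0)
print([r["site"] for r in hits])
```

In a Cassandra or Spark deployment the same filter would run distributed across the cluster; the point here is only the query shape.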

  7. An End-to-End System to Enable Quick, Easy and Inexpensive Deployment of Hydrometeorological Stations

    NASA Astrophysics Data System (ADS)

    Celicourt, P.; Piasecki, M.

    2014-12-01

The high cost of hydro-meteorological data acquisition, communication and publication systems, along with limited qualified human resources, is considered the main reason why hydro-meteorological data collection remains a challenge, especially in developing countries. Despite significant advances in sensor network technologies over the last two decades, which gave birth to open hardware and software and to low-cost (less than $50), low-power (on the order of a few milliwatts) sensor platforms, sensor and sensor network deployment remains a labor-intensive, time-consuming, cumbersome, and thus expensive task. These factors give rise to the need for an affordable, simple-to-deploy, scalable and self-organizing end-to-end (from sensor to publication) system suitable for deployment in such countries. The envisioned system will consist of a few Sensed-And-Programmed Arduino-based sensor nodes with low-cost sensors measuring parameters relevant to hydrological processes, and a Raspberry Pi micro-computer hosting the in-the-field back-end data management. The latter comprises the Python/Django model of the CUAHSI Observations Data Model (ODM), namely DjangODM, backed by a PostgreSQL database server. We are also developing a Python-based data processing script which will be paired with the data autoloading capability of Django to populate the DjangODM database with the incoming data. To publish the data, we will use WOFpy (WaterOneFlow Web Services in Python), developed by the Texas Water Development Board for 'Water Data for Texas', which can produce WaterML web services from a variety of back-end database installations such as SQLite, MySQL, and PostgreSQL. A step further would be the development of an appealing online visualization tool using Python statistics and analytics tools (SciPy, NumPy, Pandas) showing the spatial distribution of variables across an entire watershed as a time-variant layer on top of a basemap.
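The ODM idea the abstract relies on (every observation as a value tied to a time, a site and a variable) can be sketched as a small ingest step; the field names follow the CUAHSI ODM only loosely, and the line format below is an assumption for illustration, not the actual DjangODM model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hedged sketch of the ODM "DataValues" idea: each observation is a
# (value, timestamp, site, variable) record. The CSV line format is invented.

@dataclass
class DataValue:
    value: float
    utc_time: datetime
    site_code: str
    variable_code: str

def ingest(raw_lines):
    """Parse 'site,variable,iso_time,value' lines into DataValue records."""
    out = []
    for line in raw_lines:
        site, var, ts, val = line.strip().split(",")
        out.append(DataValue(float(val),
                             datetime.fromisoformat(ts).replace(tzinfo=timezone.utc),
                             site, var))
    return out

rows = ingest(["ST01,RAINFALL,2014-07-01T12:00:00,2.5"])
print(rows[0].value)
```

A data-autoloading script like the one described would run something of this shape before handing records to the ORM.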

  8. CMS users data management service integration and first experiences with its NoSQL data storage

    NASA Astrophysics Data System (ADS)

    Riahi, H.; Spiga, D.; Boccali, T.; Ciangottini, D.; Cinquilli, M.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Santocchia, A.

    2014-06-01

The distributed data analysis workflow in CMS assumes that jobs run in a different location from where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This service was originally developed to address the inefficient use of CMS computing resources caused by transferring analysis job outputs synchronously to the remote site as soon as they are produced on the job execution node. The AsyncStageOut is designed as a thin application relying only on a NoSQL database (CouchDB) for input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all stages of the users' files, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/ATLAS Analysis Framework. It foresees the management of nearly 200k user files per day from close to 1000 individual users per month with minimal delays, while providing real-time monitoring and reports to users and service operators and remaining highly available. The associated data volume represents a new set of challenges in the areas of database scalability and service performance and efficiency. In this paper, we present an overview of the AsyncStageOut model and the integration strategy with the Common Analysis Framework. The motivations for using NoSQL technology are also presented, as well as the data design and the techniques used for efficient indexing and monitoring of the data. We describe the deployment model for the high availability and scalability of the service. We also discuss the hardware requirements and the results achieved, as determined by testing with actual data and realistic loads during the commissioning and initial production phase with the Common Analysis Framework.
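A service like this typically represents each user file as a document whose state advances through the transfer-and-publication steps. The document fields and state names below are invented for illustration; they are not the actual AsyncStageOut CouchDB schema.

```python
# Illustrative sketch (not the real AsyncStageOut schema): a transfer task as
# a JSON-style document with a small state machine over its lifecycle.

VALID = {"new": {"acquired"}, "acquired": {"done", "failed"},
         "failed": {"acquired"}, "done": set()}

def advance(doc, new_state):
    """Move a transfer document to new_state if the transition is allowed."""
    if new_state not in VALID[doc["state"]]:
        raise ValueError(f"illegal transition {doc['state']} -> {new_state}")
    doc["state"] = new_state
    return doc

task = {"_id": "user123:file42.root", "source": "T2_IT_Pisa",
        "destination": "T2_ES_CIEMAT", "state": "new"}
advance(task, "acquired")
advance(task, "done")
print(task["state"])
```

Storing the state inside the document itself is what lets a document database serve as both the input queue and the monitoring source.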

  9. A searching and reporting system for relational databases using a graph-based metadata representation.

    PubMed

    Hewitt, Robin; Gobbi, Alberto; Lee, Man-Ling

    2005-01-01

    Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.
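The core of such a system (a graph of foreign-key links walked to construct SQL joins automatically) can be sketched briefly. The tables, columns and join conditions below are invented; the actual Anadys metadata representation is not described in enough detail here to reproduce.

```python
from collections import deque

# Hedged sketch: represent foreign-key links between tables as a graph,
# find a path between two tables, and emit the corresponding SQL joins.

FK = {  # table -> {neighbor: join condition}; schema is illustrative
    "compound": {"assay_result": "compound.id = assay_result.compound_id"},
    "assay_result": {"compound": "compound.id = assay_result.compound_id",
                     "assay": "assay.id = assay_result.assay_id"},
    "assay": {"assay_result": "assay.id = assay_result.assay_id"},
}

def join_path(start, goal):
    """Breadth-first search over the schema graph for a join path."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in FK[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def build_sql(start, goal, columns):
    """Turn the join path into a SELECT statement."""
    path = join_path(start, goal)
    sql = f"SELECT {', '.join(columns)} FROM {path[0]}"
    for a, b in zip(path, path[1:]):
        sql += f" JOIN {b} ON {FK[a][b]}"
    return sql

print(build_sql("compound", "assay", ["compound.name", "assay.name"]))
```

This is exactly the kind of SQL construction that frees users from writing queries by hand: they pick tables and columns, and the path through the metadata graph supplies the joins.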

  10. A Web-based Examination System Based on PHP+MySQL.

    PubMed

    Wen, Ji; Zhang, Yang; Yan, Yong; Xia, Shunren

    2005-01-01

The design and implementation of a web-based examination system constructed with PHP and MySQL is presented in this paper. Three primary parts, for students, teachers and administrators, are introduced and analyzed in detail. Initial application has demonstrated the system's feasibility and reasonability.

  11. Processing Solutions for Big Data in Astronomy

    NASA Astrophysics Data System (ADS)

    Fillatre, L.; Lepiller, D.

    2016-09-01

    This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.

  12. An effective model for store and retrieve big health data in cloud computing.

    PubMed

    Goli-Malekabadi, Zohreh; Sargolzaei-Javan, Morteza; Akbari, Mohammad Kazem

    2016-08-01

The volume of healthcare data, including different and variable text types, sounds, and images, is increasing day by day. Therefore, the storage and processing of these data is a necessary and challenging issue. Generally, relational databases are used for storing health data, but they are not able to handle its massive and diverse nature. This study aimed at presenting a model based on NoSQL databases for the storage of healthcare data. Among the different types of NoSQL databases, document-based DBs were selected following a survey of the nature of health data. The presented model was implemented in a Cloud environment to obtain its distribution properties. Then, the data were distributed over the database by applying sharding. The efficiency of the model was evaluated in comparison with the previous data model, a relational database, considering query time, data preparation, flexibility, and extensibility parameters. The results showed that the presented model performed approximately the same as SQL Server for "read" queries while it acted more efficiently than SQL Server for "write" queries. Also, the performance of the presented model was better than SQL Server in terms of flexibility, data preparation and extensibility. Based on these observations, the proposed model was more effective than relational databases for handling health data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
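The sharding step mentioned above can be sketched as shard-key routing: a key on each document deterministically selects a shard. In a real document store such as MongoDB the server performs this routing; the hash scheme, shard names and document fields below are assumptions for illustration.

```python
import hashlib

# Minimal sketch of hashed sharding: a shard key (here, patient_id)
# deterministically routes each health-record document to one shard.
# Shard names and fields are invented for illustration.

SHARDS = ["shard0", "shard1", "shard2"]

def shard_for(patient_id: str) -> str:
    """Pick a shard by hashing the shard key (stable across runs)."""
    h = int(hashlib.md5(patient_id.encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]

doc = {"patient_id": "P-0001", "type": "mri", "note": "follow-up scan"}
target = shard_for(doc["patient_id"])
print(target in SHARDS)
```

Because the routing is deterministic, all documents for one patient land on the same shard, which is what makes distributed "write" paths efficient.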

  13. Evaluation of NoSQL databases for DIRAC monitoring and beyond

    NASA Astrophysics Data System (ADS)

    Mathe, Z.; Casajus Ramo, A.; Stagni, F.; Tomassetti, L.

    2015-12-01

Nowadays, many database systems are available, but they may not be optimized for storing time series data. Monitoring DIRAC jobs would be better done using a database optimized for storing time series data. So far this has been done using a MySQL database, which is not well suited for such an application. Therefore alternatives have been investigated. Choosing an appropriate database for storing huge amounts of time series data is not trivial, as one must take into account different aspects such as manageability, scalability and extensibility. We compared the performance of the Elasticsearch, OpenTSDB (based on HBase) and InfluxDB NoSQL databases, using the same set of machines and the same data. We also evaluated the effort required to maintain them. Using the LHCb Workload Management System (WMS), based on DIRAC, as a use case, we set up a new monitoring system in parallel with the current MySQL system and stored the same data in the databases under test. We evaluated the Grafana (for OpenTSDB) and Kibana (for Elasticsearch) metrics and graph editors for creating dashboards, in order to have a clear picture of the usability of each candidate. In this paper we present the results of this study and the performance of the selected technology. We also give an outlook of other potential applications of NoSQL databases within the DIRAC project.
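What makes a store "optimized for time series" is largely its handling of bucketed aggregation: monitoring queries ask for a metric aggregated over fixed time windows. A minimal sketch of that access pattern, with invented sample data (seconds, running-job counts):

```python
from collections import defaultdict

# Sketch of the time-series access pattern TSDBs such as OpenTSDB or
# InfluxDB optimize for: aggregate a metric over fixed time buckets.

points = [(0, 5.0), (30, 7.0), (61, 1.0), (95, 3.0), (130, 4.0)]  # (sec, jobs)

def downsample(series, bucket_seconds, agg=max):
    """Group points into fixed-width buckets and aggregate each bucket."""
    buckets = defaultdict(list)
    for t, v in series:
        buckets[t // bucket_seconds * bucket_seconds].append(v)
    return {t: agg(vs) for t, vs in sorted(buckets.items())}

print(downsample(points, 60))
```

A general-purpose relational engine can express this with GROUP BY, but time-series stores lay data out on disk so that such scans touch only contiguous ranges.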

  14. BioImaging Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Nix, Lisa Simirenko

    2006-10-25

The BioImaging Database (BID) is a relational database developed to store the data and meta-data for 3D gene expression in early Drosophila embryo development at a cellular level. The schema was written to be used with the MySQL DBMS but, with minor modifications, can be used on any SQL-compliant relational DBMS.

  15. The SQL Server Database for Non Computer Professional Teaching Reform

    ERIC Educational Resources Information Center

    Liu, Xiangwei

    2012-01-01

This article summarizes the teaching methods of the SQL Server database course for non-computer majors and analyzes the current situation of the course. According to the teaching characteristics of the non-computer professional curriculum, it puts forward some teaching reform methods and puts them into practice, improving the students' analysis ability, practice ability and…

  16. Implementing CBM: SQL-Tutor after Fifteen Years

    ERIC Educational Resources Information Center

    Mitrovic, Antonija; Ohlsson, Stellan

    2016-01-01

    SQL-Tutor is the first constraint-based tutor. The initial conference papers about the system were published in 1998 (Mitrovic 1998a, 1998b, 1998c), with an "IJAIED" paper published in 1999 (Mitrovic and Ohlsson, "International Journal Artificial Intelligence in Education," 10(3-4), 238-256, 1999). We published another…

  17. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and of applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure the flexibility to choose a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the ElasticSearch engine via a custom Logstash plugin. Each use-case is indexed separately in ElasticSearch, and we set up Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through a RESTful web service developed ad hoc. Moreover, we have developed a billing system for our private Cloud, which relies on the RabbitMQ message queue for asynchronous communication with the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the ROOT plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BESIII virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools. At present, we are working to define a model for monitoring-as-a-service, based on the tools described above, which the Cloud tenants can easily configure to suit their specific needs.
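The custom-Logstash-plugin step described above amounts to reshaping an accounting row from the relational back-end into a JSON document for Elasticsearch. A minimal sketch of that transform, with invented field names (the actual accounting schema is not given here):

```python
import json
from datetime import datetime, timezone

# Hedged sketch of a Logstash-style transform: a MySQL accounting row is
# flattened into an Elasticsearch-style JSON document. Fields are invented.

def row_to_doc(row):
    """Reshape an accounting row into an ES-indexable document."""
    return {
        "@timestamp": datetime.fromtimestamp(row["ts"], tz=timezone.utc).isoformat(),
        "tenant": row["tenant"],
        "metric": {"name": row["metric"], "value": row["value"]},
    }

row = {"ts": 1420070400, "tenant": "alice-tier2", "metric": "cpu_hours", "value": 12.5}
print(json.dumps(row_to_doc(row), sort_keys=True))
```

Keeping MySQL as the system of record and deriving the ES documents from it is what preserves the freedom, noted above, to swap in a different monitoring solution later.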

  18. Depth-area-duration characteristics of storm rainfall in Texas using Multi-Sensor Precipitation Estimates

    NASA Astrophysics Data System (ADS)

    McEnery, J. A.; Jitkajornwanich, K.

    2012-12-01

This presentation will describe the methodology and overall system development by which a benchmark dataset of precipitation information has been used to characterize the depth-area-duration relations in heavy rain storms occurring over regions of Texas. Over the past two years project investigators, along with the National Weather Service (NWS) West Gulf River Forecast Center (WGRFC), have developed and operated a gateway data system to ingest, store, and disseminate NWS multi-sensor precipitation estimates (MPE). As a pilot project of the Integrated Water Resources Science and Services (IWRSS) initiative, this testbed uses a Structured Query Language (SQL) server to maintain a full archive of current and historic MPE values within the WGRFC service area. These time series values are made available for public access as web services in the standard WaterML format. Having this volume of information maintained in a comprehensive database now allows the use of relational analysis capabilities within SQL to leverage these multi-sensor precipitation values and produce a valuable derivative product. The area of focus for this study is North Texas, using values that originated from the West Gulf River Forecast Center (WGRFC), one of three River Forecast Centers currently represented in the holdings of this data system. Over the past two decades, NEXRAD radar has dramatically improved the ability to record rainfall. The resulting hourly MPE values, distributed over an approximately 4 km by 4 km grid, are considered by the NWS to be the "best estimate" of rainfall. The data server provides an accepted standard interface for internet access to the largest time-series dataset of NEXRAD-based MPE values ever assembled. An automated script has been written to search for and extract storms over the 18-year period of record from the contents of this massive historical precipitation database.
Not only can it extract site-specific storms, but also duration-specific storms and storms separated by user defined inter-event periods. A separate storm database has been created to store the selected output. By storing output within tables in a separate database, we can make use of powerful SQL capabilities to perform flexible pattern analysis. Previous efforts have made use of historic data from limited clusters of irregularly spaced physical gauges. Spatial extent of the observational network has been a limiting factor. The relatively dense distribution of MPE provides a virtual mesh of observations stretched over the landscape. This work combines a unique hydrologic data resource with programming and database analysis to characterize storm depth-area-duration relationships.
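The kind of relational storm-extraction query described above can be sketched with SQLite standing in for SQL Server; the table layout, cell identifiers and depth threshold below are invented for illustration, not the project's actual schema.

```python
import sqlite3

# Sketch (SQLite in place of SQL Server) of pulling storm events out of an
# hourly MPE archive: find grid cells whose hourly depth exceeds a threshold.

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mpe (cell_id TEXT, obs_hour TEXT, depth_mm REAL)")
con.executemany("INSERT INTO mpe VALUES (?, ?, ?)", [
    ("X100Y200", "2010-06-01T01:00", 2.0),
    ("X100Y200", "2010-06-01T02:00", 18.5),
    ("X101Y200", "2010-06-01T02:00", 25.1),
])
rows = con.execute(
    "SELECT cell_id, MAX(depth_mm) FROM mpe "
    "WHERE depth_mm > ? GROUP BY cell_id ORDER BY cell_id", (10.0,)
).fetchall()
print(rows)
```

An extraction script would wrap queries of this shape in loops over durations and inter-event gaps, writing the qualifying events into the separate storm database.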

  19. MiSNPDb: a web-based genomic resources of tropical ecology fruit mango (Mangifera indica L.) for phylogeography and varietal differentiation.

    PubMed

    Iquebal, M A; Jaiswal, Sarika; Mahato, Ajay Kumar; Jayaswal, Pawan K; Angadi, U B; Kumar, Neeraj; Sharma, Nimisha; Singh, Anand K; Srivastav, Manish; Prakash, Jai; Singh, S K; Khan, Kasim; Mishra, Rupesh K; Rajan, Shailendra; Bajpai, Anju; Sandhya, B S; Nischita, Puttaraju; Ravishankar, K V; Dinesh, M R; Rai, Anil; Kumar, Dinesh; Sharma, Tilak R; Singh, Nagendra K

    2017-11-02

Mango is one of the most important fruits of the tropical ecological region of the world, well known for its nutritive value, aroma and taste. Its world production is >45 MT, worth >200 billion US dollars. Genomic resources are required for improvement in productivity and management of mango germplasm, but no web-based genomic resource is available for mango. Hence rapid and cost-effective high-throughput discovery of putative markers is required to develop such a resource. RAD-based marker discovery can cater to this urgent need until the whole genome sequence of mango becomes available. Using a panel of 84 mango varieties, a total of 28.6 Gb of data was generated by a ddRAD-Seq approach on the Illumina HiSeq 2000 platform. A total of 1.25 million SNPs were discovered. A phylogenetic tree using 749 SNPs common across these varieties revealed three major lineages, which were compared with geographical locations. A web genomic resource, MiSNPDb, available at http://webtom.cabgrid.res.in/mangosnps/, is based on a 3-tier architecture developed using PHP, MySQL and JavaScript. This web genomic resource can be of immense use in the development of high-density linkage maps, QTL discovery, varietal differentiation, traceability, genome finishing and SNP chip development for future GWAS in genomic selection programs. We report here the world's first web-based genomic resource for genetic improvement and germplasm management of mango.

  20. Building a Diabetes Registry from the Veterans Health Administration's Computerized Patient Record System

    PubMed Central

    F. O. Kern, Elizabeth; Beischel, Scott; Stalnaker, Randal; Aron, David C.; Kirsh, Susan R.; Watts, Sharon A.

    2008-01-01

Background: Little information is available describing how to implement a disease registry from an electronic patient record system. The aim of this report is to describe the technology, methods, and utility of a diabetes registry populated by the Veterans Health Information Systems and Technology Architecture (VistA), which underlies the computerized patient record system of the Veterans Health Administration (VHA) in Veterans Affairs Integrated Service Network 10 (VISN 10). Methods: VISN 10 data from VistA were mapped to a relational SQL-based data system using KB_SQL software. Operational definitions for diabetes, active clinical management, and responsible providers were used to create views of patient-level data in the diabetes registry. Query Analyzer was used to access the data views directly. Semicustomizable reports were created by linking the diabetes registry to a Web page using Microsoft ASP.NET 2.0. A retrospective observational study design was used to analyze trends in the process of care and outcomes. Results: Since October 2001, 81,227 patients with diabetes have enrolled in VISN 10: approximately 42,000 are currently under active management by VISN 10 providers. By tracking primary care visits, we assigned 91% to a clinic group responsible for diabetes care. In the Cleveland Veterans Affairs Medical Center (VAMC), the frequency of mean annual hemoglobin A1c levels ≥9% has declined significantly over 5 years. Almost 4000 patients have been seen in diabetes intervention programs in the Cleveland VAMC over the past 4 years. Conclusions: A diabetes registry can be populated from the database underlying the VHA electronic patient record system and linked to Web-based and ad hoc queries useful for quality improvement. PMID:19885172
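The "views of patient-level data" built from operational definitions can be sketched with SQLite standing in for the SQL Server back-end; the schema, patient identifiers and the HbA1c ≥9% cutoff used below are illustrative (the cutoff is taken from the abstract, the schema is invented).

```python
import sqlite3

# Hedged sketch: an operational definition of poor glycemic control
# (mean annual HbA1c >= 9%) expressed as a SQL view. Not the VISN 10 schema.

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a1c (patient_id TEXT, year INT, a1c_pct REAL)")
con.executemany("INSERT INTO a1c VALUES (?, ?, ?)", [
    ("p1", 2005, 9.8), ("p1", 2005, 9.0), ("p2", 2005, 7.1),
])
con.execute("""CREATE VIEW poor_control AS
               SELECT patient_id, year, AVG(a1c_pct) AS mean_a1c
               FROM a1c GROUP BY patient_id, year
               HAVING mean_a1c >= 9.0""")
flagged = [r[0] for r in con.execute("SELECT patient_id FROM poor_control")]
print(flagged)
```

Encoding the clinical definition in a view is what lets both ad hoc queries and the semicustomizable web reports share one authoritative rule.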

  1. CheD: chemical database compilation tool, Internet server, and client for SQL servers.

    PubMed

    Trepalin, S V; Yarkov, A V

    2001-01-01

An efficient program, which runs on a personal computer, for the storage, retrieval, and processing of chemical information is presented. The program can work both as a stand-alone application and in conjunction with a specifically written Web server application or with some standard SQL servers, e.g., Oracle, InterBase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features an easy and user-friendly installation on Windows NT or 95 platforms.

  2. Agentless Cloud-Wide Monitoring of Virtual Disk State

    DTIC Science & Technology

    2015-10-01

packages include Apache, MySQL, PHP, Ruby on Rails, Java Application Servers, and many others. Figure 2.12 shows the results of a run of the Software...Linux, Apache, MySQL, PHP (LAMP) set of applications. Thus, many file-level update logs will contain the same versions of files repeated across many

  3. Comparing IndexedHBase and Riak for Serving Truthy: Performance of Data Loading and Query Evaluation

    DTIC Science & Technology

    2013-08-01

Research Triangle Park, NC 27709-2211 15. SUBJECT TERMS performance evaluation, distributed database, NoSQL, HBase, indexing Xiaoming Gao, Judy Qiu...common hashtags created during a given time window. With the purpose of finding a solution for these challenges, we evaluate NoSQL databases such as

  4. Open, Cross Platform Chemistry Application Unifying Structure Manipulation, External Tools, Databases and Visualization

    DTIC Science & Technology

    2012-11-27

with powerful analysis tools and an informatics approach leveraging best-of-breed NoSQL databases, in order to store, search and retrieve relevant...dictionaries, and JavaScript also has good support. The MongoDB project[15] was chosen as a scalable NoSQL data store for the cheminformatics components

  5. An Experimental Investigation of Complexity in Database Query Formulation Tasks

    ERIC Educational Resources Information Center

    Casterella, Gretchen Irwin; Vijayasarathy, Leo

    2013-01-01

    Information Technology professionals and other knowledge workers rely on their ability to extract data from organizational databases to respond to business questions and support decision making. Structured query language (SQL) is the standard programming language for querying data in relational databases, and SQL skills are in high demand and are…

  6. Knowledge Query Language (KQL)

    DTIC Science & Technology

    2016-02-12

Lexington, Massachusetts. EXECUTIVE SUMMARY: Currently, queries for data ...retrieval from non-Structured Query Language (NoSQL) data stores are tightly coupled to the specific implementation of the data store...independent of the storage content and format for querying NoSQL or relational data stores. This approach uses address expressions (or A-Expressions

  7. Research of GIS-services applicability for solution of spatial analysis tasks.

    NASA Astrophysics Data System (ADS)

    Terekhin, D. A.; Botygin, I. A.; Sherstneva, A. I.; Sherstnev, V. S.

    2017-01-01

Experiments to determine the areas of applicability of various GIS services for spatial analysis tasks are discussed in this paper. Google Maps, Yandex Maps and Microsoft SQL Server are used as spatial analysis services. All services showed comparable speed in analyzing spatial data when executing elementary spatial queries (building the buffer zone of a point object), while Microsoft SQL Server was preferable for more complicated spatial queries. For elementary spatial queries, the internet services show higher efficiency owing to client-side data handling in JavaScript subprograms. A weak point of the public internet services is the impossibility of handling data on the server side and their limited set of spatial analysis functions. Microsoft SQL Server offers a large set of functions for spatial analysis on the server side. The authors conclude that, when solving practical problems, the internet services' capabilities for building routes and similar functions should be combined with the spatial analysis capabilities of Microsoft SQL Server.
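The elementary "buffer zone of a point object" query mentioned above reduces, on the client side, to a distance-against-radius check. The sketch below uses planar coordinates (projected metres) for simplicity; real GIS services compute geodesic distances, and all values are invented.

```python
import math

# Sketch of a client-side buffer-zone test: is a point within radius_m of a
# center? Planar distance only; coordinates are assumed projected metres.

def in_buffer(center, point, radius_m):
    """True if `point` lies inside the buffer of `center` with given radius."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    return math.hypot(dx, dy) <= radius_m

center = (1000.0, 2000.0)
print(in_buffer(center, (1300.0, 2400.0), 600.0))  # point is 500 m away
print(in_buffer(center, (1300.0, 2400.0), 400.0))
```

A server-side engine such as SQL Server's spatial extension evaluates the equivalent predicate with spatial indexes, which is what pays off for the more complicated queries.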

  8. 2MASS Catalog Server Kit Version 2.1

    NASA Astrophysics Data System (ADS)

    Yamauchi, C.

    2013-10-01

The 2MASS Catalog Server Kit is open source software for easily constructing a high-performance search server for important astronomical catalogs. This software utilizes the open source RDBMS PostgreSQL; therefore, any user can set up the database on a local computer by following the step-by-step installation guide. The kit provides highly optimized stored functions for positional searches, similar to those of SDSS SkyServer. Together with these, the powerful SQL environment of PostgreSQL will meet various users' demands. We released 2MASS Catalog Server Kit version 2.1 in 2012 May, which supports the latest WISE All-Sky catalog (563,921,584 rows) and 9 major all-sky catalogs. Local databases are often indispensable for observatories with unstable or narrow-band networks or heavy use, such as retrieving large numbers of records within a small period of time. This software is well suited to such purposes, and the added catalogs and improvements of version 2.1 cover a wider range of applications, including advanced calibration systems, scientific studies using complicated SQL queries, etc. Official page: http://www.ir.isas.jaxa.jp/~cyamauch/2masskit/
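The positional (cone) search such a server optimizes keeps all sources within an angular radius of a target position. The kit implements this with indexed stored functions in PostgreSQL; the sketch below shows only the underlying geometry, with an invented toy catalog.

```python
import math

# Hedged sketch of a cone search: angular separation via the spherical law
# of cosines, then a radius filter. Catalog entries are invented.

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation on the sky, in degrees."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    c = (math.sin(d1) * math.sin(d2)
         + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(min(1.0, max(-1.0, c))))  # clamp rounding

catalog = [("srcA", 10.00, 20.00), ("srcB", 10.02, 20.01), ("srcC", 15.00, 20.00)]
hits = [n for n, ra, dec in catalog if ang_sep_deg(ra, dec, 10.0, 20.0) < 0.1]
print(hits)
```

Scanning half a billion rows this way would be hopeless; the server-side stored functions exist precisely to replace the linear scan with an index lookup.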

  9. The Analysis of Renewable Energy Researches in Turkey

    NASA Astrophysics Data System (ADS)

    Tan, S. O.; Toku, T.; Türker, İ.

    2016-11-01

The rapid consumption of limited conventional energy resources is mobilizing many countries of the world against a global energy crisis. Besides the crisis itself, the environmental pollution caused by existing energy sources also encourages researchers to study new energy technologies and renewable energy resources. From this point of view, it is important for each country to identify its wind, solar, geothermal, biomass, hydro and other renewable energy potentials. Considering this urgent energy requirement, research, and especially academic study, of renewable energy resources has increased in order to meet energy demand by means of each country's indigenous resources. Consequently, the main purpose of this study is to analyze the academic studies in Turkey to find the growth rate of research, the publication years, and the most-studied branches of renewable energy, by illustrating the statistical distribution of these data. Automated data retrieval methods were employed to obtain data from the Web of Science database, and statistical analyses were made with SQL Server Management Studio. Academic studies in all areas of renewable energy show a tendency to increase, which indicates the importance of renewable energy in Turkey.

  10. Use of Graph Database for the Integration of Heterogeneous Biological Data.

    PubMed

    Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young

    2017-03-01

    Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
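The reason traversal queries favor a graph store can be made concrete: with an adjacency structure, "everything within two hops of a drug" is a breadth-first walk rather than a chain of JOINs. The tiny drug-protein-gene network below is invented for illustration.

```python
from collections import deque

# Illustrative sketch: a multi-hop relationship query as graph traversal.
# The network below is invented; real data would come from the sources cited.

edges = {
    "drugX": ["proteinA"],
    "proteinA": ["proteinB", "gene1"],
    "proteinB": ["gene2"],
    "gene1": [], "gene2": [],
}

def within_hops(start, max_hops):
    """Breadth-first search: all nodes reachable in at most max_hops steps."""
    seen, queue, reached = {start}, deque([(start, 0)]), set()
    while queue:
        node, d = queue.popleft()
        if d == max_hops:
            continue
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                reached.add(nxt)
                queue.append((nxt, d + 1))
    return reached

print(sorted(within_hops("drugX", 2)))
```

In SQL each additional hop costs another JOIN (and often a self-join on the same relationship table), which is exactly where the authors observed MySQL's latency growing; the traversal above does constant work per edge regardless of depth.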

  11. Use of Graph Database for the Integration of Heterogeneous Biological Data

    PubMed Central

    Yoon, Byoung-Ha; Kim, Seon-Kyu

    2017-01-01

    Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data. PMID:28416946

  12. Ensembl regulation resources

    PubMed Central

    Zerbino, Daniel R.; Johnson, Nathan; Juetteman, Thomas; Sheppard, Dan; Wilder, Steven P.; Lavidas, Ilias; Nuhn, Michael; Perry, Emily; Raffaillac-Desfosses, Quentin; Sobral, Daniel; Keefe, Damian; Gräf, Stefan; Ahmed, Ikhlak; Kinsella, Rhoda; Pritchard, Bethan; Brent, Simon; Amode, Ridwan; Parker, Anne; Trevanion, Steven; Birney, Ewan; Dunham, Ian; Flicek, Paul

    2016-01-01

    New experimental techniques in epigenomics allow researchers to assay a diversity of highly dynamic features such as histone marks, DNA modifications or chromatin structure. The study of their fluctuations should provide insights into gene expression regulation, cell differentiation and disease. The Ensembl project collects and maintains the Ensembl regulation data resources on epigenetic marks, transcription factor binding and DNA methylation for human and mouse, as well as microarray probe mappings and annotations for a variety of chordate genomes. From this data, we produce a functional annotation of the regulatory elements along the human and mouse genomes with plans to expand to other species as data becomes available. Starting from well-studied cell lines, we will progressively expand our library of measurements to a greater variety of samples. Ensembl’s regulation resources provide a central and easy-to-query repository for reference epigenomes. As with all Ensembl data, it is freely available at http://www.ensembl.org, from the Perl and REST APIs and from the public Ensembl MySQL database server at ensembldb.ensembl.org. Database URL: http://www.ensembl.org PMID:26888907

  13. Data Quality Control and Maintenance for the Qweak Experiment

    NASA Astrophysics Data System (ADS)

    Heiner, Nicholas; Spayde, Damon

    2014-03-01

    The Qweak collaboration seeks to quantify the weak charge of a proton through the analysis of the parity-violating electron asymmetry in elastic electron-proton scattering. The asymmetry is calculated by measuring how many electrons deflect from a hydrogen target at the chosen scattering angle for aligned and anti-aligned electron spins, then evaluating the difference between the numbers of deflections that occurred for both polarization states. The weak charge can then be extracted from this data. Knowing the weak charge will allow us to calculate the electroweak mixing angle for the particular Q2 value of the chosen electrons, which the Standard Model makes a firm prediction for. Any significant deviation from this prediction would be a prime indicator of the existence of physics beyond what the Standard Model describes. After the experiment was conducted at Jefferson Lab, collected data was stored within a MySQL database for further analysis. I will present an overview of the database and its functions as well as a demonstration of the quality checks and maintenance performed on the data itself. These checks include an analysis of errors occurring throughout the experiment, specifically data acquisition errors within the main detector array, and an analysis of data cuts.

  14. Implementation of electronic logbook for trainees of general surgery in Thailand.

    PubMed

    Aphinives, Potchavit

    2013-01-01

    All trainees are required to keep a record of their surgical skills and experience throughout the training period in a logbook format. A paper-based logbook has several limitations; therefore, an electronic logbook was introduced to replace it. An electronic logbook program was developed in November 2005. This program was designed as a web-based application built on PHP scripts running under the Apache web server with a MySQL database. Only simplified and essential data, such as hospital number, diagnosis, surgical procedure, and pathological findings, are recorded. The electronic logbook databases between academic years 2006 and 2011 were analyzed. The annual number of recorded surgical procedures gradually increased from 41,214 procedures in 2006 to 66,643 procedures in 2011. Around one-third of all records were not verified by attending staff: 27.59% (2006), 31.69% (2007), 18.06% (2008), 28.42% (2009), 30.18% (2010), and 31.41% (2011). In academic year 2011, the three most common procedural groups were the colon, rectum & anus group, the appendix group, and the vascular group, respectively. Advantages of the electronic logbook included more efficient data access, an increased ability to monitor trainees and trainers, and analysis of procedural varieties among the training institutes.

  15. A third of patients treated at a tertiary-level surgical service could be treated at a secondary-level facility.

    PubMed

    Van Straten, S; Stannard, C; Bulabula, J; Boodhia, K; Paul, K; Leong, J; Klipin, M J

    2017-08-25

    South Africa (SA) has an overburdened public healthcare system. Some patients admitted to Charlotte Maxeke Johannesburg Academic Hospital (CMJAH), SA, may not require tertiary care, but the numbers and details are uncertain. Clinical research in SA is limited by scarce skills and limited access to data. To determine the proportion of and length of stay for secondary-, tertiary- and quaternary-level patients discharged from the Department of Surgery at CMJAH over 1 year. This is a retrospective analysis of electronic discharge (ED) summaries from the Department of Surgery at CMJAH between 1 April 2015 and 1 April 2016. An SQL query of the database generated a .csv file of all discharges with the following fields: database reference number, length of stay and level of care. The details of each record were verified by MBBCh V students, using a defined level-of-care template and the full discharge summary. The data were reviewed by a senior clinician. There were 3 007 discharge summaries - 97 were not classifiable, two were test records and one was a duplicate. These 100 records were excluded. There were no primary-level records. Secondary-level patients represented 29% (854) of those discharged and 19% of total bed days. Tertiary- and quaternary-level patients together represented 71% of the total and 81% of bed days. The average length of stay was 4.31 days for secondary, 6.98 days for tertiary and 9.77 days for quaternary level-of-care allocation. Almost one-third (29%) of patients discharged from CMJAH's Department of Surgery were deemed suitable for secondary-level care. These patients had a shorter length of stay and comprised 19% of total bed days. Students and electronic databases represent an important research resource.
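
The kind of extract described, a single SQL query dumped to a .csv file with those three fields, can be sketched as follows. The table name, columns, and values are hypothetical stand-ins for the CMJAH database, and SQLite stands in for the hospital's actual SQL engine.

```python
import csv
import io
import sqlite3

# Hypothetical discharge table; schema and values are for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE discharge (ref INTEGER, length_of_stay REAL, level_of_care TEXT);
INSERT INTO discharge VALUES (1, 4.0, 'secondary'), (2, 7.5, 'tertiary');
""")

# One query, one .csv: reference number, length of stay, level of care.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["ref", "length_of_stay", "level_of_care"])  # header row
writer.writerows(conn.execute(
    "SELECT ref, length_of_stay, level_of_care FROM discharge ORDER BY ref"))
csv_text = buf.getvalue()
print(csv_text)
```

In practice `buf` would be a file opened with `newline=""` rather than an in-memory buffer.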

  16. Distributed Episodic Exploratory Planning (DEEP)

    DTIC Science & Technology

    2008-12-01

    API). For DEEP, Hibernate offered the following advantages: • Abstracts SQL by utilizing HQL so any database with a Java Database Connectivity... Hibernate SQL ICCRTS International Command and Control Research and Technology Symposium JDB Java Distributed Blackboard JDBC Java Database Connectivity...selected because of its opportunistic reasoning capabilities and implemented in Java for platform independence. Java was chosen for ease of

  17. Investigating the Limitations of Advanced Design Methods through Real World Application

    DTIC Science & Technology

    2016-03-31

    36 War Room Laptop Display (MySQL, JMP 9 Pro, 64-bit Windows) Georgia Tech Secure Collaborative Visualization Environment (MySQL, JMP 9 Pro...investigate expanding the EA for VC3ATS • Would like to consider both an expansion of the use of current Java-based BPM approach and other potential EA

  18. A 5E Learning Cycle Approach-Based, Multimedia-Supplemented Instructional Unit for Structured Query Language

    ERIC Educational Resources Information Center

    Piyayodilokchai, Hongsiri; Panjaburee, Patcharin; Laosinchai, Parames; Ketpichainarong, Watcharee; Ruenwongsa, Pintip

    2013-01-01

    With the benefit of multimedia and the learning cycle approach in promoting effective active learning, this paper proposed a learning cycle approach-based, multimedia-supplemented instructional unit for Structured Query Language (SQL) for second-year undergraduate students with the aim of enhancing their basic knowledge of SQL and ability to apply…

  19. Large Declarative Memories in ACT-R

    DTIC Science & Technology

    2009-12-01

    containing the persistent DM of interest PDM-user Username required by the PostgreSQL DBMS for DB access PDM-passwd Password required by the PostgreSQL..."model-v5-DM" :pdm-user "Scott" :pdm-passwd "Open_Seseme" :pdm-resets-clear-db T :pdm-add-dm-serializes T :pdm-active T ... Figure 1: Activating and

  20. Software Application for Supporting the Education of Database Systems

    ERIC Educational Resources Information Center

    Vágner, Anikó

    2015-01-01

    The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…

  1. Developing an Excel Decision Support System Using In-Transit Visibility to Decrease DoD Transportation Delays

    DTIC Science & Technology

    2008-03-01

    Fortunately, built into Excel is the capability to use ActiveX Data Objects (ADO), a software feature which uses VBA to interface with external...part of Excel's ActiveX Data Objects (ADO) functionality, Excel can execute SQL queries in Access with VBA. An SQL query statement can be written

  2. Examining Learning Styles and Perceived Benefits of Analogical Problem Construction on SQL Knowledge Acquisition

    ERIC Educational Resources Information Center

    Mills, Robert J.; Dupin-Bryant, Pamela A.; Johnson, John D.; Beaulieu, Tanya Y.

    2015-01-01

    The demand for Information Systems (IS) graduates with expertise in Structured Query Language (SQL) and database management is vast and projected to increase as "big data" becomes ubiquitous. To prepare students to solve complex problems in a data-driven world, educators must explore instructional strategies to help link prior knowledge…

  3. Knowledge Query Language (KQL)

    DTIC Science & Technology

    2016-02-01

    EXECUTIVE SUMMARY Currently, queries for data ...retrieval from non-Structured Query Language (NoSQL) data stores are tightly coupled to the specific implementation of the data store, making...of the storage content and format for querying NoSQL or relational data stores. This approach uses address expressions (or A-Expressions) embedded in

  4. Evolution of the LBT Telemetry System

    NASA Astrophysics Data System (ADS)

    Summers, K.; Biddick, C.; De La Peña, M. D.; Summers, D.

    2014-05-01

    The Large Binocular Telescope (LBT) Telescope Control System (TCS) records about 10GB of telemetry data per night. Additionally, the vibration monitoring system records about 9GB of telemetry data per night. Through 2013, we have amassed over 6TB of Hierarchical Data Format (HDF5) files and almost 9TB in a MySQL database of TCS and vibration data. The LBT telemetry system, in its third major revision since 2004, provides the mechanism to capture and store this data. The telemetry system has evolved from a simple HDF file system with MySQL stream definitions within the TCS, to a separate system using a MySQL database system for the definitions and data, and finally to no database use at all, using HDF5 files.

  5. An improved design method for EPC middleware

    NASA Astrophysics Data System (ADS)

    Lou, Guohuan; Xu, Ran; Yang, Chunming

    2014-04-01

    To address the problems and difficulties that small and medium enterprises currently encounter when using the EPC (Electronic Product Code) ALE (Application Level Events) specification to implement middleware, and based on an analysis of the principles of EPC middleware, an improved design method for EPC middleware is presented. This method harnesses the powerful functionality of the MySQL database, using the database to connect reader-writers with the upper application system instead of developing an ALE application program interface, to achieve middleware with general functionality. This structure is simple and easy to implement and maintain. Under this structure, different types of reader-writers can be added and configured conveniently, and the expandability of the system is improved.
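
The database-mediated pattern described, where a shared table decouples the reader-writer from the upper application, might look roughly like this. Table and identifier names are illustrative assumptions, with SQLite standing in for MySQL.

```python
import sqlite3

# Shared table through which reader-writer and application communicate;
# schema is a hypothetical sketch of the middleware's design.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tag_event (epc TEXT, reader_id TEXT, ts TEXT)")

# Reader-writer side: each observed EPC tag becomes a row in the database.
db.execute("INSERT INTO tag_event VALUES "
           "('urn:epc:id:sgtin:0614141.107346.2017', 'reader-01', "
           "'2014-04-01T10:00:00')")

# Upper application side: periodically poll the table for new events,
# instead of calling an ALE application program interface on the reader.
events = db.execute("SELECT epc, reader_id FROM tag_event").fetchall()
print(events)
```

Adding a new reader type then only requires a component that writes rows in this format; the application side is unchanged, which is the expandability the abstract claims.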

  6. An Internet supported workflow for the publication process in UMVF (French Virtual Medical University).

    PubMed

    Renard, Jean-Marie; Bourde, Annabel; Cuggia, Marc; Garcelon, Nicolas; Souf, Nathalie; Darmoni, Stephan; Beuscart, Régis; Brunetaud, Jean-Marc

    2007-01-01

    The "Université Médicale Virtuelle Francophone" (UMVF) is a federation of French medical schools. Its main goal is to share the production and use of pedagogic medical resources generated by academic medical teachers. We developed an Open-Source application based upon a workflow system, which provides an improved publication process for the UMVF. For teachers, the tool permits easy and efficient upload of new educational resources. For web masters, it provides a mechanism to easily locate and validate the resources. For librarians, it provides a way to improve the efficiency of indexing. For all, the utility provides a workflow system to control the publication process. On the student side, the application improves the value of the UMVF repository by facilitating the publication of new resources and by providing an easy way to find a detailed description of a resource and to check any resource from the UMVF to ascertain its quality and integrity, even if the resource is an old deprecated version. The server tier of the application implements the main workflow functionalities and is deployed on certified UMVF servers using the PHP language, an LDAP directory and an SQL database. The client tier of the application provides both the workflow and the search-and-check functionalities. A unique signature for each resource was needed to provide security functionality and is implemented using a Digest algorithm. The testing performed by Rennes and Lille verified the functionality and conformity with our specifications.

  7. Ubiquitous-Severance Hospital Project: Implementation and Results

    PubMed Central

    Chang, Bung-Chul; Kim, Young-A; Kim, Jee Hea; Jung, Hae Kyung; Kang, Eun Hae; Kang, Hee Suk; Lee, Hyung Il; Kim, Yong Ook; Yoo, Sun Kook; Sunwoo, Ilnam; An, Seo Yong; Jeong, Hye Jeong

    2010-01-01

    Objectives The purpose of this study was to review an implementation of u-Severance information system with focus on electronic hospital records (EHR) and to suggest future improvements. Methods Clinical Data Repository (CDR) of u-Severance involved implementing electronic medical records (EMR) as the basis of EHR and the management of individual health records. EHR were implemented with service enhancements extending to the clinical decision support system (CDSS) and expanding the knowledge base for research with a repository for clinical data and medical care information. Results The EMR system of Yonsei University Health Systems (YUHS) consists of HP integrity superdome servers using MS SQL as a database management system and MS Windows as its operating system. Conclusions YUHS is a high-performing medical institution with regards to efficient management and customer satisfaction; however, after 5 years of implementation of u-Severance system, several limitations with regards to expandability and security have been identified. PMID:21818425

  8. Ubiquitous-severance hospital project: implementation and results.

    PubMed

    Chang, Bung-Chul; Kim, Nam-Hyun; Kim, Young-A; Kim, Jee Hea; Jung, Hae Kyung; Kang, Eun Hae; Kang, Hee Suk; Lee, Hyung Il; Kim, Yong Ook; Yoo, Sun Kook; Sunwoo, Ilnam; An, Seo Yong; Jeong, Hye Jeong

    2010-03-01

    The purpose of this study was to review an implementation of u-Severance information system with focus on electronic hospital records (EHR) and to suggest future improvements. Clinical Data Repository (CDR) of u-Severance involved implementing electronic medical records (EMR) as the basis of EHR and the management of individual health records. EHR were implemented with service enhancements extending to the clinical decision support system (CDSS) and expanding the knowledge base for research with a repository for clinical data and medical care information. The EMR system of Yonsei University Health Systems (YUHS) consists of HP integrity superdome servers using MS SQL as a database management system and MS Windows as its operating system. YUHS is a high-performing medical institution with regards to efficient management and customer satisfaction; however, after 5 years of implementation of u-Severance system, several limitations with regards to expandability and security have been identified.

  9. Quantum displacement receiver for M-ary phase-shift-keyed coherent states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izumi, Shuro; Takeoka, Masahiro; Fujiwara, Mikio

    2014-12-04

    We propose quantum receivers for 3- and 4-ary phase-shift-keyed (PSK) coherent state signals to overcome the standard quantum limit (SQL). Our receiver, consisting of a displacement operation and on-off detectors with or without feedforward, provides an error probability performance beyond the SQL. We show feedforward operations can tolerate the requirement for the detector specifications.

  10. Tomcat, Oracle & XML Web Archive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cothren, D. C.

    2008-01-01

    The TOX (Tomcat Oracle & XML) web archive is a foundation for development of HTTP-based applications using Tomcat (or some other servlet container) and an Oracle RDBMS. Use of TOX requires coding primarily in PL/SQL, JavaScript, and XSLT, but also in HTML, CSS and potentially Java. Coded in Java and PL/SQL itself, TOX provides the foundation for more complex applications to be built.

  11. Architecture Knowledge for Evaluating Scalable Databases

    DTIC Science & Technology

    2015-01-16

    problems, arising from the proliferation of new data models and distributed technologies for building scalable, available data stores . Architects must...longer are relational databases the de facto standard for building data repositories. Highly distributed, scalable “ NoSQL ” databases [11] have emerged...This is especially challenging at the data storage layer. The multitude of competing NoSQL database technologies creates a complex and rapidly

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    This software is a plug-in that interfaces between Phoenix Integration's Model Center and Base SAS 9.2 applications. The end use of the plug-in is to link input and output data residing in SAS tables or MS SQL to and from "legacy" software programs without recoding. The potential end users are those who need to run legacy code and want data stored in a SQL database.

  13. Web based tools for data manipulation, visualisation and validation with interactive georeferenced graphs

    NASA Astrophysics Data System (ADS)

    Ivankovic, D.; Dadic, V.

    2009-04-01

    Some oceanographic parameters have to be inserted into the database manually; others (for example, data from a CTD probe) are inserted from various files. All these parameters require visualization, validation and manipulation from the research vessel or scientific institution, as well as public presentation. For these purposes a web-based system was developed, containing dynamic SQL procedures and Java applets. The technology background is an Oracle 10g relational database and Oracle Application Server. Web interfaces are developed using PL/SQL stored database procedures (mod_plsql). Additional parts for data visualization include Java applets and JavaScript. The mapping tool is the Google Maps API (JavaScript), with a Java applet as an alternative. The graph is realized as a dynamically generated web page containing a Java applet. The mapping tool and graph are georeferenced: a click on some part of the graph automatically initiates a zoom or marker at the location where the parameter was measured. This feature is very useful for data validation. The code for data manipulation and visualization is partially realized with dynamic SQL, which allows us to separate the data definition from the code for data manipulation. Adding a new parameter to the system requires only a data definition and description, without programming an interface for this kind of data.
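
The separation of data definition from manipulation code via dynamic SQL can be sketched as below. The metadata and parameter tables are hypothetical, and SQLite stands in for the Oracle/PL/SQL environment actually used.

```python
import sqlite3

# A metadata table describes where each parameter lives; generic code builds
# the SQL for any parameter from that description alone. Names are
# illustrative assumptions.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE param_def (name TEXT, table_name TEXT, value_col TEXT);
INSERT INTO param_def VALUES ('temperature', 'ctd_temp', 'temp_c');
CREATE TABLE ctd_temp (station TEXT, temp_c REAL);
INSERT INTO ctd_temp VALUES ('ST01', 18.4);
""")

def fetch_param(db, param):
    """Build and run a query for any defined parameter, with no per-parameter code."""
    table, col = db.execute(
        "SELECT table_name, value_col FROM param_def WHERE name = ?",
        (param,)).fetchone()
    # Identifiers cannot be bound as query parameters, so they are taken
    # from the trusted metadata table, never from user input.
    return db.execute(f"SELECT station, {col} FROM {table}").fetchall()

print(fetch_param(db, "temperature"))  # [('ST01', 18.4)]
```

Adding a new parameter then only means inserting one `param_def` row and creating its data table; `fetch_param` needs no change, which mirrors the claim in the abstract.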

  14. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    NASA Astrophysics Data System (ADS)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  15. An Improved Publication Process for the UMVF.

    PubMed

    Renard, Jean-Marie; Brunetaud, Jean-Marc; Cuggia, Marc; Darmoni, Stephan; Lebeux, Pierre; Beuscart, Régis

    2005-01-01

    The "Université Médicale Virtuelle Francophone" (UMVF) is a federation of French medical schools. Its main goal is to share the production and use of pedagogic medical resources generated by academic medical teachers. We developed an Open-Source application based upon a workflow system which provides an improved publication process for the UMVF. For teachers, the tool permits easy and efficient upload of new educational resources. For web masters, it provides a mechanism to easily locate and validate the resources. For both the teachers and the web masters, the utility provides the control and communication functions that define a workflow system. For all users, students in particular, the application improves the value of the UMVF repository by providing an easy way to find a detailed description of a resource and to check any resource from the UMVF to ascertain its quality and integrity, even if the resource is an old deprecated version. The server tier of the application implements the main workflow functionalities and is deployed on certified UMVF servers using the PHP language, an LDAP directory and an SQL database. The client tier of the application provides both the workflow and the search-and-check functionalities and is implemented using a Java applet through a W3C-compliant web browser. A unique signature for each resource was needed to provide security functionality and is implemented using the MD5 Digest algorithm. The testing performed by Rennes and Lille verified the functionality and conformity with our specifications.

  16. MO-F-CAMPUS-T-05: SQL Database Queries to Determine Treatment Planning Resource Usage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, C; Gladstone, D

    2015-06-15

    Purpose: A radiation oncology clinic's treatment capacity is traditionally thought to be limited by the number of machines in the clinic. As the number of fractions per course decreases and the number of adaptive plans increases, the question of how many treatment plans a clinic can produce becomes increasingly important. This work seeks to lay the groundwork for assessing treatment planning resource usage. Methods: Care path templates were created using the Aria 11 care path interface. Care path tasks included key steps in the treatment planning process, from the completion of CT simulation through the first radiation treatment. SQL Server Management Studio was used to run SQL queries extracting task completion time stamps, along with care path template information and diagnosis codes, from the Aria database. Six months of planning cycles were evaluated. Elapsed time was evaluated in terms of work hours within Monday - Friday, 7am to 5pm. Results: For the 195 validated treatment planning cycles, the average time for planning and MD review was 22.8 hours. Of those cases, 33 were categorized as urgent; the average planning time for urgent plans was 5 hours. A strong correlation was observed between diagnosis code and the range of elapsed planning time, as well as between elapsed time and select diagnosis codes. It was also observed that tasks were more likely to be completed by the date they were due than by the time they were due. Follow-up confirmed that most users did not look at the due time. Conclusion: Evaluation of elapsed planning time and other tasks suggests that care paths should be adjusted to allow for different contouring and planning times for certain diagnosis codes and urgent cases. Additional clinic training around task due times versus dates, or a structuring of care paths around due dates, is also needed.
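
One plausible way to compute elapsed time restricted to the stated working window (Monday - Friday, 7am to 5pm) is sketched below. This is an assumption about the convention, not the study's actual query logic, and the example timestamps are invented.

```python
from datetime import datetime, timedelta

def business_hours(start, end, day_start=7, day_end=17):
    """Count hours between start and end falling within Mon-Fri, 7am-5pm."""
    total = timedelta()
    t = start
    step = timedelta(minutes=15)  # coarse sweep; fine enough for reporting
    while t < end:
        nxt = min(t + step, end)
        if t.weekday() < 5 and day_start <= t.hour < day_end:
            total += nxt - t
        t = nxt
    return total.total_seconds() / 3600.0

# A plan started Friday 4pm and reviewed the following Monday 9am spans
# 1 working hour on Friday plus 2 on Monday (2015-06-12 was a Friday).
hours = business_hours(datetime(2015, 6, 12, 16, 0), datetime(2015, 6, 15, 9, 0))
print(hours)  # 3.0
```

In the study this calculation would be applied to the pairs of task-completion time stamps pulled from the Aria database.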

  17. A database for coconut crop improvement.

    PubMed

    Rajagopal, Velamoor; Manimekalai, Ramaswamy; Devakumar, Krishnamurthy; Rajesh; Karun, Anitha; Niral, Vittal; Gopal, Murali; Aziz, Shamina; Gunasekaran, Marimuthu; Kumar, Mundappurathe Ramesh; Chandrasekar, Arumugam

    2005-12-08

    Coconut crop improvement requires a number of biotechnology and bioinformatics tools. A database containing information on CG (coconut germplasm), CCI (coconut cultivar identification), CD (coconut disease), MIFSPC (microbial information systems in plantation crops) and VO (vegetable oils) is described. The database was developed using MySQL and PostgreSQL running in Linux operating system. The database interface is developed in PHP, HTML and JAVA. http://www.bioinfcpcri.org.

  18. Gramene database in 2010: updates and extensions.

    PubMed

    Youens-Clark, Ken; Buckler, Ed; Casstevens, Terry; Chen, Charles; Declerck, Genevieve; Derwent, Paul; Dharmawardhana, Palitha; Jaiswal, Pankaj; Kersey, Paul; Karthikeyan, A S; Lu, Jerry; McCouch, Susan R; Ren, Liya; Spooner, William; Stein, Joshua C; Thomason, Jim; Wei, Sharon; Ware, Doreen

    2011-01-01

    Now in its 10th year, the Gramene database (http://www.gramene.org) has grown from its primary focus on rice, the first fully-sequenced grass genome, to become a resource for major model and crop plants including Arabidopsis, Brachypodium, maize, sorghum, poplar and grape in addition to several species of rice. Gramene began with the addition of an Ensembl genome browser and has expanded in the last decade to become a robust resource for plant genomics hosting a wide array of data sets including quantitative trait loci (QTL), metabolic pathways, genetic diversity, genes, proteins, germplasm, literature, ontologies and a fully-structured markers and sequences database integrated with genome browsers and maps from various published studies (genetic, physical, bin, etc.). In addition, Gramene now hosts a variety of web services including a Distributed Annotation Server (DAS), BLAST and a public MySQL database. Twice a year, Gramene releases a major build of the database and makes interim releases to correct errors or to make important updates to software and/or data.

  19. CGDSNPdb: a database resource for error-checked and imputed mouse SNPs.

    PubMed

    Hutchins, Lucie N; Ding, Yueming; Szatkiewicz, Jin P; Von Smith, Randy; Yang, Hyuna; de Villena, Fernando Pardo-Manuel; Churchill, Gary A; Graber, Joel H

    2010-07-06

    The Center for Genome Dynamics Single Nucleotide Polymorphism Database (CGDSNPdb) is an open-source value-added database with more than nine million mouse single nucleotide polymorphisms (SNPs), drawn from multiple sources, with genotypes assigned to multiple inbred strains of laboratory mice. All SNPs are checked for accuracy and annotated for properties specific to the SNP as well as those implied by changes to overlapping protein-coding genes. CGDSNPdb serves as the primary interface to two unique data sets, the 'imputed genotype resource' in which a Hidden Markov Model was used to assess local haplotypes and the most probable base assignment at several million genomic loci in tens of strains of mice, and the Affymetrix Mouse Diversity Genotyping Array, a high density microarray with over 600,000 SNPs and over 900,000 invariant genomic probes. CGDSNPdb is accessible online through either a web-based query tool or a MySQL public login. Database URL: http://cgd.jax.org/cgdsnpdb/

  20. [Computer-aided Diagnosis and New Electronic Stethoscope].

    PubMed

    Huang, Mei; Liu, Hongying; Pi, Xitian; Ao, Yilu; Wang, Zi

    2017-05-30

    Auscultation is an important method for the early diagnosis of cardiovascular and respiratory system diseases. This paper presents a new computer-aided diagnostic electronic auscultation system. An electronic stethoscope based on a condenser microphone was developed, together with the corresponding intelligent analysis software. Combining Bluetooth, OLED and SD card storage technologies, the stethoscope implements many functions, such as real-time heart and lung sound auscultation in three modes, recording and playback, auscultation volume control, and wireless transmission. The intelligent analysis software for PC utilizes the C# programming language and adopts SQL Server as the backend database. It realizes playback and waveform display of the auscultation sound. By calculating the heart rate and extracting the characteristic parameters T1, T2, T12 and T11, it can analyze whether the heart sound is normal and then generate a diagnosis report. Finally, the auscultation sound and diagnosis report can be sent to the mailboxes of other doctors, enabling remote diagnosis. The whole system is fully functional and highly portable, offers a good user experience, and is beneficial to promoting the use of electronic stethoscopes in hospitals; the system can also be applied to auscultation teaching and other occasions.
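
One step mentioned above, calculating the heart rate, can be sketched from detected first-heart-sound (S1) peak times. The peak times and function below are illustrative assumptions; the paper's actual C# implementation is not shown, and a real system would first detect the peaks from the audio envelope.

```python
def heart_rate_bpm(peak_times_s):
    """Mean beats per minute from S1 peak times given in seconds."""
    # Beat-to-beat intervals between consecutive S1 peaks.
    intervals = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Hypothetical peaks 0.8 s apart correspond to 75 beats per minute.
bpm = heart_rate_bpm([0.0, 0.8, 1.6, 2.4])
print(round(bpm))  # 75
```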

  1. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.

    PubMed

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to the management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.

  2. De-MA: a web Database for electron Microprobe Analyses to assist EMP lab manager and users

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.

    2012-12-01

    Lab managers and users of electron microprobe (EMP) facilities require comprehensive, yet flexible documentation structures, as well as an efficient scheduling mechanism. A single on-line database system for managing reservations, and providing information on standards, quantitative and qualitative setups (element mapping, etc.), and X-ray data has been developed for this purpose. This system is particularly useful in multi-user facilities where experience ranges from beginners to the highly experienced. New users and occasional facility users will find these tools extremely useful in developing and maintaining high quality, reproducible, and efficient analyses. This user-friendly database is available through the web, and uses MySQL as a database and PHP/HTML as script language (dynamic website). The database includes several tables for standards information, X-ray lines, X-ray element mapping, PHA, element setups, and agenda. It is configurable for up to five different EMPs in a single lab, each of them having up to five spectrometers and as many diffraction crystals as required. The installation should be done on a web server supporting PHP/MySQL, although installation on a personal computer is possible using third-party freeware to create a local Apache server, and to enable PHP/MySQL. Since it is web-based, any user outside the EMP lab can access this database anytime through any web browser and on any operating system. The access can be secured using a general password protection (e.g. htaccess). The web interface consists of 6 main menus. (1) "Standards" lists standards defined in the database, and displays detailed information on each (e.g. material type, name, reference, comments, and analyses). Images such as EDS spectra or BSE can be associated with a standard. 
(2) "Analyses" lists typical setups to use for quantitative analyses, allows calculation of mineral composition based on a mineral formula, or calculation of mineral formula based on a fixed amount of oxygen or of cations (using an analysis in element or oxide weight-%); the latter includes re-calculation of H2O/CO2 based on stoichiometry, and oxygen correction for F and Cl. Another option offers a list of any available standards and possible peak or background interferences for a series of elements. (3) "X-ray maps" lists the different setups recommended for element mapping using WDS, and a map calculator to facilitate map setups and to estimate the total mapping time. (4) "X-ray data" lists all X-ray lines for a specific element (K, L, M, absorption edges, and satellite peaks) in terms of energy, wavelength and peak position. A check for possible interferences on peak or background is also possible. Theoretical X-ray peak positions for each crystal are calculated based on the 2d spacing of each crystal and the wavelength of each line. (5) "Agenda" menu displays the reservation dates for each month and for each EMP lab defined. It also offers a reservation request option, this request being sent by email to the EMP manager for approval. (6) Finally, "Admin" is password restricted, and contains all necessary options to manage the database through user-friendly forms. The installation of this database is made easy and knowledge of HTML, PHP, or MySQL is unnecessary to install, configure, manage, or use it. A working database is accessible at http://cub.geoloweb.ch.
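    The peak-position calculation mentioned under "X-ray data" follows directly from Bragg's law; a minimal sketch, using illustrative crystal and line values (not values taken from the De-MA database):

```python
# From Bragg's law, n*lambda = 2d*sin(theta); for a first-order reflection
# sin(theta) = lambda / 2d, where "2d" is the crystal's 2d spacing.
import math

def bragg_sin_theta(wavelength_nm, two_d_nm, order=1):
    """First-order (by default) sin(theta) for a line on a given crystal."""
    s = order * wavelength_nm / two_d_nm
    if s > 1.0:
        raise ValueError("line is not diffracted by this crystal (sin(theta) > 1)")
    return s

# Illustrative numbers: Fe K-alpha (~0.1937 nm) on an LiF crystal (2d ~ 0.4027 nm).
s = bragg_sin_theta(0.1937, 0.4027)
theta_deg = math.degrees(math.asin(s))
```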

  3. Comparisons of Attacks on Honeypots With Those on Real Networks

    DTIC Science & Technology

    2006-03-01

Oracle, MySQL, or PostgreSQL. Figure 2 shows an incoming packet and the process involved before and after the Snort engine detects the suspicious...stored on a separate, secured system.”[2]. Honeypots have several other uses besides monitoring attackers. They serve to protect real networks and...interaction vs. high-interaction. Although both low-interaction and high-interaction honeypots are effective in soliciting attacks, high-interaction

  4. Network Security Visualization

    DTIC Science & Technology

    1999-09-27

    performing SQL generation and result-set binding, inserting acquired security events into the database and gathering the requested data for Console scene...objects is also auto-generated by a VBA script. Built into the auto-generated table access objects are the preferred join paths between tables. This...much of the Server itself) never have to deal with SQL directly. This is one aspect of laying the groundwork for supporting RDBMSs from multiple vendors

  5. A Data Warehouse to Support Condition Based Maintenance (CBM)

    DTIC Science & Technology

    2005-05-01

Application (VBA) code sequence to import the original MAST-generated CSV and then create a single output table in DBASE IV format. The DBASE IV format...database architecture (Oracle, Sybase, MS-SQL, etc.). This design includes table definitions, comments, specification of table attributes, primary and foreign...built queries and applications. Needs the application developers to construct data views. No SQL programming experience. b. Power Database User - knows

  6. Development of a Big Data Application Architecture for Navy Manpower, Personnel, Training, and Education

    DTIC Science & Technology

    2016-03-01

science IT information technology JBOD just a bunch of disks JDBC java database connectivity JPME Joint Professional Military Education JSO...Joint Service Officer JVM java virtual machine MPP massively parallel processing MPTE Manpower, Personnel, Training, and Education NAVMAC Navy...external database, whether it is MySQL, Oracle, DB2, or SQL Server (Teller, 2015). Connectors optimize the data transfer by obtaining metadata

  7. A database for coconut crop improvement

    PubMed Central

    Rajagopal, Velamoor; Manimekalai, Ramaswamy; Devakumar, Krishnamurthy; Rajesh; Karun, Anitha; Niral, Vittal; Gopal, Murali; Aziz, Shamina; Gunasekaran, Marimuthu; Kumar, Mundappurathe Ramesh; Chandrasekar, Arumugam

    2005-01-01

    Coconut crop improvement requires a number of biotechnology and bioinformatics tools. A database containing information on CG (coconut germplasm), CCI (coconut cultivar identification), CD (coconut disease), MIFSPC (microbial information systems in plantation crops) and VO (vegetable oils) is described. The database was developed using MySQL and PostgreSQL running on the Linux operating system. The database interface is developed in PHP, HTML and Java. Availability: http://www.bioinfcpcri.org PMID:17597858

  8. Quality Attribute-Guided Evaluation of NoSQL Databases: An Experience Report

    DTIC Science & Technology

    2014-10-18

detailed technical evaluations of NoSQL databases specifically, and big data systems in general, that have become apparent during our study... big data, software systems [Agarwal 2011]. Internet-born organizations such as Google and Amazon are at the cutting edge of this revolution...Chang 2008], along with those of numerous other big data innovators, have made a variety of open source and commercial data management technologies

  9. Alignment of high-throughput sequencing data inside in-memory databases.

    PubMed

    Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias

    2014-01-01

    In times of high-throughput DNA sequencing techniques, performance-capable analysis of DNA sequences is of high importance. Computer-supported DNA analysis is still a time-intensive task. In this paper we explore the potential of a new In-Memory database technology by using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL, to compare execution time and memory management. To ensure that the results are comparable, MySQL has been run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures containing exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, performance analysis between HANA and MySQL was made by comparing the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which means that there is high potential within the new In-Memory concepts, leading to further developments of DNA analysis procedures in the future.
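    The exact-search step that both stored procedures implement can be sketched in a few lines of plain Python (the reference string here is invented for illustration, not an excerpt of GRCh37):

```python
# Exact matching of a short DNA read against a reference sequence:
# report every 0-based offset at which the read occurs verbatim.
def exact_matches(reference: str, read: str):
    """Return all 0-based offsets where `read` occurs exactly in `reference`."""
    positions, i = [], reference.find(read)
    while i != -1:
        positions.append(i)
        i = reference.find(read, i + 1)  # allow overlapping occurrences
    return positions

ref = "ACGTACGTTACGT"   # toy reference, not real genome data
hits = exact_matches(ref, "ACGT")
```

    Production aligners such as BWA avoid this linear scan by indexing the reference (Burrows-Wheeler transform / FM-index); the sketch only shows what "exact search" computes.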

  10. High Performance Semantic Factoring of Giga-Scale Semantic Graph Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Adolf, Robert D.; Al-Saffar, Sinan

    2010-10-04

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors.

  11. Factors Leading to Effectiveness and Satisfaction in Civil Engineer Information Systems

    DTIC Science & Technology

    2008-03-01

    recently acquired MySQL in 2008 shortly after Oracle failed to acquire MySQL in 2007. For more information on policy implications concerning the use...individual level serves as the pertinent outcome variable and is used to evaluate and compare information systems in this study. Researchers have found...interim work information management system used by the Civil Engineer Operations Flight. The functions served by this system date back to the late

  12. Information Flow Integrity for Systems of Independently-Developed Components

    DTIC Science & Technology

    2015-06-22

We also examined three programs (Apache, MySQL, and PHP) in detail to evaluate the efficacy of using the provided package test suites to generate...method are just as effective as hooks that were manually placed over the course of years while greatly reducing the burden on programmers. "Leveraging...to validate optimizations of real-world, mature applications: the Apache software suite, the Mozilla Suite, and the MySQL database. "Validating Library

  13. Using XML/HTTP to Store, Serve and Annotate Tactical Scenarios for X3D Operational Visualization and Anti-Terrorist Training

    DTIC Science & Technology

    2003-03-01

PXSLServlet Paul A. Open Source Relational x X 23 Tchistopolskii sql2dtd David Mertz Public domain Relational x -- sql2xml Scott Hathaway Public...March 2003. [Hunter 2001] Hunter, David; Cagle, Kurt; Dix, Chris; Kovack, Roger; Pinnock, Jonathan; Rafter, Jeff; Beginning XML (2nd Edition...Postgraduate School Monterey, California 4. Curt Blais Naval Postgraduate School Monterey, California 5. Erik Chaum NAVSEA Undersea

  14. Large-scale Graph Computation on Just a PC

    DTIC Science & Technology

    2014-05-01

edges for several vertices simultaneously). We compared the performance of GraphChi-DB to Neo4j using their Java API (we discuss MySQL comparison in the...75 4.7.6 Comparison to RDBMS (MySQL)...75 4.7.7 Summary of the...Windows method, GraphChi. The C++ implementation has circa 8,000 lines of code. We have also developed a Java version of GraphChi, but it does not

  15. Delayed Instantiation Bulk Operations for Management of Distributed, Object-Based Storage Systems

    DTIC Science & Technology

    2009-08-01

source and destination object sets, while they have attribute pages to indicate that history. Fourth, we allow for operations to occur on any objects...client dialogue to the PostgreSQL database where server-side functions implement the service logic for the requests. The translation is done...to satisfy client requests, and performs delayed instantiation bulk operations. It is built around a PostgreSQL database with tables for storing

  16. CHASM and SNVBox: toolkit for detecting biologically important single nucleotide mutations in cancer.

    PubMed

    Wong, Wing Chung; Kim, Dewey; Carter, Hannah; Diekhans, Mark; Ryan, Michael C; Karchin, Rachel

    2011-08-01

    Thousands of cancer exomes are currently being sequenced, yielding millions of non-synonymous single nucleotide variants (SNVs) of possible relevance to disease etiology. Here, we provide a software toolkit to prioritize SNVs based on their predicted contribution to tumorigenesis. It includes a database of precomputed, predictive features covering all positions in the annotated human exome and can be used either stand-alone or as part of a larger variant discovery pipeline. MySQL database, source code and binaries freely available for academic/government use at http://wiki.chasmsoftware.org. Source in Python and C++. Requires a 32- or 64-bit Linux system (tested on Fedora Core 8, 10, 11 and Ubuntu 10), 2.5 ≤ Python < 3.0, MySQL server > 5.0, 60 GB available hard disk space (50 MB for software and data files, 40 GB for MySQL database dump when uncompressed), and 2 GB of RAM.

  17. Motivational and metacognitive feedback in SQL-Tutor*

    NASA Astrophysics Data System (ADS)

    Hull, Alison; du Boulay, Benedict

    2015-04-01

    Motivation and metacognition are strongly intertwined, with learners high in self-efficacy more likely to use a variety of self-regulatory learning strategies, as well as to persist longer on challenging tasks. The aim of the research was to improve the learner's focus on the process and experience of problem-solving while using an Intelligent Tutoring System (ITS) and including motivational and metacognitive feedback based on the learner's past states and experiences. An existing ITS, SQL-Tutor, was used with first-year undergraduates studying a database module. The study used two versions of SQL-Tutor: the Control group used a base version providing domain feedback and the Study group used an extended version that also provided motivational and metacognitive feedback. This paper summarises the pre- and post-process results. Comparisons between groups showed some differing trends both in learning outcomes and behaviour in favour of the Study group.

  18. 0.75 atoms improve the clock signal of 10,000 atoms

    NASA Astrophysics Data System (ADS)

    Kruse, I.; Lange, K.; Peise, J.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Lisdat, C.; Santos, L.; Smerzi, A.; Klempt, C.

    2017-02-01

    Since the pioneering work of Ramsey, atom interferometers are employed for precision metrology, in particular to measure time and to realize the second. In a classical interferometer, an ensemble of atoms is prepared in one of the two input states, whereas the second one is left empty. In this case, the vacuum noise restricts the precision of the interferometer to the standard quantum limit (SQL). Here, we propose and experimentally demonstrate a novel clock configuration that surpasses the SQL by squeezing the vacuum in the empty input state. We create a squeezed vacuum state containing an average of 0.75 atoms to improve the clock sensitivity of 10,000 atoms by 2.05 dB. The SQL poses a significant limitation for today's microwave fountain clocks, which serve as the main time reference. We evaluate the major technical limitations and challenges for devising a next generation of fountain clocks based on atomic squeezed vacuum.
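    A back-of-envelope sketch of the figures quoted in this record, assuming the standard quantum limit for N uncorrelated atoms corresponds to a phase uncertainty of 1/sqrt(N) and that the quoted 2.05 dB gain refers to a variance reduction relative to the SQL:

```python
# Standard quantum limit for N uncorrelated atoms and the effect of a
# quoted squeezing gain in dB (assumed to be a variance ratio).
import math

N = 10_000
sql_phase_uncertainty = 1 / math.sqrt(N)        # 0.01 rad at the SQL

gain_db = 2.05                                   # quoted improvement
variance_ratio = 10 ** (-gain_db / 10)           # variance relative to SQL
improved_uncertainty = sql_phase_uncertainty * math.sqrt(variance_ratio)
```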

  19. Improvement of an Atomic Clock using Squeezed Vacuum

    NASA Astrophysics Data System (ADS)

    Kruse, I.; Lange, K.; Peise, J.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Lisdat, C.; Santos, L.; Smerzi, A.; Klempt, C.

    2016-09-01

    Since the pioneering work of Ramsey, atom interferometers are employed for precision metrology, in particular to measure time and to realize the second. In a classical interferometer, an ensemble of atoms is prepared in one of the two input states, whereas the second one is left empty. In this case, the vacuum noise restricts the precision of the interferometer to the standard quantum limit (SQL). Here, we propose and experimentally demonstrate a novel clock configuration that surpasses the SQL by squeezing the vacuum in the empty input state. We create a squeezed vacuum state containing an average of 0.75 atoms to improve the clock sensitivity of 10,000 atoms by 2.05 (+0.34/−0.37) dB. The SQL poses a significant limitation for today's microwave fountain clocks, which serve as the main time reference. We evaluate the major technical limitations and challenges for devising a next generation of fountain clocks based on atomic squeezed vacuum.

  20. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency

    PubMed Central

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to the management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the commonly adopted relational database model has become a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB. PMID:26558254

  1. Protecting Database Centric Web Services against SQL/XPath Injection Attacks

    NASA Astrophysics Data System (ADS)

    Laranjeiro, Nuno; Vieira, Marco; Madeira, Henrique

    Web services represent a powerful interface for back-end database systems and are increasingly being used in business-critical applications. However, field studies show that a large number of web services are deployed with security flaws (e.g., having SQL Injection vulnerabilities). Although several techniques for the identification of security vulnerabilities have been proposed, developing non-vulnerable web services is still a difficult task. In fact, security-related concerns are hard to apply as they involve adding complexity to already complex code. This paper proposes an approach to secure web services against SQL and XPath Injection attacks by transparently detecting and aborting service invocations that try to take advantage of potential vulnerabilities. Our mechanism was applied to secure several web services specified by the TPC-App benchmark, proving to be 100% effective in stopping attacks, non-intrusive, and very easy to use.
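    A minimal sketch of the underlying idea of structure-based injection detection: learn the "skeletons" of legitimate commands during a training phase, then abort any invocation whose skeleton was never seen. This is an illustration of the general technique, not the authors' implementation:

```python
# Structure-based SQL injection detection: strip literals from a query so
# only its command structure remains, and compare against a whitelist of
# structures observed during training.
import re

def skeleton(sql: str) -> str:
    """Reduce a SQL command to its structural skeleton."""
    s = re.sub(r"'[^']*'", "?", sql)   # replace quoted string literals
    s = re.sub(r"\b\d+\b", "?", s)     # replace numeric literals
    return re.sub(r"\s+", " ", s).strip().lower()

# Training phase: record skeletons of known-good invocations.
learned = {skeleton("SELECT * FROM users WHERE id = 42")}

def is_allowed(sql: str) -> bool:
    """Abort (return False) if the command structure was never learned."""
    return skeleton(sql) in learned

ok = is_allowed("SELECT * FROM users WHERE id = 7")
attack = is_allowed("SELECT * FROM users WHERE id = 7 OR '1'='1'")
```

    The injected `OR '1'='1'` changes the query's structure, not just its literals, so its skeleton fails the whitelist check even though the literal values differ from training.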

  2. Managing Complex Interoperability Solutions using Model-Driven Architecture

    DTIC Science & Technology

    2011-06-01

such as Oracle or MySQL. Each data model for a specific RDBMS is a distinct PSM. Or the system may want to exchange information with other C2...reduced number of transformations, e.g., from an RDBMS physical schema to the corresponding SQL script needed to instantiate the tables in a relational...tance of models. In engineering, a model serves several purposes: 1. It presents an abstract view of a complex system or of a complex information

  3. Information Collection using Handheld Devices in Unreliable Networking Environments

    DTIC Science & Technology

    2014-06-01

different types of mobile devices that connect wirelessly to a database server. The actual backend database is not important to the mobile clients...Google’s infrastructure and local servers with MySQL and PostgreSQL on the backend (ODK 2014b). (2) Google Fusion Tables are used to do basic link...how we conduct business. Our requirements to share information do not change simply because there is little or no existing infrastructure in our

  4. Reuse of the Cloud Analytics and Collaboration Environment within Tactical Applications (TacApps): A Feasibility Analysis

    DTIC Science & Technology

    2016-03-01

    Representational state transfer  Java messaging service  Java application programming interface (API)  Internet relay chat (IRC)/extensible messaging and...JBoss application server or an Apache Tomcat servlet container instance. The relational database management system can be either PostgreSQL or MySQL ... Java library called direct web remoting. This library has been part of the core CACE architecture for quite some time; however, there have not been

  5. MicroRNA Gene Regulatory Networks in Peripheral Nerve Sheath Tumors

    DTIC Science & Technology

    2013-09-01

3.0 hierarchical clustering of both the X and the Y-axis using Centroid linkage. The resulting clustered matrixes were visualized using Java Treeview...To score potential ceRNA interactions, the 54,979 human interactions were loaded into a MySQL database and when the user selects a given mRNA all...on the fly using PHP interactions with MySQL in a similar fashion as previously described in our publicly available databases such as sarcoma

  6. Understanding and Capturing People’s Mobile App Privacy Preferences

    DTIC Science & Technology

    2013-10-28

The entire apps’ metadata takes up about 500MB of storage space when stored in a MySQL database and all the binary files take approximately 300GB of...functionality that can decompile Dalvik bytecodes to Java source code faster than other decompilers. Given the scale of the app analysis we planned on... Java libraries, such as parser, sql connectors, etc Targeted Ads 137 admob, adwhirl, greystripe… Provided by mobile behavioral ads company to

  7. Shark: SQL and Analytics with Cost-Based Query Optimization on Coarse-Grained Distributed Memory

    DTIC Science & Technology

    2014-01-13

    RDBMS and contains a database (often MySQL or Derby) with a namespace for tables, table metadata and partition information. Table data is stored in an...serialization/deserialization) Java interface implementations with corresponding object inspectors. The Hive driver controls the processing of queries, coordinat...native API, RDD operations are invoked through a functional interface similar to DryadLINQ [32] in Scala, Java or Python. For example, the Scala code for

  8. Implementing an Intrusion Detection System in the Mysea Architecture

    DTIC Science & Technology

    2008-06-01

password for each user passwd <username> then follow the prompts 2. PostgreSQL 7.4.18 Installation Perform the following steps as root: 1. Copy...password changed Repeat for user snort. exit After making the groups and users the group and passwd file needs to be updated. Set security and...untrusted/bin/xtsmkgroup > /etc/group chmod 644 /etc/group /xts/untrusted/bin/xtsmkpasswd > /etc/passwd chmod 644 /etc/passwd 3. PostgreSQL 7.4.18

  9. Astronomical Archive at Tartu Observatory

    NASA Astrophysics Data System (ADS)

    Annuk, K.

    2007-10-01

    Archiving astronomical data is an important task not only at large observatories but also at small observatories. Here we describe the astronomical archive at Tartu Observatory. The archive consists of old photographic plate images, photographic spectrograms, CCD direct images and CCD spectroscopic data. The photographic plate digitizing project was started in 2005. An on-line database (based on MySQL) was created. The database includes CCD data as well as photographic data. A PHP-MySQL interface was written for access to all data.

  10. Shark: SQL and Rich Analytics at Scale

    DTIC Science & Technology

    2012-11-26

learning programs up to 100× faster than Hadoop. Unlike previous systems, Shark shows that it is possible to achieve these speedups while retaining a...Shark to run SQL queries up to 100× faster than Apache Hive, and machine learning programs up to 100× faster than Hadoop. Unlike previous systems, Shark...so using a runtime that is optimized for such workloads and a programming model that is designed to express machine learning algorithms. 4.1

  11. Nosql for Storage and Retrieval of Large LIDAR Data Collections

    NASA Astrophysics Data System (ADS)

    Boehm, J.; Liu, K.

    2015-08-01

    Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information. This in turn has led to an enormous growth in data volume. The central idea of a file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than single files of terabyte size. This split of the dataset, commonly referred to as tiling, was usually done to accommodate a specific processing pipeline. It therefore makes sense to preserve this split. A document-oriented NoSQL database can easily emulate this data partitioning by representing each tile (file) in a separate document. The document stores the metadata of the tile. The actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. MongoDB, like any NoSQL database, allows for queries on the attributes of the document. Notably, MongoDB also supports spatial queries. Hence we can perform spatial queries on the bounding boxes of the LiDAR tiles. Inserting and retrieving files on a cloud-based database is compared to native file system and cloud storage transfer speeds.
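    The document-per-tile idea can be sketched without a database at all; the tile names, point counts and coordinates below are invented, and the query stands in for the MongoDB spatial query the record describes:

```python
# Pure-Python stand-in for a document store of LiDAR tiles: each document
# holds file metadata plus a 2-D bounding box, and a spatial query returns
# the tiles whose boxes intersect a query box.
tiles = [
    {"file": "tile_001.las", "points": 1_200_000, "bbox": (0, 0, 100, 100)},
    {"file": "tile_002.las", "points": 950_000,   "bbox": (100, 0, 200, 100)},
    {"file": "tile_003.las", "points": 1_010_000, "bbox": (0, 100, 100, 200)},
]

def intersects(a, b):
    """Axis-aligned (minx, miny, maxx, maxy) box intersection test."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def spatial_query(query_bbox):
    """Names of all tiles whose bounding box intersects `query_bbox`."""
    return [t["file"] for t in tiles if intersects(t["bbox"], query_bbox)]

hits = spatial_query((10, 10, 50, 50))
```

    In MongoDB the same filter would be expressed with a geospatial index on the bounding-box geometry, so the match runs inside the database rather than in application code.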

  12. The HARPS-N archive through a Cassandra, NoSQL database suite?

    NASA Astrophysics Data System (ADS)

    Molinari, Emilio; Guerra, Jose; Harutyunyan, Avet; Lodi, Marcello; Martin, Adrian

    2016-07-01

    The TNG-INAF is developing the science archive for the WEAVE instrument. The underlying architecture of the archive is based on a non-relational database, more precisely, on an Apache Cassandra cluster, which uses NoSQL technology. In order to test and validate the use of this architecture, we created a local archive which we populated with all the HARPS-N spectra collected at the TNG since the instrument's start of operations in mid-2012, and developed tools for the analysis of this data set. The HARPS-N data set is two orders of magnitude smaller than WEAVE, but we want to demonstrate the ability to walk through a complete data set and produce scientific output as valuable as that produced by an ordinary pipeline, though without directly accessing the FITS files. The analytics is done by Apache Solr and Spark and on a relational PostgreSQL database. As an example, we produce observables like metallicity indexes for the targets in the archive and compare the results with the ones coming from the HARPS-N regular data reduction software. The aim of this experiment is to explore the viability of a high-availability cluster and distributed NoSQL database as a platform for complex scientific analytics on a large data set, which will then be ported to the WEAVE Archive System (WAS) which we are developing for the WEAVE multi-object fiber spectrograph.

  13. Geographic Video 3d Data Model And Retrieval

    NASA Astrophysics Data System (ADS)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

    Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly and the trend of this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction. They are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with video contents. The raw spatial information is synthesized into point, line, polygon and solid geometries according to camcorder parameters such as focal length and angle of view. For video segments and video frames, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView and VFFovCone. We designed the query methods in detail using the Structured Query Language (SQL). The experiments indicate that the model is a multi-objective, integrated, loosely coupled, flexible and extensible data model for the management of geographic stereo video.
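    One spatial predicate such a model needs is whether a ground point falls inside a frame's field-of-view cone (the VFFovCone object above). The following sketch is illustrative only; the function name, parameters and numbers are invented, not taken from the paper:

```python
# Point-in-FOV-cone test in 2-D: camera position, compass azimuth of the
# optical axis, angle of view, and a maximum range define the cone.
import math

def in_fov(cam_xy, azimuth_deg, fov_deg, max_range, point_xy):
    """True if `point_xy` lies inside the camera's field-of-view cone."""
    dx, dy = point_xy[0] - cam_xy[0], point_xy[1] - cam_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > max_range:
        return dist == 0  # the camera's own position counts as inside
    # Bearing measured clockwise from north, like a compass azimuth.
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    # Smallest angular difference between bearing and the optical axis.
    diff = abs((bearing - azimuth_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2

# Camera at the origin looking due north with a 60-degree angle of view:
inside = in_fov((0, 0), 0, 60, 100, (5, 50))
outside = in_fov((0, 0), 0, 60, 100, (50, 5))
```

    A database built on OGC Simple Features would express the same test as a spatial relation (e.g. containment of the point by the cone polygon) evaluated in SQL.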

  14. Online service for monitoring the ionosphere based on data from the global navigation satellite system

    NASA Astrophysics Data System (ADS)

    Aleshin, I. M.; Alpatov, V. V.; Vasil'ev, A. E.; Burguchev, S. S.; Kholodkov, K. I.; Budnikov, P. A.; Molodtsov, D. A.; Koryagin, V. N.; Perederin, F. V.

    2014-07-01

    A service is described that makes possible the effective construction of a three-dimensional ionospheric model based on the data of ground receivers of signals from global navigation satellite positioning systems (GNSS). The obtained image has a high resolution, mainly because data from the IPG GNSS network of the Federal Service for Hydrometeorology and Environmental Monitoring (Rosgidromet) are used. A specially developed format and its implementation in the form of SQL structures are used to collect, transmit, and store data. The method of high-altitude radio tomography is used to construct the three-dimensional model. The operation of all system components (from registration point organization to the procedure for constructing the electron density three-dimensional distribution and publication of the total electron content map on the Internet) has been described in detail. The three-dimensional image of the ionosphere, obtained automatically, is compared with the ionosonde measurements, calculated using the two-dimensional low-altitude tomography method and averaged by the ionospheric model.

  15. CGDM: collaborative genomic data model for molecular profiling data using NoSQL.

    PubMed

    Wang, Shicai; Mares, Mihaela A; Guo, Yi-Ke

    2016-12-01

    High-throughput molecular profiling has greatly improved patient stratification and mechanistic understanding of diseases. With the increasing amount of data used in translational medicine studies in recent years, there is a need to improve the performance of data warehouses in terms of data retrieval and statistical processing. Both relational and Key Value models have been used for managing molecular profiling data. Key Value models such as SeqWare have been shown to be particularly advantageous in terms of query processing speed for large datasets. However, more improvement can be achieved, particularly through better indexing techniques of the Key Value models, taking advantage of the types of queries which are specific to high-throughput molecular profiling data. In this article, we introduce a Collaborative Genomic Data Model (CGDM), aimed at significantly increasing the query processing speed for the main classes of queries on genomic databases. CGDM creates three Collaborative Global Clustering Index Tables (CGCITs) to solve the velocity and variety issues at the cost of limited extra volume. Several benchmarking experiments were carried out, comparing CGDM implemented on HBase to the traditional SQL data model (TDM) implemented on both HBase and MySQL Cluster, using large publicly available molecular profiling datasets taken from NCBI and HapMap. In the microarray case, CGDM on HBase performed up to 246 times faster than TDM on HBase and 7 times faster than TDM on MySQL Cluster. In the single nucleotide polymorphism case, CGDM on HBase outperformed TDM on HBase by up to 351 times and TDM on MySQL Cluster by up to 9 times. The CGDM source code is available at https://github.com/evanswang/CGDM. Contact: y.guo@imperial.ac.uk.
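    The general indexing idea behind clustering index tables can be sketched without HBase: store cells under a composite row key whose leading component is the attribute queries group by, so a whole class of queries becomes one contiguous key-range scan. The gene names and values below are invented, and this is a stand-in for the technique, not CGDM itself:

```python
# Composite row keys (gene, sample) kept in sorted order, so "all samples
# for one gene" is a single contiguous range scan, as in an HBase table
# whose row key leads with the gene identifier.
from bisect import bisect_left, bisect_right

store = sorted({
    ("BRCA1", "s1"): 2.1, ("BRCA1", "s2"): 1.8,
    ("TP53", "s1"): 0.9,  ("TP53", "s2"): 1.1,
}.items())
keys = [k for k, _ in store]

def scan_gene(gene):
    """All samples for one gene via a single contiguous key-range scan."""
    lo = bisect_left(keys, (gene, ""))          # first key with this prefix
    hi = bisect_right(keys, (gene, "\uffff"))   # just past the last one
    return dict(store[lo:hi])

brca1 = scan_gene("BRCA1")
```

    With the wrong key order (sample first), the same query would touch scattered rows; choosing the leading key component to match the dominant query class is what buys the reported speedups.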

  16. High dimensional biological data retrieval optimization with NoSQL technology.

    PubMed

    Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike

    2014-01-01

    High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase on query performance on MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data.

  17. High dimensional biological data retrieval optimization with NoSQL technology

    PubMed Central

    2014-01-01

    Background High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase on query performance on MongoDB. Conclusions The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data. PMID:25435347
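    The gain the abstract reports comes from turning a per-patient relational lookup into a contiguous scan. A minimal sketch of that idea, with an ordered Python dict standing in for an HBase table and an invented "<gene>|<patient>" row-key scheme (the paper's actual key layout is not given here):

    ```python
    # Illustrative key-value layout: expression values keyed "<gene>|<patient>",
    # so fetching one gene across all patients is a prefix scan, not a join.
    # Key scheme and values are assumptions for this sketch.
    from collections import OrderedDict

    def make_key(gene: str, patient: str) -> str:
        """Composite row key: gene first, so per-gene rows sort contiguously."""
        return f"{gene}|{patient}"

    # An ordered key-value store stands in for an HBase table.
    store = OrderedDict(sorted({
        make_key("TP53", "patient01"): 7.2,
        make_key("TP53", "patient02"): 6.9,
        make_key("BRAF", "patient01"): 3.1,
    }.items()))

    def scan_gene(store, gene: str) -> dict:
        """Prefix scan: all expression values recorded for one gene."""
        prefix = gene + "|"
        return {k: v for k, v in store.items() if k.startswith(prefix)}

    print(scan_gene(store, "TP53"))
    ```

    In a real HBase deployment the prefix scan maps onto a range scan over sorted row keys, which is why the composite key order (gene before patient) matters.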

  18. A Look Under the Hood: How the JPL Tropical Cyclone Information System Uses Database Technologies to Present Big Data to Users

    NASA Astrophysics Data System (ADS)

    Knosp, B.; Gangl, M.; Hristova-Veleva, S. M.; Kim, R. M.; Li, P.; Turk, J.; Vu, Q. A.

    2015-12-01

    The JPL Tropical Cyclone Information System (TCIS) brings together satellite, aircraft, and model forecast data from several NASA, NOAA, and other data centers to assist researchers in comparing and analyzing data and model forecasts related to tropical cyclones. The TCIS has been running a near-real-time (NRT) data portal during the North Atlantic hurricane season (typically June through October) each year since 2010. Data collected by the TCIS varies by type, format, contents, and frequency and is served to the user in two ways: (1) as image overlays on a virtual globe and (2) as derived output from a suite of analysis tools. In order to support these two functions, the data must be collected and then made searchable by criteria such as date, mission, product, pressure level, and geospatial region. Creating a database architecture that is flexible enough to manage, intelligently interrogate, and ultimately present this disparate data to the user in a meaningful way has been the primary challenge. The database solution for the TCIS has been to use a hybrid MySQL + Solr implementation. After testing other relational database and NoSQL solutions, such as PostgreSQL and MongoDB respectively, this solution has given the TCIS the best offerings in terms of query speed and result reliability. This database solution also supports the challenging (and memory-overwhelming) geospatial queries that are necessary to support the analysis tools requested by users. Though hardly new technologies on their own, our implementation of MySQL + Solr had to be customized and tuned to accurately store, index, and search the TCIS data holdings. In this presentation, we will discuss how we arrived at our MySQL + Solr database architecture, why it offers us the most consistently fast and reliable results, and how it supports our front end so that we can offer users a look into our "big data" holdings.
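    The two-stage lookup the abstract describes (structured criteria first, then a geospatial test) can be sketched in a few lines; the field names, mission values, and bounding box below are invented for illustration, not taken from the TCIS:

    ```python
    # Hypothetical sketch of a hybrid metadata + geospatial query:
    # filter on structured criteria (mission), then apply a bounding-box test.
    records = [
        {"mission": "GPM", "date": "2015-08-20", "lat": 25.1, "lon": -71.0},
        {"mission": "GPM", "date": "2015-08-21", "lat": 14.5, "lon": -45.2},
        {"mission": "SMAP", "date": "2015-08-20", "lat": 26.0, "lon": -70.5},
    ]

    def query(records, mission, bbox):
        """bbox = (min_lat, max_lat, min_lon, max_lon)."""
        min_lat, max_lat, min_lon, max_lon = bbox
        by_mission = (r for r in records if r["mission"] == mission)  # SQL-style filter
        return [r for r in by_mission
                if min_lat <= r["lat"] <= max_lat and min_lon <= r["lon"] <= max_lon]

    storm_box = (20.0, 30.0, -80.0, -60.0)  # hypothetical Atlantic region
    print(query(records, "GPM", storm_box))
    ```

    In the actual system the first stage would be a MySQL predicate and the second a Solr spatial query; the point is only that each engine handles the half of the query it is good at.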

  19. High performance semantic factoring of giga-scale semantic graph databases.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    al-Saffar, Sinan; Adolf, Bob; Haglin, David

    2010-10-01

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors, including basic properties, connected components, namespace interaction, and typed paths.

  20. Workstation Analytics in Distributed Warfighting Experimentation: Results from Coalition Attack Guidance Experiment 3A

    DTIC Science & Technology

    2014-06-01

    central location. Each of the SQLite databases is converted and stored in one MySQL database, and the pcap files are parsed to extract call information...from the specific communications applications used during the experiment. This extracted data is then stored in the same MySQL database. With all...rhythm of the event. Figure 3 demonstrates the application usage over the course of the experiment for the EXDIR. As seen, the EXDIR spent the majority

  1. An Analysis Platform for Mobile Ad Hoc Network (MANET) Scenario Execution Log Data

    DTIC Science & Technology

    2016-01-01

    these technologies. 4.1 Backend Technologies • Java 1.8 • mysql-connector-java-5.0.8.jar • Tomcat • VirtualBox • Kali MANET Virtual Machine 4.2...Frontend Technologies • LAMPP 4.3 Database • MySQL Server 5. Database The SEDAP database settings and structure are described in this section...contains all the backend Java functionality including the web services, should be placed in the webapps directory inside the Tomcat installation

  2. Design and Implementation of 3D Model Data Management System Based on SQL

    NASA Astrophysics Data System (ADS)

    Li, Shitao; Zhang, Shixin; Zhang, Zhanling; Li, Shiming; Jia, Kun; Hu, Zhongxu; Ping, Liang; Hu, Youming; Li, Yanlei

    CAD/CAM technology plays an increasingly important role in the machinery manufacturing industry. As an important means of production, the three-dimensional models accumulated over many years of design work are valuable. Thus the management of these three-dimensional models is of great significance. This paper gives a detailed explanation of a method to design three-dimensional model databases based on SQL and to implement functions such as insertion, modification, query, and preview.
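    The insert/query functions such a system needs can be sketched with SQLite standing in for the SQL server; the table layout below (name, designer, model file as a BLOB) is an assumption for illustration, not the paper's schema:

    ```python
    # Minimal sketch of a 3D-model metadata store: SQLite stands in for the
    # SQL backend, and the model file is stored as a BLOB. Schema is invented.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE models (
        id INTEGER PRIMARY KEY, name TEXT, designer TEXT, data BLOB)""")

    def insert_model(name, designer, data: bytes):
        conn.execute("INSERT INTO models (name, designer, data) VALUES (?, ?, ?)",
                     (name, designer, data))

    def find_models(designer):
        return conn.execute("SELECT name FROM models WHERE designer = ?",
                            (designer,)).fetchall()

    insert_model("gearbox_v2", "zhang", b"\x00\x01")  # bytes stand in for a CAD file
    print(find_models("zhang"))
    ```

    Preview would require decoding the BLOB with the CAD application's own format reader; the database only has to store and retrieve the bytes and the searchable metadata.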

  3. Cloud-Based NoSQL Open Database of Pulmonary Nodules for Computer-Aided Lung Cancer Diagnosis and Reproducible Research.

    PubMed

    Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini

    2016-12-01

    Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists in those hard tasks, it is important to integrate computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational, document-oriented, cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified in nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database as a Service framework. Pulmonary nodule data were provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by a volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database now contains 379 exams, 838 nodules, and 8237 images (4029 CT scans and 4208 manually segmented nodules), and it is hosted in a MongoDB instance on a cloud infrastructure.
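    A document-oriented schema means each nodule is one self-describing JSON/BSON record rather than rows across several tables. A minimal sketch of what one such document might look like (all field names and values below are illustrative, not the paper's actual schema):

    ```python
    # Hypothetical nodule document as a document store would persist it:
    # nested ratings and a descriptor vector live inside one record, with
    # no fixed relational schema. Field names are invented for this sketch.
    import json

    nodule_doc = {
        "exam_id": "LIDC-0001",
        "nodule_id": 1,
        "texture_attributes_3d": [0.82, 1.74, 0.05],   # example descriptor vector
        "radiologist_ratings": {"malignancy": 4, "spiculation": 2},
        "segmentation": {"slices": 12, "voxels": 4380},
    }

    # Round-trip through JSON, as a document database effectively does.
    restored = json.loads(json.dumps(nodule_doc))
    print(restored["radiologist_ratings"]["malignancy"])
    ```

    The flexibility matters here because different nodules can carry different optional annotations without schema migrations.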

  4. A web-based data visualization tool for the MIMIC-II database.

    PubMed

    Lee, Joon; Ribey, Evan; Wallace, James R

    2016-02-04

    Although MIMIC-II, a public intensive care database, has been recognized as an invaluable resource for many medical researchers worldwide, becoming a proficient MIMIC-II researcher requires knowledge of SQL programming and an understanding of the MIMIC-II database schema. These are challenging requirements especially for health researchers and clinicians who may have limited computer proficiency. In order to overcome this challenge, our objective was to create an interactive, web-based MIMIC-II data visualization tool that first-time MIMIC-II users can easily use to explore the database. The tool offers two main features: Explore and Compare. The Explore feature enables the user to select a patient cohort within MIMIC-II and visualize the distributions of various administrative, demographic, and clinical variables within the selected cohort. The Compare feature enables the user to select two patient cohorts and visually compare them with respect to a variety of variables. The tool is also helpful to experienced MIMIC-II researchers who can use it to substantially accelerate the cumbersome and time-consuming steps of writing SQL queries and manually visualizing extracted data. Any interested researcher can use the MIMIC-II data visualization tool for free to quickly and conveniently conduct a preliminary investigation on MIMIC-II with a few mouse clicks. Researchers can also use the tool to learn the characteristics of the MIMIC-II patients. Since it is still impossible to conduct multivariable regression inside the tool, future work includes adding analytics capabilities. Also, the next version of the tool will aim to utilize MIMIC-III which contains more data.
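    The SQL the tool writes on the user's behalf is of the cohort-then-aggregate kind. A toy version using SQLite, with an invented table layout (the real MIMIC-II schema is far larger): select a cohort, then compute a variable's distribution.

    ```python
    # Toy cohort/distribution query of the kind the visualization tool
    # generates; table and column names are invented for illustration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE patients (id INTEGER, unit TEXT, age INTEGER)")
    conn.executemany("INSERT INTO patients VALUES (?, ?, ?)",
                     [(1, "MICU", 67), (2, "MICU", 72), (3, "SICU", 55)])

    # Cohort: MICU patients; distribution: patient count per age decade.
    rows = conn.execute("""
        SELECT (age / 10) * 10 AS decade, COUNT(*) AS n
        FROM patients WHERE unit = 'MICU'
        GROUP BY decade ORDER BY decade
    """).fetchall()
    print(rows)  # [(60, 1), (70, 1)]
    ```

    The tool's contribution is precisely that a first-time user gets this result from a few clicks instead of having to write the GROUP BY themselves.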

  5. Detecting Distributed SQL Injection Attacks in a Eucalyptus Cloud Environment

    NASA Technical Reports Server (NTRS)

    Kebert, Alan; Barnejee, Bikramjit; Solano, Juan; Solano, Wanda

    2013-01-01

    The cloud computing environment offers malicious users the ability to spawn multiple instances of cloud nodes that are similar to virtual machines, except that they can have separate external IP addresses. In this paper we demonstrate how this ability can be exploited by an attacker to distribute his/her attack, in particular SQL injection attacks, in such a way that an intrusion detection system (IDS) could fail to identify this attack. To demonstrate this, we set up a small private cloud, established a vulnerable website in one instance, and placed an IDS within the cloud to monitor the network traffic. We found that an attacker could quite easily defeat the IDS by periodically altering its IP address. To detect such an attacker, we propose to use multi-agent plan recognition, where the multiple source IPs are considered as different agents who are mounting a collaborative attack. We show that such a formulation of this problem yields a more sophisticated approach to detecting SQL injection attacks within a cloud computing environment.
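    The paper's core observation is that per-source-IP detection misses an attack whose fragments arrive from many cloud instances, while correlating events by target recovers it. A hedged sketch of that correlation step (the event format, signature regex, and threshold are assumptions for illustration, not the paper's recognizer):

    ```python
    # Group suspicious request fragments by target rather than by source IP,
    # so an attack spread across rotating IPs still aggregates into one plan.
    import re
    from collections import defaultdict

    SQLI_FRAGMENT = re.compile(r"('|--|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE)

    events = [
        {"src_ip": "10.0.0.5", "target": "/login", "payload": "name=' OR 1=1"},
        {"src_ip": "10.0.0.9", "target": "/login", "payload": "name=x' UNION"},
        {"src_ip": "10.0.0.7", "target": "/about", "payload": "page=2"},
    ]

    by_target = defaultdict(set)
    for e in events:
        if SQLI_FRAGMENT.search(e["payload"]):
            by_target[e["target"]].add(e["src_ip"])

    # Flag targets probed with injection fragments from multiple distinct IPs.
    coordinated = {t for t, ips in by_target.items() if len(ips) >= 2}
    print(coordinated)
    ```

    The multi-agent plan recognition in the paper is richer than this set intersection, but the grouping key (the attacked resource, not the attacker's address) is the same.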

  6. CHASM and SNVBox: toolkit for detecting biologically important single nucleotide mutations in cancer

    PubMed Central

    Carter, Hannah; Diekhans, Mark; Ryan, Michael C.; Karchin, Rachel

    2011-01-01

    Summary: Thousands of cancer exomes are currently being sequenced, yielding millions of non-synonymous single nucleotide variants (SNVs) of possible relevance to disease etiology. Here, we provide a software toolkit to prioritize SNVs based on their predicted contribution to tumorigenesis. It includes a database of precomputed, predictive features covering all positions in the annotated human exome and can be used either stand-alone or as part of a larger variant discovery pipeline. Availability and Implementation: MySQL database, source code and binaries freely available for academic/government use at http://wiki.chasmsoftware.org, Source in Python and C++. Requires 32 or 64-bit Linux system (tested on Fedora Core 8,10,11 and Ubuntu 10), 2.5*≤ Python <3.0*, MySQL server >5.0, 60 GB available hard disk space (50 MB for software and data files, 40 GB for MySQL database dump when uncompressed), 2 GB of RAM. Contact: karchin@jhu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21685053

  7. Datacube Services in Action, Using Open Source and Open Standards

    NASA Astrophysics Data System (ADS)

    Baumann, P.; Misev, D.

    2016-12-01

    Array Databases comprise novel, promising technology for massive spatio-temporal datacubes, extending the SQL paradigm of "any query, anytime" to n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. The rasdaman ("raster data manager") system, which has pioneered Array Databases, is available in open source on www.rasdaman.org. Its declarative query language extends SQL with array operators which are optimized and parallelized on the server side. The rasdaman engine, which is part of OSGeo Live, is mature and in operational use on databases individually holding dozens of Terabytes. Further, the rasdaman concepts have strongly impacted international Big Data standards in the field, including the forthcoming MDA ("Multi-Dimensional Array") extension to ISO SQL, the OGC Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS) standards, and the forthcoming INSPIRE WCS/WCPS; in both OGC and INSPIRE, rasdaman is the WCS Core Reference Implementation. In our talk we present concepts, architecture, operational services, and standardization impact of open-source rasdaman, as well as experiences made.
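    The kind of server-side array operation such a query language evaluates declaratively can be sketched in plain Python: collapse the spatial axes of a tiny time/lat/lon datacube while keeping the time axis. In rasdaman this would be one expression in its SQL-style array language; the cube values here are fabricated.

    ```python
    # Pure-Python stand-in for an array-query reduction over a datacube.
    # cube[t][y][x]: 2 time steps over a 2x2 spatial grid (invented values).
    cube = [
        [[1.0, 2.0], [3.0, 4.0]],
        [[5.0, 6.0], [7.0, 8.0]],
    ]

    def spatial_mean(cube):
        """Collapse lat/lon, keep the time axis -- an array-style aggregation."""
        out = []
        for t_slice in cube:
            cells = [v for row in t_slice for v in row]
            out.append(sum(cells) / len(cells))
        return out

    print(spatial_mean(cube))  # [2.5, 6.5]
    ```

    The Array Database's advantage is that this reduction runs next to the partitioned storage, so only two numbers cross the network instead of the whole cube.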

  8. Providing R-Tree Support for Mongodb

    NASA Astrophysics Data System (ADS)

    Xiang, Longgang; Shao, Xiaotian; Wang, Dehao

    2016-06-01

    Supporting large amounts of spatial data is a significant characteristic of modern databases. However, unlike some mature relational databases, such as Oracle and PostgreSQL, most current burgeoning NoSQL databases are not well designed for storing geospatial data, which is becoming increasingly important in various fields. In this paper, we propose a novel method to provide an R-tree index, as well as corresponding spatial range query and nearest neighbour query functions, for MongoDB, one of the most prevalent NoSQL databases. First, after in-depth analysis of MongoDB's features, we devise an efficient tabular document structure which flattens the R-tree index into MongoDB collections. Further, relevant mechanisms of R-tree operations are described, and then we discuss in detail how to integrate the R-tree into MongoDB. Finally, we present experimental results which show that our proposed method outperforms the built-in spatial index of MongoDB. Our research will greatly facilitate big data management issues with MongoDB in a variety of geospatial information applications.
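    Flattening an R-tree into documents means each node becomes one record holding its parent, its minimum bounding rectangle (MBR), and its children; a range query then walks the tree by following child ids. The structure below is an assumed illustration of that idea, not the paper's actual collection layout:

    ```python
    # Illustrative flattened R-tree: one document per node with its MBR.
    # MBR = (min_x, min_y, max_x, max_y); values invented for this sketch.
    nodes = [
        {"_id": 0, "parent": None, "mbr": (0, 0, 10, 10), "children": [1, 2]},
        {"_id": 1, "parent": 0, "mbr": (0, 0, 5, 5), "children": [], "items": ["A"]},
        {"_id": 2, "parent": 0, "mbr": (5, 5, 10, 10), "children": [], "items": ["B"]},
    ]
    by_id = {n["_id"]: n for n in nodes}

    def intersects(a, b):
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    def range_query(node_id, window, out):
        """Descend only into subtrees whose MBR intersects the query window."""
        node = by_id[node_id]
        if not intersects(node["mbr"], window):
            return
        out.extend(node.get("items", []))
        for child in node["children"]:
            range_query(child, window, out)

    hits = []
    range_query(0, (4, 4, 6, 6), hits)
    print(hits)  # ['A', 'B']
    ```

    In MongoDB each `by_id` lookup would be a `_id` fetch from the collection; the pruning logic is identical.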

  9. SiC: An Agent Based Architecture for Preventing and Detecting Attacks to Ubiquitous Databases

    NASA Astrophysics Data System (ADS)

    Pinzón, Cristian; de Paz, Yanira; Bajo, Javier; Abraham, Ajith; Corchado, Juan M.

    One of the main attacks on ubiquitous databases is the structured query language (SQL) injection attack, which causes severe damage both in the commercial aspect and in the user’s confidence. This chapter proposes the SiC architecture as a solution to the SQL injection attack problem. This is a hierarchical distributed multiagent architecture, which involves an entirely new approach with respect to existing architectures for the prevention and detection of SQL injections. SiC incorporates a kind of intelligent agent, which integrates a case-based reasoning system. This agent, which is the core of the architecture, allows the application of detection techniques based on anomalies as well as those based on patterns, providing a great degree of autonomy, flexibility, robustness and dynamic scalability. The characteristics of the multiagent system allow the architecture to detect attacks from different types of devices, regardless of their physical location. The architecture has been tested on a medical database, guaranteeing safe access from various devices such as PDAs and notebook computers.
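    The two detection styles the chapter combines can be contrasted in a few lines: a pattern rule (known injection signatures) and an anomaly rule (the query's normalized shape is absent from a case base of legitimate queries). Both rules below are simplified illustrations, not the SiC agents' actual logic:

    ```python
    # Hedged sketch: pattern-based vs. anomaly-based SQL injection checks.
    import re

    PATTERNS = re.compile(r"(--|;|\bUNION\b|\bOR\b\s+'?1'?\s*=\s*'?1)", re.IGNORECASE)

    # Toy "case base" of normalized shapes of legitimate queries.
    baseline = {"SELECT * FROM patients WHERE id = ?"}

    def normalize(query: str) -> str:
        """Replace literals with placeholders to compare query structure."""
        return re.sub(r"'[^']*'|\b\d+\b", "?", query)

    def suspicious(query: str) -> bool:
        return bool(PATTERNS.search(query)) or normalize(query) not in baseline

    print(suspicious("SELECT * FROM patients WHERE id = 7 OR '1'='1'"))  # True
    ```

    The case-based reasoning in SiC learns and revises such baselines per application; the sketch only shows why combining both rules catches more than either alone.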

  10. Distributed Computing for the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Chudoba, J.

    2015-12-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been successfully used. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has been among the top ten EGI users by total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for bulk production. The new system can also use available resources in clouds. The Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we give a comparison of the old and new production systems and report on the experience of migrating to the new system.

  11. Statewide Inventories of Heritage Resources: Macris and the Experience in Massachusetts

    NASA Astrophysics Data System (ADS)

    Stott, P. H.

    2017-08-01

    The Massachusetts Historical Commission (MHC) is the State Historic Preservation Office for Massachusetts. Established in 1963, MHC has been inventorying historic properties for over half a century. Since 1987, it has maintained a heritage database, the Massachusetts Cultural Resource Information System, or MACRIS. Today MACRIS holds over 206,000 records from the 351 towns and cities across the Commonwealth. Since 2004, a selection of the more than 150 MACRIS fields has been available online at mhcmacris.net. MACRIS is widely used by independent consultants preparing project review files, by MHC staff in its regulatory responsibilities, by local historical commissions monitoring threats to their communities, as well as by scholars, historical organizations, genealogists, property owners, reporters, and the general public interested in the history of the built environment. In 2016 MACRIS began migrating off its three-decade-old Pick multivalue database to SQL Server, and in 2017, the first redesign of its thirteen-year-old web interface should start to improve usability. Longer-term improvements have the goal of standardizing terminology and ultimately bringing interoperability with other heritage databases closer to reality.

  12. Do-It-Yourself: A Special Library's Approach to Creating Dynamic Web Pages Using Commercial Off-The-Shelf Applications

    NASA Technical Reports Server (NTRS)

    Steeman, Gerald; Connell, Christopher

    2000-01-01

    Many librarians may feel that dynamic Web pages are out of their reach, financially and technically. Yet we are reminded in library and Web design literature that static home pages are a thing of the past. This paper describes how librarians at the Institute for Defense Analyses (IDA) library developed a database-driven, dynamic intranet site using commercial off-the-shelf applications. Administrative issues include surveying a library users group for interest and needs evaluation; outlining metadata elements; and committing resources, from managing time to populate the database to training in Microsoft FrontPage and Web-to-database design. Technical issues covered include Microsoft Access database fundamentals and lessons learned in the Web-to-database process (including setting up Data Source Names (DSNs), redesigning queries to accommodate the Web interface, and understanding the Access 97 query language vs. Structured Query Language (SQL)). This paper also offers tips on editing Active Server Pages (ASP) scripting to create desired results. A how-to annotated resource list closes out the paper.

  13. Electronic patient record and archive of records in Cardio.net system for telecardiology.

    PubMed

    Sierdziński, Janusz; Karpiński, Grzegorz

    2003-01-01

    In modern medicine, a well-structured patient data set, fast access to it, and reporting capability have become important issues. With the dynamic development of information technology (IT), such issues are addressed by building electronic patient record (EPR) archives. We then obtain fast access to patient data, diagnostic and treatment protocols, etc., resulting in more efficient, better and cheaper treatment. The aim of the work was to design a uniform Electronic Patient Record, implemented in the cardio.net system for telecardiology, allowing co-operation among regional hospitals and reference centers. It includes questionnaires for demographic data and questionnaires supporting the doctor's work (initial diagnosis, final diagnosis, history and physical, ECG at discharge, applied treatment, additional tests, drugs, daily and periodical reports). A browser is implemented in the EPR archive to facilitate data retrieval. Several tools were used for creating the EPR and EPR archive: XML, PHP, JavaScript and MySQL. A separate issue is the security of data on the WWW server. Security is ensured via Secure Sockets Layer (SSL) protocols and other tools. The EPR in the Cardio.net system is a module enabling the joint work of many physicians and communication among different medical centers.

  14. Photo-z-SQL: Integrated, flexible photometric redshift computation in a database

    NASA Astrophysics Data System (ADS)

    Beck, R.; Dobos, L.; Budavári, T.; Szalay, A. S.; Csabai, I.

    2017-04-01

    We present a flexible template-based photometric redshift estimation framework, implemented in C#, that can be seamlessly integrated into a SQL database (or DB) server and executed on-demand in SQL. The DB integration eliminates the need to move large photometric datasets outside a database for redshift estimation, and utilizes the computational capabilities of DB hardware. The code is able to perform both maximum likelihood and Bayesian estimation, and can handle inputs of variable photometric filter sets and corresponding broad-band magnitudes. It is possible to take into account the full covariance matrix between filters, and filter zero points can be empirically calibrated using measurements with given redshifts. The list of spectral templates and the prior can be specified flexibly, and the expensive synthetic magnitude computations are done via lazy evaluation, coupled with a caching of results. Parallel execution is fully supported. For large upcoming photometric surveys such as the LSST, the ability to perform in-place photo-z calculation would be a significant advantage. Also, the efficient handling of variable filter sets is a necessity for heterogeneous databases, for example the Hubble Source Catalog, and for cross-match services such as SkyQuery. We illustrate the performance of our code on two reference photo-z estimation testing datasets, and provide an analysis of execution time and scalability with respect to different configurations. The code is available for download at https://github.com/beckrob/Photo-z-SQL.
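    The maximum-likelihood step in template-based photo-z estimation reduces to comparing predicted and observed magnitudes via chi-square over a template/redshift grid and keeping the best hypothesis. The grid, magnitudes, and errors below are fabricated for illustration; the real code synthesizes magnitudes from SED templates inside the database:

    ```python
    # Toy maximum-likelihood photo-z fit: minimize chi-square over a
    # (redshift, predicted magnitudes) grid. All numbers are invented.
    template_grid = [
        (0.1, [21.0, 20.5, 20.2]),
        (0.5, [22.1, 21.0, 20.4]),
        (1.0, [23.5, 22.0, 20.9]),
    ]
    observed = [22.0, 21.1, 20.5]   # broad-band magnitudes in 3 filters
    sigma = [0.1, 0.1, 0.1]         # photometric errors

    def chi2(pred, obs, sig):
        return sum(((p - o) / s) ** 2 for p, o, s in zip(pred, obs, sig))

    best_z, best = min(((z, chi2(pred, observed, sigma))
                        for z, pred in template_grid), key=lambda t: t[1])
    print(best_z)  # 0.5
    ```

    The Bayesian mode the paper also supports would weight these chi-square likelihoods by a prior and integrate, rather than taking the single minimum.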

  15. NoSQL technologies for the CMS Conditions Database

    NASA Astrophysics Data System (ADS)

    Sipos, Roland

    2015-12-01

    With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue; therefore, the need for consistent and highly available access to the Conditions makes a strong case for revisiting different aspects of the current data storage solutions. We present a study of alternative data storage backends for the Conditions Databases, evaluating some of the most popular NoSQL databases to support a key-value representation of the CMS Conditions. The definition of the database infrastructure is based on the need to store the conditions as BLOBs. Because of this, each condition can reach a size that may require special treatment (splitting) in these NoSQL databases. As big binary objects may be problematic in several database systems, and also to give an accurate baseline, a testing framework extension was implemented to measure the characteristics of the handling of arbitrary binary data in these databases. Based on the evaluation, prototypes of a document store, a column-oriented store, and a plain key-value store were deployed. An adaptation layer to access the backends in the CMS Offline software was developed to provide transparent support for these NoSQL databases in the CMS context. Additional data modelling approaches and considerations in the software layer, deployment, and automation of the databases are also covered in the research. In this paper we present the results of the evaluation as well as a performance comparison of the prototypes studied.
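    The "splitting" mentioned above is the standard workaround when a BLOB exceeds a store's value-size limit: write it as fixed-size chunks under derived keys plus a manifest, and reassemble on read. A minimal sketch of that pattern (chunk size and key scheme are assumptions for illustration):

    ```python
    # Sketch of BLOB splitting for a key-value store: a manifest entry holds
    # the chunk count, and each chunk lives under a derived "<key>#<i>" key.
    CHUNK = 4  # bytes; a real store would use e.g. a few MB

    def put_blob(store: dict, key: str, blob: bytes) -> None:
        parts = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
        store[key] = len(parts)                 # manifest: chunk count
        for i, part in enumerate(parts):
            store[f"{key}#{i}"] = part          # one entry per chunk

    def get_blob(store: dict, key: str) -> bytes:
        return b"".join(store[f"{key}#{i}"] for i in range(store[key]))

    kv = {}
    payload = b"conditions-payload-blob"
    put_blob(kv, "tag1/iov42", payload)
    assert get_blob(kv, "tag1/iov42") == payload
    ```

    The cost of this scheme is that a read becomes several round trips, which is exactly the kind of behaviour the paper's testing framework extension was built to measure.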

  16. DICOM image integration into an electronic medical record using thin viewing clients

    NASA Astrophysics Data System (ADS)

    Stewart, Brent K.; Langer, Steven G.; Taira, Ricky K.

    1998-07-01

    Purpose -- To integrate radiological DICOM images into our currently existing web-browsable Electronic Medical Record (MINDscape). Over the last five years the University of Washington has created a clinical data repository combining, in a distributed relational database, information from multiple departmental databases (MIND). A text-based view of this data called the Mini Medical Record (MMR) has been available for three years. MINDscape, unlike the text-based MMR, provides a platform-independent, web browser view of the MIND dataset that can easily be linked to other information resources on the network. We have now added the integration of radiological images into MINDscape through a DICOM webserver. Methods/New Work -- We have integrated a commercial webserver that acts as a DICOM Storage Class Provider with our computed radiography (CR), computed tomography (CT), digital fluoroscopy (DF), magnetic resonance (MR) and ultrasound (US) scanning devices. These images can be accessed through CGI queries or by linking the image server database using ODBC or SQL gateways. This allows the use of dynamic HTML links to the images on the DICOM webserver from MINDscape, so that the radiology reports already resident in the MIND repository can be married with the associated images through the unique examination accession number generated by our Radiology Information System (RIS). The web browser plug-in used provides a wavelet decompression engine (up to 16 bits per pixel) and performs the following image manipulation functions: window/level, flip, invert, sort, rotate, zoom, cine-loop and save as JPEG. Results -- Radiological DICOM image sets (CR, CT, MR and US) are displayed with associated exam reports for referring physicians and clinicians anywhere within the widespread academic medical center on PCs, Macs, X-terminals and Unix computers. This system is also being used for home teleradiology applications. Conclusion -- Radiological DICOM images can be made available medical center wide to physicians quickly using low-cost and ubiquitous, thin client browsing technology and wavelet compression.

  17. RICD: a rice indica cDNA database resource for rice functional genomics.

    PubMed

    Lu, Tingting; Huang, Xuehui; Zhu, Chuanrang; Huang, Tao; Zhao, Qiang; Xie, Kabing; Xiong, Lizhong; Zhang, Qifa; Han, Bin

    2008-11-26

    The Oryza sativa L. indica subspecies is the most widely cultivated rice. During the last few years, we have collected over 20,000 putative full-length cDNAs and over 40,000 ESTs isolated from various cDNA libraries of two indica varieties, Guangluai 4 and Minghui 63. A database of the rice indica cDNAs was therefore built to provide a comprehensive web data source for searching and retrieving the indica cDNA clones. The Rice Indica cDNA Database (RICD) is an online MySQL-PHP driven database with a user-friendly web interface. It allows investigators to query the cDNA clones by keyword, genome position, nucleotide or protein sequence, and putative function. It also provides a range of information for each cDNA, including sequences, protein domain annotations, similarity search results, SNP and InDel information, and hyperlinks to gene annotation in both The Rice Annotation Project Database (RAP-DB) and The TIGR Rice Genome Annotation Resource, the expression atlas in RiceGE, and the variation report in Gramene. The online rice indica cDNA database provides a cDNA resource with comprehensive information to researchers for functional analysis of the indica subspecies and for comparative genomics. The RICD database is available through our website http://www.ncgr.ac.cn/ricd.

  18. Effects of Electronic Information Resources Skills Training for Lecturers on Pedagogical Practices and Research Productivity

    ERIC Educational Resources Information Center

    Bhukuvhani, Crispen; Chiparausha, Blessing; Zuvalinyenga, Dorcas

    2012-01-01

    Lecturers use various electronic resources at different frequencies. The university library's information literacy skills workshops and seminars are the main sources of knowledge of accessing electronic resources. The use of electronic resources can be said to have positively affected lecturers' pedagogical practices and their work in general. The…

  19. From Tedious to Timely: Screencasting to Troubleshoot Electronic Resource Issues

    ERIC Educational Resources Information Center

    Hartnett, Eric; Thompson, Carole

    2010-01-01

    The shift from traditional print materials to electronic resources, in conjunction with the rise in the number of distance education programs, has left many electronic resource librarians scrambling to keep up with the resulting inundation of electronic resource problems. When it comes to diagnosing these problems, words do not always convey all…

  20. Electronic Resources and Mission Creep: Reorganizing the Library for the Twenty-First Century

    ERIC Educational Resources Information Center

    Stachokas, George

    2009-01-01

    The position of electronic resources librarian was created to serve as a specialist in the negotiation of license agreements for electronic resources, but mission creep has added more functions to the routine work of electronic resources such as cataloging, gathering information for collection development, and technical support. As electronic…

  1. Re-Engineering the United States Marine Corps’ Enlisted Assignment Model (EAM)

    DTIC Science & Technology

    1998-06-01

    and Execution button opens the switchboard in Figure 12. This form accesses all of the VBA code that is associated with this form and the code that...the prototype to prompt the user and to inform him of what he is about to do. Each of the buttons that are on these forms is connected to an SQL ...into the field for building a rule are the values 2;3;4;5;6;7. The user can enter an SQL statement that would determine the values or the user could

  2. Virtual file system on NoSQL for processing high volumes of HL7 messages.

    PubMed

    Kimura, Eizen; Ishihara, Ken

    2015-01-01

    The Standardized Structured Medical Information Exchange (SS-MIX) is intended to be the standard repository for HL7 messages that depend on a local file system. However, its scalability is limited. We implemented a virtual file system using NoSQL to incorporate modern computing technology into SS-MIX and allow the system to integrate local patient IDs from different healthcare systems into a universal system. We discuss its implementation using the database MongoDB and describe its performance in a case study.
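
The core idea above — resolving local patient IDs from different healthcare systems into one universal ID over a document store — can be sketched in a few lines. This is a minimal illustration only: plain Python dicts stand in for MongoDB collections, and all names (facilities, IDs, functions) are invented, not taken from SS-MIX.

```python
# Illustrative sketch: local patient IDs from different hospitals resolve to
# one universal ID, and HL7 messages are stored as documents keyed by it.
# Dicts stand in for MongoDB collections; names are hypothetical.
id_map = {}        # (facility, local_id) -> universal_id
messages = []      # document store: one dict per HL7 message

def register_patient(facility, local_id, universal_id):
    id_map[(facility, local_id)] = universal_id

def store_message(facility, local_id, hl7_text):
    uid = id_map[(facility, local_id)]
    messages.append({"universal_id": uid, "facility": facility, "hl7": hl7_text})

def messages_for(universal_id):
    return [m for m in messages if m["universal_id"] == universal_id]

register_patient("hospital_a", "123", "U-001")
register_patient("hospital_b", "xyz", "U-001")   # same person, different local ID
store_message("hospital_a", "123", "MSH|^~\\&|SendingApp|...")
store_message("hospital_b", "xyz", "MSH|^~\\&|OtherApp|...")
print(len(messages_for("U-001")))  # 2 -- both messages resolve to one patient
```

In a real deployment the two dicts would be MongoDB collections with an index on the universal ID, so the lookup scales past a local file system.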

  3. Electronic Resources Evaluation Central: Using Off-the-Shelf Software, Web 2.0 Tools, and LibGuides to Manage an Electronic Resources Evaluation Process

    ERIC Educational Resources Information Center

    England, Lenore; Fu, Li

    2011-01-01

    A critical part of electronic resources management, the electronic resources evaluation process is multi-faceted and includes a seemingly endless range of resources and tools involving numerous library staff. A solution is to build a Web site to bring all of the components together that can be implemented quickly and result in an organizational…

  4. "Science SQL" as a Building Block for Flexible, Standards-based Data Infrastructures

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2016-04-01

    We have learnt to live with the pain of separating data and metadata into non-interoperable silos. For metadata, we enjoy the flexibility of databases, be they relational, graph, or some other NoSQL. Contrasting this, users still "drown in files" as an unstructured, low-level archiving paradigm. It is time to bridge this chasm which once was technologically induced, but today can be overcome. One building block towards a common re-integrated information space is to support massive multi-dimensional arrays. These "datacubes" appear as sensor, image, simulation, and statistics data in all science and engineering domains, and beyond. For example, 2-D satellite imagery, 3-D x/y/t image timeseries and x/y/z geophysical voxel data, and 4-D x/y/z/t climate data contribute to today's data deluge in the Earth sciences. Virtual observatories in the Space sciences routinely generate Petabytes of such data. Life sciences deal with microarray data, confocal microscopy, human brain data, which all fall into the same category. The ISO SQL/MDA (Multi-Dimensional Arrays) candidate standard is extending SQL with modelling and query support for n-D arrays ("datacubes") in a flexible, domain-neutral way. This heralds a new generation of services with new quality parameters, such as flexibility, ease of access, embedding into well-known user tools, and scalability mechanisms that remain completely transparent to users. Technology like the EU rasdaman ("raster data manager") Array Database system can support all of the above examples simultaneously, with one technology. This is practically proven: As of today, rasdaman is in operational use on hundreds of Terabytes of satellite image timeseries datacubes, with transparent query distribution across more than 1,000 nodes. Therefore, Array Databases offering SQL/MDA constitute a natural common building block for next-generation data infrastructures.
Being initiator and editor of the standard we present principles, implementation facets, and application examples as a basis for further discussion. Further, we highlight recent implementation progress in parallelization, data distribution, and query optimization showing their effects on real-life use cases.
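
The kind of declarative datacube operation SQL/MDA standardizes — subset an n-D array, then aggregate along one axis — can be written out procedurally to make the semantics concrete. The sketch below is illustrative only: the "query" shown in the comment uses a hypothetical MDA-like syntax, and a real Array Database such as rasdaman would evaluate the declarative form server-side with optimization and partitioned storage.

```python
# A tiny x/y/t "datacube" as a dict of cells; real systems store partitioned
# n-D arrays. Dimensions and values are illustrative.
NX, NY, NT = 4, 4, 3
cube = {(x, y, t): float(x * NY * NT + y * NT + t)
        for x in range(NX) for y in range(NY) for t in range(NT)}

# Hypothetical MDA-style query this code mimics:
#   SELECT avg_cells(c[0:1, 0:1, *]) FROM cube AS c
def mean_over_time(cube, x_range, y_range):
    """Spatial subset, then average along the time axis for each pixel."""
    out = {}
    for x in x_range:
        for y in y_range:
            series = [cube[(x, y, t)] for t in range(NT)]
            out[(x, y)] = sum(series) / len(series)
    return out

result = mean_over_time(cube, range(2), range(2))
print(result[(0, 0)])  # 1.0 -- mean of the t-series 0, 1, 2 at pixel (0, 0)
```

The point of the standard is precisely that users write only the declarative query; the loop structure, parallelization, and data placement stay on the server side.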

  5. Agile Datacube Analytics (not just) for the Earth Sciences

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Merticariu, Vlad; Baumann, Peter

    2017-04-01

    Metadata are considered small, smart, and queryable; data, on the other hand, are known as big, clumsy, hard to analyze. Consequently, gridded data - such as images, image timeseries, and climate datacubes - are managed separately from the metadata, and with different, restricted retrieval capabilities. One reason for this silo approach is that databases, while good at tables, XML hierarchies, RDF graphs, etc., traditionally do not support multi-dimensional arrays well. This gap is being closed by Array Databases which extend the SQL paradigm of "any query, anytime" to NoSQL arrays. They introduce semantically rich modelling combined with declarative, high-level query languages on n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. This way, they offer new vistas in flexibility, scalability, performance, and data integration. In this respect, the forthcoming ISO SQL extension MDA ("Multi-dimensional Arrays") will be a game changer in Big Data Analytics. We introduce concepts and opportunities through the example of rasdaman ("raster data manager") which in fact has pioneered the field of Array Databases and forms the blueprint for ISO SQL/MDA and further Big Data standards, such as OGC WCPS for querying spatio-temporal Earth datacubes. With operational installations exceeding 140 TB, queries have been split across more than one thousand cloud nodes, using CPUs as well as GPUs. Installations can easily be mashed up securely, enabling large-scale location-transparent query processing in federations. Federation queries have been demonstrated live at EGU 2016 spanning Europe and Australia in the context of the intercontinental EarthServer initiative, visualized through NASA WorldWind.

  6. Agile Datacube Analytics (not just) for the Earth Sciences

    NASA Astrophysics Data System (ADS)

    Baumann, P.

    2016-12-01

    Metadata are considered small, smart, and queryable; data, on the other hand, are known as big, clumsy, hard to analyze. Consequently, gridded data - such as images, image timeseries, and climate datacubes - are managed separately from the metadata, and with different, restricted retrieval capabilities. One reason for this silo approach is that databases, while good at tables, XML hierarchies, RDF graphs, etc., traditionally do not support multi-dimensional arrays well. This gap is being closed by Array Databases which extend the SQL paradigm of "any query, anytime" to NoSQL arrays. They introduce semantically rich modelling combined with declarative, high-level query languages on n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. This way, they offer new vistas in flexibility, scalability, performance, and data integration. In this respect, the forthcoming ISO SQL extension MDA ("Multi-dimensional Arrays") will be a game changer in Big Data Analytics. We introduce concepts and opportunities through the example of rasdaman ("raster data manager") which in fact has pioneered the field of Array Databases and forms the blueprint for ISO SQL/MDA and further Big Data standards, such as OGC WCPS for querying spatio-temporal Earth datacubes. With operational installations exceeding 140 TB, queries have been split across more than one thousand cloud nodes, using CPUs as well as GPUs. Installations can easily be mashed up securely, enabling large-scale location-transparent query processing in federations. Federation queries have been demonstrated live at EGU 2016 spanning Europe and Australia in the context of the intercontinental EarthServer initiative, visualized through NASA WorldWind.

  7. Development of an electronic radiation oncology patient information management system.

    PubMed

    Mandal, Abhijit; Asthana, Anupam Kumar; Aggarwal, Lalit Mohan

    2008-01-01

    The quality of patient care is critically influenced by the availability of accurate information and its efficient management. Radiation oncology involves many information components: information related to the patient (e.g., profile, disease site, stage), to people (radiation oncologists, radiological physicists, technologists, etc.), and to equipment (diagnostic, planning, treatment, etc.). These different data must be integrated, and a comprehensive information management system is essential for efficient storage and retrieval of the enormous amounts of information. A radiation therapy patient information system (RTPIS) has been developed using open source software. PHP and JavaScript were used as the programming languages, MySQL as the database, and HTML and CSS as the design tools. The system uses typical web browsing technology on a WAMP5 server. Any user with a unique user ID and password can access the RTPIS. The user ID and password are issued separately to each individual according to the person's job responsibilities and accountability, so that users can access only the data related to their job responsibilities. With this system, authentic users gain instant access through a simple web browsing procedure, and all types of users in the radiation oncology department should find it user-friendly. Maintenance of the system will not require large human resources or space. The file storage and retrieval process should be satisfactory, unique, uniform, and easily accessible, with adequate data protection, very little possibility of unauthorized handling, and minimal risk of loss or accidental destruction of information.
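
The role-scoped access rule described above (each user sees only the record types tied to their job responsibilities) is a standard pattern. A minimal, purely illustrative sketch follows; the roles, user IDs, and record types are invented, not taken from the RTPIS implementation.

```python
# Illustrative role-based access check: user ID -> role -> permitted record
# types. All names here are hypothetical examples.
ROLE_PERMISSIONS = {
    "radiation_oncologist": {"patient_profile", "treatment_plan"},
    "technologist": {"treatment_plan"},
}
USERS = {"u101": "radiation_oncologist", "u202": "technologist"}

def can_access(user_id, record_type):
    """True only if the user's role grants access to this record type."""
    role = USERS.get(user_id)
    return role is not None and record_type in ROLE_PERMISSIONS[role]

print(can_access("u202", "patient_profile"))  # False: outside job responsibilities
print(can_access("u101", "patient_profile"))  # True
```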

  8. Use of Electronic Resources for Psychiatry Clerkship Learning: A Medical Student Survey.

    PubMed

    Snow, Caitlin E; Torous, John; Gordon-Elliott, Janna S; Penzner, Julie B; Meyer, Fermonta; Boland, Robert

    2017-10-01

    The primary aim of this study is to examine medical students' use patterns, preferences, and perceptions of electronic educational resources available for psychiatry clerkship learning. Eligible participants included medical students who had completed the psychiatry clerkship during a 24-month period. An internet-based questionnaire was used to collect information regarding the outcomes described above. A total of 68 medical students responded to the survey. Most respondents reported high utilization of electronic resources on an array of devices for psychiatry clerkship learning and indicated a preference for electronic over print resources. The most commonly endorsed barriers to the use of electronic resources were that the source contained irrelevant and non-specific content, access was associated with a financial cost, and faculty guidance on recommended resources was insufficient. Respondents indicated a wish for more psychiatry-specific electronic learning resources. The authors' results suggest that a demand exists for high-quality electronic and portable learning tools that are relevant to medical student education in psychiatry. Psychiatry educators are usefully positioned to be involved in the development of such resources.

  9. Improving Broadband Displacement Detection with Quantum Correlations

    NASA Astrophysics Data System (ADS)

    Kampel, N. S.; Peterson, R. W.; Fischer, R.; Yu, P.-L.; Cicak, K.; Simmonds, R. W.; Lehnert, K. W.; Regal, C. A.

    2017-04-01

    Interferometers enable ultrasensitive measurement in a wide array of applications from gravitational wave searches to force microscopes. The role of quantum mechanics in the metrological limits of interferometers has a rich history, and a large number of techniques to surpass conventional limits have been proposed. In a typical measurement configuration, the trade-off between the probe's shot noise (imprecision) and its quantum backaction results in what is known as the standard quantum limit (SQL). In this work, we investigate how quantum correlations accessed by modifying the readout of the interferometer can access physics beyond the SQL and improve displacement sensitivity. Specifically, we use an optical cavity to probe the motion of a silicon nitride membrane off mechanical resonance, as one would do in a broadband displacement or force measurement, and observe sensitivity better than the SQL dictates for our quantum efficiency. Our measurement illustrates the core idea behind a technique known as variational readout, in which the optical readout quadrature is changed as a function of frequency to improve broadband displacement detection. And, more generally, our result is a salient example of how correlations can aid sensing in the presence of backaction.
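
The imprecision-backaction trade-off behind the SQL can be stated compactly. The following is the standard textbook decomposition, not notation taken from this paper: with mechanical susceptibility $\chi(\Omega)$, imprecision noise $S_x^{\mathrm{imp}}$ (falling with probe power) and backaction force noise $S_F^{\mathrm{ba}}$ (rising with probe power) satisfying $S_x^{\mathrm{imp}}\,S_F^{\mathrm{ba}} \ge \hbar^2/4$ for uncorrelated noises,

```latex
% Total displacement noise for conventional (uncorrelated) readout:
S_x^{\mathrm{tot}}(\Omega)
  = S_x^{\mathrm{imp}}(\Omega) + |\chi(\Omega)|^2\, S_F^{\mathrm{ba}}(\Omega)
  \;\ge\; 2\,|\chi(\Omega)|\sqrt{S_x^{\mathrm{imp}}\, S_F^{\mathrm{ba}}}
  \;\ge\; \hbar\,|\chi(\Omega)| \;\equiv\; S_x^{\mathrm{SQL}}(\Omega).
```

Variational readout evades this bound by adding a frequency-dependent cross-correlation term $2\,\mathrm{Re}[\chi(\Omega)\,S_{xF}(\Omega)]$ to the sum, which can be made negative; exact prefactors depend on the single- versus double-sided spectral-density convention.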

  10. Experience with ATLAS MySQL PanDA database service

    NASA Astrophysics Data System (ADS)

    Smirnov, Y.; Wlodek, T.; De, K.; Hover, J.; Ozturk, N.; Smith, J.; Wenaus, T.; Yu, D.

    2010-04-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  11. Ultrabroadband photonic internet: safety aspects

    NASA Astrophysics Data System (ADS)

    Kalicki, Arkadiusz; Romaniuk, Ryszard

    2008-11-01

    Web applications have become the most popular medium on the Internet. The popularity and ease of use of web application frameworks, combined with careless development, result in large numbers of vulnerabilities and attacks. Several types of attack are possible because of improper input validation. SQL injection is the ability to execute arbitrary SQL queries in a database through an existing application. Cross-site scripting (XSS) is a vulnerability that allows malicious web users to inject code into pages viewed by other users. Cross-site request forgery (CSRF) is an attack that tricks the victim into loading a page that contains a malicious request. Web spam in blogs is a related problem. Several techniques mitigate these attacks. Most important are strong web application design, correct input validation, defined data types for each field, and parameterized statements in SQL queries. Server hardening with a firewall, modern security policy systems, and safe configuration of the web framework interpreter are essential. It is also advisable to maintain a proper security level on the client side: keep software updated, install personal web firewalls or IDS/IPS systems, log out from services immediately after finishing work, and use a separate web browser for the most important sites, such as e-banking.
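
The parameterized-statement defence mentioned above can be shown concretely with Python's built-in sqlite3 module (the abstract discusses web frameworks in general; the table, user, and input here are illustrative):

```python
import sqlite3

# Minimal demonstration of parameterized SQL vs. string concatenation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "x' OR '1'='1"   # classic injection payload

# UNSAFE (shown only as a comment): concatenation lets input rewrite the query.
# query = "SELECT * FROM users WHERE name = '" + malicious + "'"

# SAFE: the driver binds the value; the input is treated as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()
print(len(rows))  # 0 -- the payload matches no user name instead of all rows
```

The same placeholder-binding idea exists in essentially every database driver (e.g. prepared statements in JDBC or PDO), which is why the abstract lists it among the most important mitigations.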

  12. Survey data and metadata modelling using document-oriented NoSQL

    NASA Astrophysics Data System (ADS)

    Rahmatuti Maghfiroh, Lutfi; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Survey data collected from year to year are subject to metadata change, yet they need to be stored in an integrated way so that statistical data can be retrieved faster and more easily. A data warehouse (DW) can address this need, but the variables change in every period in ways a DW cannot accommodate: a traditional DW cannot handle variable change via Slowly Changing Dimensions (SCD). Previous research handled changing variables in the DW by managing metadata with a multiversion DW (MVDW) designed on the relational model. Other research has found that a non-relational model in a NoSQL database offers faster read times than the relational model. We therefore propose managing metadata changes using NoSQL. This study proposes a DW model to manage change, together with algorithms to retrieve data across metadata changes. Evaluation of the proposed model and algorithms shows that a database with the proposed design retrieves data with metadata changes properly. This paper contributes a comprehensive approach to analyzing data with metadata changes (especially survey data) in integrated storage.
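
One simple way a document-oriented store accommodates changing survey variables is to let every record carry its own metadata (schema) version. The sketch below is illustrative only: the variables, versions, and field names are invented, and plain dicts stand in for documents in a NoSQL collection.

```python
# Each survey record carries a schema_version; the metadata table says which
# variables exist in each version. Records from different years coexist.
metadata = {
    1: {"variables": ["age", "income"]},
    2: {"variables": ["age", "income", "internet_access"]},  # variable added later
}
records = [
    {"schema_version": 1, "age": 40, "income": 1200},
    {"schema_version": 2, "age": 25, "income": 900, "internet_access": True},
]

def get_variable(record, name):
    """Return the value if the record's schema defines the variable, else None."""
    if name in metadata[record["schema_version"]]["variables"]:
        return record.get(name)
    return None

print(get_variable(records[0], "internet_access"))  # None: not defined in v1
print(get_variable(records[1], "internet_access"))  # True
```

A relational MVDW must model this versioning in extra tables; in a document store the flexible per-record schema does it directly, which is the reading-speed advantage the abstract alludes to.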

  13. Efficient data management tools for the heterogeneous big data warehouse

    NASA Astrophysics Data System (ADS)

    Alekseev, A. A.; Osipova, V. V.; Ivanov, M. A.; Klimentov, A.; Grigorieva, N. V.; Nalamwar, H. S.

    2016-09-01

    Traditional RDBMSs have served normalized data structures well for decades, but the technology is not optimal for data processing and analysis in data-intensive fields such as social networks, the oil and gas industry, and experiments at the Large Hadron Collider. Several challenges have recently been raised concerning the scalability of data-warehouse-like workloads run against the transactional schema, in particular for the analysis of archived data and the aggregation of data for summary and accounting purposes. The paper evaluates newer database technologies such as HBase, Cassandra, and MongoDB, commonly referred to as NoSQL databases, for handling messy, varied, and large amounts of data. The evaluation considers the performance, throughput, and scalability of these technologies for several scientific and industrial use cases. This paper outlines the technologies and architectures needed for processing Big Data, and describes the back-end application that implements data migration from an RDBMS to a NoSQL data warehouse, the NoSQL database organization, and how it can be useful for further data analytics.
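
The migration step described above usually denormalizes joined relational rows into self-contained documents. A minimal, purely illustrative sketch, using Python's built-in sqlite3 as the source (table names, fields, and values are invented, not from the paper's system):

```python
import sqlite3
import json

# Hypothetical normalized source schema: runs and their events.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE runs (id INTEGER PRIMARY KEY, detector TEXT);
    CREATE TABLE events (run_id INTEGER, energy REAL);
    INSERT INTO runs VALUES (1, 'ATLAS');
    INSERT INTO events VALUES (1, 13.0), (1, 7.5);
""")

# Migration: one nested document per run replaces the join the RDBMS
# would otherwise perform at every query.
documents = []
for run_id, detector in conn.execute("SELECT id, detector FROM runs"):
    energies = [e for (e,) in conn.execute(
        "SELECT energy FROM events WHERE run_id = ?", (run_id,))]
    documents.append({"run": run_id, "detector": detector, "energies": energies})

print(json.dumps(documents[0]))
```

The resulting JSON documents are the natural input shape for MongoDB, or (with a row-key design) for wide-column stores like HBase and Cassandra.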

  14. VIRmiRNA: a comprehensive resource for experimentally validated viral miRNAs and their targets.

    PubMed

    Qureshi, Abid; Thakur, Nishant; Monga, Isha; Thakur, Anamika; Kumar, Manoj

    2014-01-01

    Viral microRNAs (miRNAs) regulate gene expression of viral and/or host genes to benefit the virus. Hence, miRNAs play a key role in host-virus interactions and the pathogenesis of viral diseases. Lately, miRNAs have also shown potential as important targets for the development of novel antiviral therapeutics. Although several miRNA and miRNA-target repositories are available for human and other organisms in the literature, a dedicated resource on viral miRNAs and their targets has been lacking. We have therefore developed a comprehensive viral miRNA resource harboring 9133 entries in three subdatabases. This includes 1308 experimentally validated miRNA sequences, with their isomiRs, encoded by 44 viruses in the viral miRNA subdatabase VIRmiRNA, and 7283 of their target genes in VIRmiRTar. Additionally, there is information on 542 antiviral miRNAs encoded by the host against 24 viruses in the antiviral miRNA subdatabase AVIRmir. The web interface was developed using the Linux-Apache-MySQL-PHP (LAMP) software bundle. User-friendly browse, search, and advanced search functions, together with useful analysis tools, are provided on the web interface. VIRmiRNA is the first specialized resource of experimentally proven virus-encoded miRNAs and their associated targets. This database will enhance the understanding of viral/host gene regulation and may also prove beneficial in the development of antiviral therapeutics. Database URL: http://crdd.osdd.net/servers/virmirna. © The Author(s) 2014. Published by Oxford University Press.

  15. Assessing Ongoing Electronic Resource Purchases: Linking Tools to Synchronize Staff Workflows

    ERIC Educational Resources Information Center

    Carroll, Jeffrey D.; Major, Colleen; O'Neal, Nada; Tofanelli, John

    2012-01-01

    Ongoing electronic resource purchases represent a substantial proportion of collections budgets. Recognizing the necessity of systematic ongoing assessment with full selector engagement, Columbia University Libraries appointed an Electronic Resources Assessment Working Group to promote the inclusion of such resources within our current culture of…

  16. A System for Web-based Access to the HSOS Database

    NASA Astrophysics Data System (ADS)

    Lin, G.

    Huairou Solar Observing Station's (HSOS) magnetogram and dopplergram are world-class instruments, and access to their data has been opened to the world. Web-based access will provide a powerful, convenient tool for data searching and solar physics research, so it is necessary that our data be available to users via the Web. In this presentation, the author describes the general design and programming construction of the system, which is built with PHP and MySQL. The author also introduces basic features of PHP and MySQL.

  17. Optical levitation of a mirror for reaching the standard quantum limit.

    PubMed

    Michimura, Yuta; Kuwahara, Yuya; Ushiba, Takafumi; Matsumoto, Nobuyuki; Ando, Masaki

    2017-06-12

    We propose a new method to optically levitate a macroscopic mirror with two vertical Fabry-Pérot cavities linearly aligned. This configuration gives the simplest possible optical levitation in which the number of laser beams used is the minimum of two. We demonstrate that reaching the standard quantum limit (SQL) of a displacement measurement with our system is feasible with current technology. The cavity geometry and the levitated mirror parameters are designed to ensure that the Brownian vibration of the mirror surface is smaller than the SQL. Our scheme provides a promising tool for testing macroscopic quantum mechanics.

  18. Optical levitation of a mirror for reaching the standard quantum limit

    NASA Astrophysics Data System (ADS)

    Michimura, Yuta; Kuwahara, Yuya; Ushiba, Takafumi; Matsumoto, Nobuyuki; Ando, Masaki

    2017-06-01

    We propose a new method to optically levitate a macroscopic mirror with two vertical Fabry-Pérot cavities linearly aligned. This configuration gives the simplest possible optical levitation in which the number of laser beams used is the minimum of two. We demonstrate that reaching the standard quantum limit (SQL) of a displacement measurement with our system is feasible with current technology. The cavity geometry and the levitated mirror parameters are designed to ensure that the Brownian vibration of the mirror surface is smaller than the SQL. Our scheme provides a promising tool for testing macroscopic quantum mechanics.

  19. Electronic Resource Management and Design

    ERIC Educational Resources Information Center

    Abrams, Kimberly R.

    2015-01-01

    We have now reached a tipping point at which electronic resources comprise more than half of academic library budgets. Because of the increasing work associated with the ever-increasing number of e-resources, there is a trend to distribute work throughout the library even in the presence of an electronic resources department. In 2013, the author…

  20. Site Remediation Technology InfoBase: A Guide to Federal Programs, Information Resources, and Publications on Contaminated Site Cleanup Technologies. First Edition

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Table of Contents: Federal Cleanup Programs; Federal Site Remediation Technology Development Assistance Programs; Federal Site Remediation Technology Development Electronic Data Bases; Federal Electronic Resources for Site Remediation Technology Information; Other Electronic Resources for Site Remediation Technology Information; Selected Bibliography: Federal Publications on Alternative and Innovative Site Remediation; and Appendix: Technology Program Contacts.

  1. The ALICE Electronic Logbook

    NASA Astrophysics Data System (ADS)

    Altini, V.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Divià, R.; Fuchs, U.; Makhlyueva, I.; Roukoutakis, F.; Schossmaier, K.; Soòs, C.; Vande Vyvre, P.; Von Haller, B.; ALICE Collaboration

    2010-04-01

    All major experiments need tools that provide a way to keep a record of the events and activities, both during commissioning and operations. In ALICE (A Large Ion Collider Experiment) at CERN, this task is performed by the ALICE Electronic Logbook (eLogbook), a custom-made application developed and maintained by the Data-Acquisition group (DAQ). Started as a statistics repository, the eLogbook has evolved to become not only a fully functional electronic logbook, but also a massive information repository used to store the conditions and statistics of the several online systems. It is currently used by more than 600 users in 30 different countries and plays an important role in daily ALICE collaboration activities. This paper will describe the LAMP (Linux, Apache, MySQL and PHP) based architecture of the eLogbook, the database schema and the relevance of the information stored in the eLogbook to the different ALICE actors, not only for near-real-time procedures but also for long-term data mining and analysis. It will also present the web interface, including the different technologies used, the implemented security measures and the current main features. Finally, it will present the roadmap for the future, including a migration to the web 2.0 paradigm, the handling of the database's ever-increasing data volume and the deployment of data-mining tools.

  2. Patient and caregiver perspectives on decision support for symptom and quality of life management during cancer treatment: Implications for eHealth.

    PubMed

    Cooley, Mary E; Nayak, Manan M; Abrahm, Janet L; Braun, Ilana M; Rabin, Michael S; Brzozowski, Jane; Lathan, Christopher; Berry, Donna L

    2017-08-01

    Adequate symptom and quality-of-life (SQL) management is a priority during cancer treatment. eHealth is a timely way to enhance patient-engagement, facilitate communication, and improve health outcomes. The objectives of this study were to describe patient and caregivers' perspectives for providing, processing, and managing SQL data to enhance communication and identify desired components for decision support. Data were collected from 64 participants through questionnaires and focus groups. Analysis was conducted using NVivo. Open and axial coding was completed, grouping commonalities and large constructs into nodes to identify and synthesize themes. Face-to-face meetings with clinicians were the prime time to communicate, and patients strove to understand treatment options and the effect on SQL by bringing caregivers to their visits, taking notes, tracking symptoms, and creating portable health records. Patients/caregivers struggled to self-manage their symptoms and were uncertain when to contact clinicians when experiencing uncontrolled symptoms. Most participants identified eHealth solutions for decision support. However, 38% of participants (n = 24) rarely used computers and identified non-eHealth options for decision support. Core components for both eHealth and non-eHealth systems were access to (1) cancer information, (2) medical records, (3) peer support, and (4) improved support and understanding on when to contact clinicians. Patients were faced with an overwhelming amount of information and relied on their caregivers to help navigate the complexities of cancer care and self-manage SQL. Health technologies can provide informational support; however, decision support needs to span multiple venues to avoid increasing disparities caused by a digital divide. Copyright © 2017 John Wiley & Sons, Ltd.

  3. SU-E-T-255: Development of a Michigan Quality Assurance (MQA) Database for Clinical Machine Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, D

    Purpose: A unified database system was developed to allow accumulation, review and analysis of quality assurance (QA) data for measurement, treatment, imaging and simulation equipment in our department. Recording these data in a database allows a unified and structured approach to review and analysis of data gathered using commercial database tools. Methods: A clinical database was developed to track records of quality assurance operations on linear accelerators, a computed tomography (CT) scanner, a high dose rate (HDR) afterloader and imaging systems such as on-board imaging (OBI) and Calypso in our department. The database was developed using the Microsoft Access database and the Visual Basic for Applications (VBA) programming interface. Separate modules were written for accumulation, review and analysis of daily, monthly and annual QA data. All modules were designed to use structured query language (SQL) as the basis of data accumulation and review. The SQL strings are dynamically re-written at run time. The database also features embedded documentation, storage of documents produced during QA activities and the ability to annotate all data within the database. Tests are defined in a set of tables that specify test type, specific value, and schedule. Results: Daily, monthly and annual QA data have been taken in parallel with established procedures to test MQA. The database has been used to aggregate data across machines to examine the consistency of machine parameters and operations within the clinic for several months. Conclusion: The MQA application has been developed as an interface to a commercially available SQL engine (JET 5.0) and a standard database back-end. The MQA system has been used for several months for routine data collection. The system is robust, relatively simple to extend and can be migrated to a commercial SQL server.
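
The pattern the abstract describes — QA tests defined in tables, with SQL review strings rewritten at run time from those definitions — can be sketched generically. This is an illustrative Python/sqlite3 sketch, not the MQA system itself (which uses Access/VBA over a JET back-end); the schema, test names, and values are invented.

```python
import sqlite3

# Hypothetical test-definition and results tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test_defs (name TEXT, schedule TEXT);
    CREATE TABLE results (name TEXT, value REAL);
    INSERT INTO test_defs VALUES ('output_constancy', 'daily');
    INSERT INTO results VALUES ('output_constancy', 1.002),
                               ('output_constancy', 0.998);
""")

schedule = "daily"
# The review query is assembled at run time; the user-chosen schedule is
# still bound as a parameter rather than concatenated in.
sql = ("SELECT r.name, AVG(r.value) FROM results r "
       "JOIN test_defs d ON d.name = r.name "
       "WHERE d.schedule = ? GROUP BY r.name")
for name, avg in conn.execute(sql, (schedule,)):
    print(name, round(avg, 3))  # output_constancy 1.0
```

Driving the query from a definitions table is what lets new tests be added without touching the review code, and is also why such a system migrates cleanly to a commercial SQL server.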

  4. An SQL query generator for CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Chirica, Laurian

    1990-01-01

    As expert systems become more widely used, their access to large amounts of external information becomes increasingly important. This information exists in several forms such as statistical, tabular data, knowledge gained by experts and large databases of information maintained by companies. Because many expert systems, including CLIPS, do not provide access to this external information, much of the usefulness of expert systems is left untapped. The scope of this paper is to describe a database extension for the CLIPS expert system shell. The current industry standard database language is SQL. Due to SQL standardization, large amounts of information stored on various computers, potentially at different locations, will be more easily accessible. Expert systems should be able to directly access these existing databases rather than requiring information to be re-entered into the expert system environment. The ORACLE relational database management system (RDBMS) was used to provide a database connection within the CLIPS environment. To facilitate relational database access, a query generation system was developed as a CLIPS user function. The queries are entered in a CLIPS-like syntax and are passed to the query generator, which constructs and submits for execution an SQL query to the ORACLE RDBMS. The query results are asserted as CLIPS facts. The query generator was developed primarily for use within the ICADS project (Intelligent Computer Aided Design System) currently being developed by the CAD Research Unit at California Polytechnic State University (Cal Poly). In ICADS, there are several parallel or distributed expert systems accessing a common knowledge base of facts. Each expert system has a narrow domain of interest and therefore needs only certain portions of the information. The query generator provides a common method of accessing this information and allows the expert system to specify what data is needed without specifying how to retrieve it.
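
The pipeline the abstract describes — pattern in, SQL out, rows asserted back as facts — can be shown with a toy generator. This is a purely illustrative sketch: the pattern syntax, schema, and fact format are invented, and sqlite3 stands in for the ORACLE RDBMS the original targeted from within CLIPS.

```python
import sqlite3

def generate_sql(table, pattern):
    """Translate a CLIPS-like pattern (column -> required value) into a
    parameterized SQL query. Column names come from the trusted pattern
    definition; only the values are bound as parameters."""
    where = " AND ".join(f"{col} = ?" for col in pattern)
    sql = f"SELECT * FROM {table}" + (f" WHERE {where}" if where else "")
    return sql, tuple(pattern.values())

# Hypothetical design-component table standing in for the external database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE components (name TEXT, type TEXT)")
conn.executemany("INSERT INTO components VALUES (?, ?)",
                 [("d1", "door"), ("w1", "window")])

sql, params = generate_sql("components", {"type": "door"})
# Query results come back as would-be CLIPS facts.
facts = [f"(component {name} {ctype})"
         for name, ctype in conn.execute(sql, params)]
print(facts)  # ['(component d1 door)']
```

The expert system thus states only *what* it needs (the pattern); the generator decides *how* to retrieve it, mirroring the declarative split the paper emphasizes.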

  5. PandASoft: Open Source Instructional Laboratory Administration Software

    NASA Astrophysics Data System (ADS)

    Gay, P. L.; Braasch, P.; Synkova, Y. N.

    2004-12-01

    PandASoft (Physics and Astronomy Software) is software for organizing and archiving a department's teaching resources and materials. An easy-to-use, secure interface allows faculty and staff to explore equipment inventories, see what laboratory experiments are available, find handouts, and track what has been used in different classes in the past. Divided into five sections: classes, equipment, laboratories, links, and media, its database cross-links materials, allowing users to see what labs are used with which classes, what media and equipment are used with which labs, or simply what equipment is lurking in which room. Written in PHP and MySQL, this software can be installed on any UNIX / Linux platform, including Macintosh OS X. It is designed to allow users to easily customize the headers, footers and colors to blend with existing sites - no programming experience required. While initial data input is labor intensive, the system will save time later by allowing users to quickly answer questions related to what is in inventory, where it is located, how many are in stock, and where online they can learn more. It will also provide a central location for storing PDFs of handouts, and links to applets and cool sites at other universities. PandASoft comes with over 100 links to online resources pre-installed. We would like to thank Dr. Wolfgang Rueckner and the Harvard University Science Center for providing computers and resources for this project.
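
    The cross-linking the abstract describes is, in database terms, a set of many-to-many link tables. A minimal sketch of one such link (classes to labs) follows; the table and column names are illustrative assumptions, not PandASoft's actual MySQL schema, and SQLite stands in for MySQL:

```python
import sqlite3

# Two entity tables joined through a link table -- the pattern behind
# "what labs are used with which classes". Schema names are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE class     (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE lab       (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE class_lab (class_id INTEGER, lab_id INTEGER);
""")
con.execute("INSERT INTO class VALUES (1, 'PHYS 101')")
con.executemany("INSERT INTO lab VALUES (?, ?)",
                [(1, 'Pendulum'), (2, 'Optics Bench')])
con.executemany("INSERT INTO class_lab VALUES (?, ?)", [(1, 1), (1, 2)])

# "What labs are used with PHYS 101?"
labs_for_class = [title for (title,) in con.execute(
    "SELECT lab.title FROM lab "
    "JOIN class_lab ON lab.id = class_lab.lab_id "
    "JOIN class ON class.id = class_lab.class_id "
    "WHERE class.name = 'PHYS 101' ORDER BY lab.id")]
```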

  6. Real-time Geographic Information System (GIS) for Monitoring the Area of Potential Water Level Using Rule Based System

    NASA Astrophysics Data System (ADS)

    Anugrah, Wirdah; Suryono; Suseno, Jatmiko Endro

    2018-02-01

    Management of water resources based on a Geographic Information System (GIS) can provide substantial benefits for water availability planning. Monitoring the potential water level is needed in the development sector, agriculture, energy and others. In this research, a water resource information system is developed using a real-time, web-based GIS concept for monitoring the potential water level of an area by applying a rule-based system method. GIS consists of hardware, software, and a database. Following the web-based GIS architecture, this study uses a set of computers connected to a network, running the Apache web server and the PHP programming language with a MySQL database. An ultrasonic wireless sensor system is used as the water level data input; each reading also includes time and geographic location information. The GIS maps the five sensor locations. Sensor data are processed through a rule-based system to determine the potential water level of the area. Water level monitoring results can be displayed on thematic maps by overlaying more than one layer, in tables generated from the database, and in graphs based on the timing of events and the water level values.
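
    The rule-based classification step might look like the sketch below. The thresholds and level labels are invented for illustration; the abstract does not state the actual rule set:

```python
# Ordered rules: first matching condition wins. Thresholds (cm) and
# labels are illustrative assumptions, not the paper's actual rules.
RULES = [
    (lambda cm: cm >= 250, "danger"),
    (lambda cm: cm >= 150, "alert"),
    (lambda cm: cm >= 50,  "watch"),
    (lambda cm: True,      "normal"),
]

def classify(level_cm):
    """Map an ultrasonic sensor reading (cm) to a potential-level label."""
    for condition, label in RULES:
        if condition(level_cm):
            return label

# One reading per sensor site, as the GIS would receive in real time.
readings = {"site-1": 30, "site-2": 180, "site-3": 260}
status = {site: classify(cm) for site, cm in readings.items()}
```

    Each labeled site would then be rendered as a point layer on the thematic map.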

  7. Do GPs use electronic mental health resources? - a qualitative study.

    PubMed

    Austin, David; Pier, Ciaran; Mitchell, Joanna; Schattner, Peter; Wade, Victoria; Pierce, David; Klein, Britt

    2006-05-01

    The Better Outcomes in Mental Health Care (BOMHC) initiative encourages general practitioners to use electronic mental health resources (EMHRs) during consultation with patients requiring psychological assistance. However, there is little data on GPs' acceptance and use of EMHRs. Semistructured interviews were conducted with 27 GPs to determine their attitude toward EMHRs, and their use during consultation with patients. Few GPs reported frequently using EMHRs in consultation. Identified barriers to use included lack of familiarity with information technology, and insufficient knowledge of available resources. Identified advantages of electronic resources included high patient acceptance, time efficiency, and improved quality of information. General practitioners recognise several advantages of utilising electronic resources for managing patients with mental illness. However, GPs are not sufficiently familiar with electronic resources to use them effectively. This could be overcome by education.

  8. Making sense of the electronic resource marketplace: trends in health-related electronic resources.

    PubMed Central

    Blansit, B D; Connor, E

    1999-01-01

    Changes in the practice of medicine and technological developments offer librarians unprecedented opportunities to select and organize electronic resources, use the Web to deliver content throughout the organization, and improve knowledge at the point of need. The confusing array of available products, access routes, and pricing plans makes it difficult to anticipate the needs of users, identify the top resources, budget effectively, make sound collection management decisions, and organize the resources effectively and seamlessly. The electronic resource marketplace requires much vigilance, considerable patience, and continuous evaluation. There are several strategies that librarians can employ to stay ahead of the electronic resource curve, including taking advantage of free trials from publishers; marketing free trials and involving users in evaluating new products; watching and testing products marketed to the clientele; agreeing to beta test new products and services; working with aggregators or republishers; joining vendor advisory boards; benchmarking institutional resources against five to eight competitors; and forming or joining a consortium for group negotiating and purchasing. This article provides a brief snapshot of leading biomedical resources; showcases several libraries that have excelled in identifying, acquiring, and organizing electronic resources; and discusses strategies and trends of potential interest to biomedical librarians, especially those working in hospital settings. PMID:10427421

  9. Making sense of the electronic resource marketplace: trends in health-related electronic resources.

    PubMed

    Blansit, B D; Connor, E

    1999-07-01

    Changes in the practice of medicine and technological developments offer librarians unprecedented opportunities to select and organize electronic resources, use the Web to deliver content throughout the organization, and improve knowledge at the point of need. The confusing array of available products, access routes, and pricing plans makes it difficult to anticipate the needs of users, identify the top resources, budget effectively, make sound collection management decisions, and organize the resources effectively and seamlessly. The electronic resource marketplace requires much vigilance, considerable patience, and continuous evaluation. There are several strategies that librarians can employ to stay ahead of the electronic resource curve, including taking advantage of free trials from publishers; marketing free trials and involving users in evaluating new products; watching and testing products marketed to the clientele; agreeing to beta test new products and services; working with aggregators or republishers; joining vendor advisory boards; benchmarking institutional resources against five to eight competitors; and forming or joining a consortium for group negotiating and purchasing. This article provides a brief snapshot of leading biomedical resources; showcases several libraries that have excelled in identifying, acquiring, and organizing electronic resources; and discusses strategies and trends of potential interest to biomedical librarians, especially those working in hospital settings.

  10. Reproducing a Prospective Clinical Study as a Computational Retrospective Study in MIMIC-II.

    PubMed

    Kury, Fabrício S P; Huser, Vojtech; Cimino, James J

    2015-01-01

    In this paper we sought to reproduce, as a computational retrospective study in an EHR database (MIMIC-II), a recent large prospective clinical study: the 2013 publication, by the Japanese Association for Acute Medicine (JAAM), about disseminated intravascular coagulation, in the journal Critical Care (PMID: 23787004). We designed in SQL and Java a set of electronic phenotypes that reproduced the study's data sampling, and used R to perform the same statistical inference procedures. All produced source code is available online at https://github.com/fabkury/paamia2015. Our program identified 2,257 eligible patients in MIMIC-II, and the results remarkably agreed with the prospective study. A minority of the needed data elements was not found in MIMIC-II, and statistically significant inferences were possible in the majority of the cases.
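
    An electronic phenotype of the kind described is, at its core, a cohort-selection query over the EHR tables. The sketch below is a toy example with an invented schema and threshold; MIMIC-II's real tables and the JAAM study's actual eligibility criteria differ:

```python
import sqlite3

# Toy "electronic phenotype": select patients meeting a lab criterion.
# Table, column names and the threshold are illustrative assumptions.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE labevents (patient_id INTEGER, item TEXT, value REAL)")
con.executemany("INSERT INTO labevents VALUES (?, ?, ?)", [
    (1, "platelets", 80.0),   # below threshold -> eligible
    (2, "platelets", 210.0),
    (3, "platelets", 95.0),
])

eligible = [pid for (pid,) in con.execute(
    "SELECT DISTINCT patient_id FROM labevents "
    "WHERE item = 'platelets' AND value < 100 ORDER BY patient_id")]
```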

  11. IntegratedMap: a Web interface for integrating genetic map data.

    PubMed

    Yang, Hongyu; Wang, Hongyu; Gingle, Alan R

    2005-05-01

    IntegratedMap is a Web application and database schema for storing and interactively displaying genetic map data. Its Web interface includes a menu for direct chromosome/linkage group selection, a search form for selection based on mapped object location and linkage group displays. An overview display provides convenient access to the full range of mapped and anchored object types with genetic locus details, such as numbers, types and names of mapped/anchored objects displayed in a compact scrollable list box that automatically updates based on selected map location and object type. Also, multilinkage group and localized map views are available along with links that can be configured for integration with other Web resources. IntegratedMap is implemented in C#/ASP.NET and the package, including a MySQL schema creation script, is available from http://cggc.agtec.uga.edu/Data/download.asp

  12. Simple Web-based interactive key development software (WEBiKEY) and an example key for Kuruna (Poaceae: Bambusoideae).

    PubMed

    Attigala, Lakshmi; De Silva, Nuwan I; Clark, Lynn G

    2016-04-01

    Programs that are user-friendly and freely available for developing Web-based interactive keys are scarce and most of the well-structured applications are relatively expensive. WEBiKEY was developed to enable researchers to easily develop their own Web-based interactive keys with fewer resources. A Web-based multiaccess identification tool (WEBiKEY) was developed that uses freely available Microsoft ASP.NET technologies and an SQL Server database for Windows-based hosting environments. WEBiKEY was tested for its usability with a sample data set, the temperate woody bamboo genus Kuruna (Poaceae). WEBiKEY is freely available to the public and can be used to develop Web-based interactive keys for any group of species. The interactive key we developed for Kuruna using WEBiKEY enables users to visually inspect characteristics of Kuruna and identify an unknown specimen as one of seven possible species in the genus.

  13. OxfordGrid: a web interface for pairwise comparative map views.

    PubMed

    Yang, Hongyu; Gingle, Alan R

    2005-12-01

    OxfordGrid is a web application and database schema for storing and interactively displaying genetic map data in a comparative, dot-plot fashion. Its display is composed of a matrix of cells, each representing a pairwise comparison of mapped probe data for two linkage groups or chromosomes. These are arranged along the axes, with one forming grid columns and the other grid rows, and the degree and pattern of synteny/colinearity between the two linkage groups manifested in the cell's dot density and structure. A mouse click over the selected grid cell launches an image map-based display for the selected cell. Both individual and linear groups of mapped probes can be selected and displayed. Also, configurable links can be used to access other web resources for mapped probe information. OxfordGrid is implemented in C#/ASP.NET and the package, including MySQL schema creation scripts, is available at ftp://cggc.agtec.uga.edu/OxfordGrid/.

  14. Data Service: Distributed Data Capture and Replication

    NASA Astrophysics Data System (ADS)

    Warner, P. B.; Pietrowicz, S. R.

    2007-10-01

    Data Service is a critical component of the NOAO Data Management and Science Support (DMaSS) Solutions Platform, which is based on a service-oriented architecture, and is to replace the current NOAO Data Transport System. Its responsibilities include capturing data from NOAO and partner telescopes and instruments and replicating the data across multiple (currently six) storage sites. Java 5 was chosen as the implementation language, and Java EE as the underlying enterprise framework. Application metadata persistence is performed using EJB and Hibernate on the JBoss Application Server, with PostgreSQL as the persistence back-end. Although potentially any underlying mass storage system may be used as the Data Service file persistence technology, DTS deployments and Data Service test deployments currently use the Storage Resource Broker from SDSC. This paper presents an overview and high-level design of the Data Service, including aspects of deployment, e.g., for the LSST Data Challenge at the NCSA computing facilities.

  15. PsychVACS: a system for asynchronous telepsychiatry.

    PubMed

    Odor, Alberto; Yellowlees, Peter; Hilty, Donald; Parish, Michelle Burke; Nafiz, Najia; Iosif, Ana-Maria

    2011-05-01

    To describe the technical development of an asynchronous telepsychiatry application, the Psychiatric Video Archiving and Communication System. A client-server application was developed in Visual Basic.Net with Microsoft(®) SQL database as the backend. It includes the capability of storing video-recorded psychiatric interviews and manages the workflow of the system with automated messaging. Psychiatric Video Archiving and Communication System has been used to conduct the first ever series of asynchronous telepsychiatry consultations worldwide. A review of the software application and the process as part of this project has led to a number of improvements that are being implemented in the next version, which is being written in Java. This is the first description of the use of video recorded data in an asynchronous telemedicine application. Primary care providers and consulting psychiatrists have found it easy to work with and a valuable resource to increase the availability of psychiatric consultation in remote rural locations.

  16. UCbase 2.0: ultraconserved sequences database (2014 update)

    PubMed Central

    Lomonaco, Vincenzo; Martoglia, Riccardo; Mandreoli, Federica; Anderlucci, Laura; Emmett, Warren; Bicciato, Silvio; Taccioli, Cristian

    2014-01-01

    UCbase 2.0 (http://ucbase.unimore.it) is an update, extension and evolution of UCbase, a Web tool dedicated to the analysis of ultraconserved sequences (UCRs). UCRs are 481 sequences >200 bases sharing 100% identity among human, mouse and rat genomes. They are frequently located in genomic regions known to be involved in cancer or differentially expressed in human leukemias and carcinomas. UCbase 2.0 is a platform-independent Web resource that includes the updated version of the human genome annotation (hg19), information linking disorders to chromosomal coordinates based on the Systematized Nomenclature of Medicine classification, a query tool to search for Single Nucleotide Polymorphisms (SNPs) and a new text box to directly interrogate the database using a MySQL interface. To facilitate the interactive visual interpretation of UCR chromosomal positioning, UCbase 2.0 now includes a graph visualization interface directly linked to UCSC genome browser. Database URL: http://ucbase.unimore.it PMID:24951797
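
    The kind of direct SQL interrogation UCbase 2.0 exposes can be illustrated with an interval-overlap query over chromosomal coordinates. The schema below is an assumption for illustration only, and SQLite stands in for MySQL:

```python
import sqlite3

# Invented UCR table: name plus chromosomal interval.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ucr (name TEXT, chrom TEXT, start INTEGER, stop INTEGER)")
con.executemany("INSERT INTO ucr VALUES (?, ?, ?, ?)", [
    ("uc.1", "chr1", 1000, 1300),
    ("uc.2", "chr1", 5000, 5400),
    ("uc.3", "chr2", 1000, 1250),
])

# UCRs overlapping chr1:900-1200 -- two intervals overlap iff each
# starts before the other ends.
hits = [name for (name,) in con.execute(
    "SELECT name FROM ucr WHERE chrom = 'chr1' "
    "AND start <= 1200 AND stop >= 900")]
```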

  17. Development of computer-assisted instruction application for statistical data analysis android platform as learning resource

    NASA Astrophysics Data System (ADS)

    Hendikawati, P.; Arifudin, R.; Zahid, M. Z.

    2018-03-01

    This study aims to design an Android statistics data analysis application that can be accessed through mobile devices, making it easier for users to access. The application covers various topics in basic statistics along with a parametric statistics data analysis facility. The output of the application is parametric statistics data analysis that can be used by students, lecturers, and other users who need the results of statistical calculations quickly and in an easily understood form. The Android application is developed using the Java programming language. The server-side programming language is PHP with the CodeIgniter framework, and the database used is MySQL. The system development methodology used is the Waterfall methodology, with the stages of analysis, design, coding, testing, implementation, and system maintenance. This statistical data analysis application is expected to support statistics lecturing activities and to make it easier for students to understand statistical analysis on mobile devices.

  18. MSDB: A Comprehensive Database of Simple Sequence Repeats

    PubMed Central

    Avvaru, Akshay Kumar; Saxena, Saketh; Mishra, Rakesh Kumar

    2017-01-01

    Abstract Microsatellites, also known as Simple Sequence Repeats (SSRs), are short tandem repeats of 1–6 nt motifs present in all genomes, particularly eukaryotes. Besides their usefulness as genome markers, SSRs have been shown to perform important regulatory functions, and variations in their length at coding regions are linked to several disorders in humans. Microsatellites show a taxon-specific enrichment in eukaryotic genomes, and some may be functional. MSDB (Microsatellite Database) is a collection of >650 million SSRs from 6,893 species including Bacteria, Archaea, Fungi, Plants, and Animals. This database is by far the most exhaustive resource to access and analyze SSR data of multiple species. In addition to exploring data in a customizable tabular format, users can view and compare the data of multiple species simultaneously using our interactive plotting system. MSDB is developed using the Django framework and MySQL. It is freely available at http://tdb.ccmb.res.in/msdb. PMID:28854643

  19. A web accessible resource for investigating cassava phenomics and genomics information: BIOGEN BASE

    PubMed Central

    Jayakodi, Murukarthick; selvan, Sreedevi Ghokhilamani; Natesan, Senthil; Muthurajan, Raveendran; Duraisamy, Raghu; Ramineni, Jana Jeevan; Rathinasamy, Sakthi Ambothi; Karuppusamy, Nageswari; Lakshmanan, Pugalenthi; Chokkappan, Mohan

    2011-01-01

    The goal of our research is to establish a unique portal to bring out the potential outcomes of research on the cassava crop. The BIOGEN BASE for cassava clearly brings out the variations in different traits of the germplasms maintained at the Tapioca and Castor Research Station, Tamil Nadu Agricultural University. Phenotypic and genotypic variations of the accessions are clearly depicted, for users to browse and interpret the variations using the microsatellite markers. The database (BIOGEN BASE - CASSAVA) is designed using PHP and MySQL and is equipped with extensive search options. It is user-friendly and publicly available, to support the research and development of cassava by making a wealth of genetics and genomics data available through an open, common, worldwide forum for all individuals interested in the field. Availability: The database is available for free at http://www.tnaugenomics.com/biogenbase/casava.php PMID:21904428

  20. A web accessible resource for investigating cassava phenomics and genomics information: BIOGEN BASE.

    PubMed

    Jayakodi, Murukarthick; Selvan, Sreedevi Ghokhilamani; Natesan, Senthil; Muthurajan, Raveendran; Duraisamy, Raghu; Ramineni, Jana Jeevan; Rathinasamy, Sakthi Ambothi; Karuppusamy, Nageswari; Lakshmanan, Pugalenthi; Chokkappan, Mohan

    2011-01-01

    The goal of our research is to establish a unique portal to bring out the potential outcomes of research on the cassava crop. The BIOGEN BASE for cassava clearly brings out the variations in different traits of the germplasms maintained at the Tapioca and Castor Research Station, Tamil Nadu Agricultural University. Phenotypic and genotypic variations of the accessions are clearly depicted, for users to browse and interpret the variations using the microsatellite markers. The database (BIOGEN BASE - CASSAVA) is designed using PHP and MySQL and is equipped with extensive search options. It is user-friendly and publicly available, to support the research and development of cassava by making a wealth of genetics and genomics data available through an open, common, worldwide forum for all individuals interested in the field. The database is available for free at http://www.tnaugenomics.com/biogenbase/casava.php.

  1. BIRS - Bioterrorism Information Retrieval System.

    PubMed

    Tewari, Ashish Kumar; Rashi; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Jain, Chakresh Kumar

    2013-01-01

    Bioterrorism is the intentional use of pathogenic strains of microbes to spread terror in a population. There is a definite need to promote research for the development of vaccines, therapeutics and diagnostic methods as part of preparedness for any bioterror attack in the future. BIRS is an open-access database of collective information on the organisms related to bioterrorism. The database architecture utilizes current open-source technology, namely PHP v5.3.19 and MySQL on an IIS server under the Windows platform. The database stores information on literature, generic information and unique pathways of about 10 microorganisms involved in bioterrorism. This may serve as a collective repository to accelerate the drug discovery and vaccine design process against such bioterrorist agents (microbes). The available data have been validated from various online resources and by literature mining in order to provide the user with a comprehensive information system. The database is freely available at http://www.bioterrorism.biowaves.org.

  2. VizieR Data Extraction Disseminated through Widgets

    NASA Astrophysics Data System (ADS)

    Landais, G.; Boch, T.; Ochsenbein, F.; Simon, A.-C.

    2015-09-01

    The CDS widgets are a collection of web applications easily embeddable in web pages. The Apache Shindig framework, relying on the OpenSocial specification, makes it possible to reuse code in any web page by providing interactive output and broadcasting capabilities: for instance, using the result of a search widget to populate other widgets. Some of these widgets are already used in the VizieR web application. The “plot widget” is used to illustrate associated data like time-series or spectra coming from publications. The data extracted with an SQL-like language (which can operate on different types of resources, like FITS, ASCII files, etc.) are then disseminated in a “plot widget” that is ergonomic and offers advanced customization capabilities. The VizieR photometry viewer is the result of filter gathering and pipeline automatization: the interface uses a dedicated widget that integrates three linked views: a photometry point, a sky chart and the VizieR tabular data.

  3. Development of a forestry government agency enterprise GIS system: a disconnected editing approach

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Barber, Brad L.

    2008-10-01

    The Texas Forest Service (TFS) has developed a geographic information system (GIS) for use by agency personnel in central Texas for managing oak wilt suppression and other landowner assistance programs. This Enterprise GIS system was designed to support multiple concurrent users accessing shared information resources. The disconnected editing approach was adopted in this system to avoid the overhead of maintaining an active connection between TFS central Texas field offices and headquarters, since most field offices operate with commercially provided Internet service. The GIS system entails maintaining a personal geodatabase on each local field office computer. Spatial data from the field is periodically uploaded into a central master geodatabase stored in a Microsoft SQL Server at the TFS headquarters in College Station through the ESRI Spatial Database Engine (SDE). This GIS allows users to work off-line when editing data and requires connecting to the central geodatabase only when needed.

  4. Insect barcode information system.

    PubMed

    Pratheepa, Maria; Jalali, Sushil Kumar; Arokiaraj, Robinson Silvester; Venkatesan, Thiruvengadam; Nagesh, Mandadi; Panda, Madhusmita; Pattar, Sharath

    2014-01-01

    The Insect Barcode Information System, called Insect Barcode Informática (IBIn), is an online database resource developed by the National Bureau of Agriculturally Important Insects, Bangalore. This database provides acquisition, storage, analysis and publication of DNA barcode records of agriculturally important insects, for researchers specifically in India and other countries. It bridges a gap in bioinformatics by integrating molecular, morphological and distribution details of agriculturally important insects. IBIn was developed using PHP/MySQL following the relational database management concept. This database is based on the client-server architecture, where many clients can access data simultaneously. IBIn is freely available online and is user-friendly. IBIn allows registered users to input new information, and to search and view information related to DNA barcodes of agriculturally important insects. This paper provides the current status of insect barcoding in India and a brief introduction to the IBIn database. http://www.nabg-nbaii.res.in/barcode.

  5. Analysis of Human Resources Management Strategy in China Electronic Commerce Enterprises

    NASA Astrophysics Data System (ADS)

    Shao, Fang

    The paper discusses electronic commerce's influence on enterprise human resources management, and proposes and supports the human resources management strategies that electronic commerce enterprises should adopt, from recruitment and training strategies to talent-retention strategies and other measures.

  6. Concept locator: a client-server application for retrieval of UMLS metathesaurus concepts through complex boolean query.

    PubMed

    Nadkarni, P M

    1997-08-01

    Concept Locator (CL) is a client-server application that accesses a Sybase relational database server containing a subset of the UMLS Metathesaurus for the purpose of retrieval of concepts corresponding to one or more query expressions supplied to it. CL's query grammar permits complex Boolean expressions, wildcard patterns, and parenthesized (nested) subexpressions. CL translates the query expressions supplied to it into one or more SQL statements that actually perform the retrieval. The generated SQL is optimized by the client to take advantage of the strengths of the server's query optimizer, and sidesteps its weaknesses, so that execution is reasonably efficient.
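
    CL's core idea, translating a Boolean query into SQL, can be sketched for the simplest case of implicitly ANDed wildcard terms (the real grammar also handles OR, NOT and nested subexpressions). The schema is invented and SQLite stands in for Sybase:

```python
import sqlite3

def to_sql(terms):
    """Compile a list of wildcard patterns, implicitly ANDed, into one
    SQL statement via INTERSECT. A sketch, not CL's actual generator."""
    one = "SELECT concept_id FROM terms WHERE term LIKE ?"
    return " INTERSECT ".join([one] * len(terms)), terms

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE terms (concept_id TEXT, term TEXT)")
con.executemany("INSERT INTO terms VALUES (?, ?)", [
    ("C0018801", "heart failure"),
    ("C0018801", "cardiac failure"),
    ("C0018799", "heart disease"),
])

sql, params = to_sql(["heart%", "%failure"])
ids = [cid for (cid,) in con.execute(sql, params)]
```

    Pushing the Boolean logic into a single set-operation statement, rather than filtering rows client-side, is what lets the server's query optimizer do the work.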

  7. Astronomical Data Processing Using SciQL, an SQL Based Query Language for Array Data

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Scheers, B.; Kersten, M.; Ivanova, M.; Nes, N.

    2012-09-01

    SciQL (pronounced as ‘cycle’) is a novel SQL-based array query language for scientific applications with both tables and arrays as first-class citizens. SciQL lowers the barrier to adopting a relational DBMS (RDBMS) in scientific domains, because it includes functionality often found only in mathematics software packages. In this paper, we demonstrate the usefulness of SciQL for astronomical data processing using examples from the Transient Key Project of the LOFAR radio telescope; in particular, we show how the LOFAR light-curve database of all detected sources can be constructed by correlating sources across the spatial, frequency, time and polarisation domains.

  8. GEOMAGIA50: An archeointensity database with PHP and MySQL

    NASA Astrophysics Data System (ADS)

    Korhonen, K.; Donadini, F.; Riisager, P.; Pesonen, L. J.

    2008-04-01

    The GEOMAGIA50 database stores 3798 archeomagnetic and paleomagnetic intensity determinations dated to the past 50,000 years. It also stores details of the measurement setup for each determination, which are used for ranking the data according to prescribed reliability criteria. The ranking system aims to alleviate the data reliability problem inherent in this kind of data. GEOMAGIA50 is based on two popular open source technologies. The MySQL database management system is used for storing the data, whereas the functionality and user interface are provided by server-side PHP scripts. This technical brief gives a detailed description of GEOMAGIA50 from a technical viewpoint.
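
    A points-based reliability ranking of the kind described might be sketched as follows; the criterion names and weights are invented for illustration and do not reflect GEOMAGIA50's actual scheme:

```python
# Each record gains points for methodological safeguards recorded in
# its measurement-setup details. Criteria and weights are invented.
CRITERIA = {"pTRM_checks": 2, "anisotropy_corrected": 1, "n_samples_ge_5": 1}

def rank(record):
    """Sum the weights of the criteria a determination satisfies."""
    return sum(w for crit, w in CRITERIA.items() if record.get(crit))

records = [
    {"id": "a", "pTRM_checks": True, "anisotropy_corrected": True,
     "n_samples_ge_5": True},
    {"id": "b", "pTRM_checks": False, "n_samples_ge_5": True},
]
ranked = sorted(records, key=rank, reverse=True)
scores = {r["id"]: rank(r) for r in records}
```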

  9. The MAO NASU Plate Archive Database. Current Status and Perspectives

    NASA Astrophysics Data System (ADS)

    Pakuliak, L. K.; Sergeeva, T. P.

    2006-04-01

    The preliminary online version of the database of the MAO NASU plate archive is constructed on the basis of the relational database management system MySQL. It permits easy supplementing of the database with new collections of astronegatives, provides high flexibility in constructing SQL queries for data search optimization, PHP Basic Authorization-protected access to the administrative interface, and a wide range of search parameters. The current status of the database will be reported, and a brief description of the search engine and the means of database integrity support will be given. Methods and means of data verification and tasks for further development will be discussed.

  10. Supporting temporal queries on clinical relational databases: the S-WATCH-QL language.

    PubMed Central

    Combi, C.; Missora, L.; Pinciroli, F.

    1996-01-01

    Due to the ubiquitous and special nature of time, especially in clinical databases, there is a need for dedicated temporal data types and operators. In this paper we describe S-WATCH-QL (Structured Watch Query Language), a temporal extension of SQL, the widespread query language based on the relational model. S-WATCH-QL extends the well-known SQL by the addition of: a) temporal data types that allow the storage of information with different levels of granularity; b) historical relations that can store together both instantaneous valid times and intervals; c) temporal clauses, functions and predicates allowing the definition of complex temporal queries. PMID:8947722
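
    Standard SQL of the time had no temporal predicates, so a condition that a language like S-WATCH-QL expresses directly (e.g. "overlaps") must be spelled out by hand; this is the gap such extensions close. A sketch with an invented clinical schema:

```python
import sqlite3

# Therapies with validity intervals; find those overlapping a symptom
# episode. Schema and data are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE therapy (drug TEXT, t_start INTEGER, t_stop INTEGER)")
con.executemany("INSERT INTO therapy VALUES (?, ?, ?)", [
    ("aspirin",   1, 10),
    ("atenolol", 12, 20),
    ("statin",   30, 40),   # outside the episode
])
episode = (8, 14)  # symptom interval [8, 14]

# Two intervals overlap iff each starts before the other ends -- the
# condition a temporal OVERLAPS predicate would express in one word.
overlapping = [d for (d,) in con.execute(
    "SELECT drug FROM therapy WHERE t_start <= ? AND t_stop >= ? "
    "ORDER BY t_start", (episode[1], episode[0]))]
```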

  11. Using career ladders to motivate and retain employees: an implementation success story.

    PubMed

    Garletts, Joseph A

    2002-01-01

    In October 2000, Phoenix-based Sonora Quest Laboratories, LLC (SQL), commissioned The Gelfond Group to survey SQL employees. Responding to negative survey scores, SQL developed and implemented an entry-level career ladder for line staff of the specimen management/referral testing department. The program was piloted in February 2001, and was implemented fully shortly thereafter. The ladder was designed to provide job enrichment opportunities through company-conducted training and advancement provisions. It contained requirements for productivity and quality of work performed in addition to increasingly rigorous training and competency documentation. Employees were accountable for their own advancement and for ensuring that all documentation was complete. Advancement was automatic once requirements were completed. Pay increases accompanied each advancement on a predetermined scale. At the end of 12 months, employee turnover dropped from 39% to less than 20% annually. Both productivity and morale improved, and results on a second employee survey indicated dramatic improvement in five key areas. The career ladder concept has been replicated successfully in several other departments, including phlebotomy, and a six-tiered ladder is under development for the clinical laboratory. It will encompass CLA, MLT, and MT positions from entry level to technical coordinator.

  12. Mining the SDSS SkyServer SQL queries log

    NASA Astrophysics Data System (ADS)

    Hirota, Vitor M.; Santos, Rafael; Raddick, Jordan; Thakar, Ani

    2016-05-01

    SkyServer, the Internet portal for the Sloan Digital Sky Survey (SDSS) astronomical catalog, provides a set of tools that allows data access for astronomers and scientific education. One of SkyServer's data access interfaces allows users to enter ad-hoc SQL statements to query the catalog. SkyServer also presents some template queries that can be used as a basis for more complex queries. This interface has logged over 330 million queries submitted since 2001. It is expected that analysis of this data can be used to investigate usage patterns, identify potential new classes of queries, find similar queries, etc., and to shed some light on how users interact with the Sloan Digital Sky Survey data and how scientists have adopted the new paradigm of e-Science, which could in turn lead to enhancements on the user interfaces and experience in general. In this paper we review some approaches to SQL query mining, apply the traditional techniques used in the literature and present lessons learned, namely, that the general text mining approach for feature extraction and clustering does not seem to be adequate for this type of data, and, most importantly, we find that this type of analysis can result in very different queries being clustered together.
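    The paper's exact pipeline is not given in the abstract; the sketch below shows the generic bag-of-tokens approach the authors found inadequate: queries that differ only in literals collapse into the same cluster, while small wording changes split near-identical queries apart. The sample queries are illustrative SkyServer-style statements, not from the log.

```python
import re
from collections import defaultdict

# Hedged sketch of bag-of-keywords feature extraction for SQL queries.
def sql_features(query):
    """Lowercase keyword/identifier tokens, ignoring string literals and numbers."""
    q = re.sub(r"'[^']*'", "", query.lower())      # drop string literals
    return frozenset(re.findall(r"[a-z_][a-z0-9_.]*", q))

queries = [
    "SELECT ra, dec FROM PhotoObj WHERE r < 17.5",
    "SELECT ra, dec FROM PhotoObj WHERE r < 22",
    "SELECT z FROM SpecObj WHERE class = 'QSO'",
]
clusters = defaultdict(list)
for q in queries:
    clusters[sql_features(q)].append(q)

# The first two queries, differing only in a numeric threshold,
# collapse into a single cluster.
print(len(clusters))  # 2
```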

  13. NVST Data Archiving System Based On FastBit NoSQL Database

    NASA Astrophysics Data System (ADS)

    Liu, Ying-bo; Wang, Feng; Ji, Kai-fan; Deng, Hui; Dai, Wei; Liang, Bo

    2014-06-01

    The New Vacuum Solar Telescope (NVST) is a 1-meter vacuum solar telescope that aims to observe the fine structures of active regions on the Sun. The main tasks of the NVST are high resolution imaging and spectral observations, including the measurements of the solar magnetic field. The NVST has collected more than 20 million FITS files since it began routine observations in 2012 and produces a maximum of 120 thousand observational records in a day. Given the large number of files, effective archiving and retrieval of files becomes a critical and urgent problem. In this study, we implement a new data archiving system for the NVST based on the FastBit Not Only Structured Query Language (NoSQL) database. Compared to a relational database (i.e., MySQL; My Structured Query Language), the FastBit database shows distinct advantages in indexing and querying performance. In a large-scale database of 40 million records, the multi-field combined query response time of the FastBit database is about 15 times faster and fully meets the requirements of the NVST. Our study brings a new idea for massive astronomical data archiving and would contribute to the design of data management systems for other astronomical telescopes.
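    FastBit's query speed rests on bitmap indices. The toy sketch below, with illustrative column names rather than the NVST schema, shows why multi-field combined queries are fast under this scheme: each (column, value) pair maps to a bitmask, so a conjunctive query reduces to bitwise ANDs instead of row scans.

```python
from collections import defaultdict

# Hedged sketch of a bitmap index: each (column, value) pair is a Python
# integer bitmask; a multi-field query becomes a single bitwise AND.
records = [
    {"instrument": "TiO", "quality": "good"},
    {"instrument": "Ha",  "quality": "good"},
    {"instrument": "TiO", "quality": "bad"},
    {"instrument": "TiO", "quality": "good"},
]

index = defaultdict(int)                 # (column, value) -> bitmask
for i, rec in enumerate(records):
    for col, val in rec.items():
        index[(col, val)] |= 1 << i

# "instrument = TiO AND quality = good" is one AND over two bitmasks.
hits = index[("instrument", "TiO")] & index[("quality", "good")]
matching = [i for i in range(len(records)) if hits >> i & 1]
print(matching)  # [0, 3]
```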

  14. Integration in PACS of DICOM with TCP/IP, SQL, and X Windows

    NASA Astrophysics Data System (ADS)

    Reijns, Gerard L.

    1994-05-01

    The DICOM standard (Digital Imaging and Communications in Medicine) has been developed in order to obtain compatibility at the higher OSI levels. This paper describes the implementation of DICOM into our low-cost PACS, which uses standard software and standard protocols such as SQL, X Windows and TCP/IP as much as possible. We adopted the requirement that all messages on the communication network have to be DICOM compatible. The translation between DICOM messages and SQL commands, which drive the relational database, has been accommodated in our PACS supervisor. The translation takes only between 10 and 20 milliseconds. Images that will be used during the current day are stored in a distributed, parallel-operating image base for reasons of performance. Extensive use has been made of X Windows to visualize images. A maximum of 12 images can be displayed simultaneously, of which one selected image can be manipulated (e.g., magnified, rotated, etc.) without affecting the other displayed images. The emphasis of future work will be on performance measurements and modeling of our PACS, and on bringing the results of both methods into agreement with each other.
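    The paper does not detail its DICOM-to-SQL mapping, so the sketch below only illustrates the flavor of such a translation: a few well-known DICOM tags are mapped to columns of a hypothetical `studies` table, producing a parameterized INSERT statement of the kind a PACS supervisor might hand to the relational database.

```python
# Hedged sketch: map a handful of standard DICOM attribute tags to
# illustrative column names, then build a parameterized SQL INSERT.
DICOM_TO_COLUMN = {
    (0x0010, 0x0010): "patient_name",   # Patient's Name
    (0x0008, 0x0020): "study_date",     # Study Date
    (0x0008, 0x0060): "modality",       # Modality
}

def dicom_to_insert(dataset, table="studies"):
    cols, vals = zip(*[(DICOM_TO_COLUMN[tag], value)
                       for tag, value in sorted(dataset.items())
                       if tag in DICOM_TO_COLUMN])
    placeholders = ", ".join("?" for _ in cols)
    sql = f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"
    return sql, list(vals)

sql, params = dicom_to_insert({
    (0x0008, 0x0020): "19940215",
    (0x0008, 0x0060): "CT",
    (0x0010, 0x0010): "DOE^JOHN",
})
print(sql)     # INSERT INTO studies (study_date, modality, patient_name) VALUES (?, ?, ?)
print(params)  # ['19940215', 'CT', 'DOE^JOHN']
```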

  15. Technical Aspects of Interfacing MUMPS to an External SQL Relational Database Management System

    PubMed Central

    Kuzmak, Peter M.; Walters, Richard F.; Penrod, Gail

    1988-01-01

    This paper describes an interface connecting InterSystems MUMPS (M/VX) to an external relational DBMS, the SYBASE Database Management System. The interface enables MUMPS to operate in a relational environment and gives the MUMPS language full access to a complete set of SQL commands. MUMPS generates SQL statements as ASCII text and sends them to the RDBMS. The RDBMS executes the statements and returns ASCII results to MUMPS. The interface suggests that the language features of MUMPS make it an attractive tool for use in the relational database environment. The approach described in this paper separates MUMPS from the relational database. Positioning the relational database outside of MUMPS promotes data sharing and permits a number of different options to be used for working with the data. Other languages like C, FORTRAN, and COBOL can access the RDBMS database. Advanced tools provided by the relational database vendor can also be used. SYBASE is an advanced high-performance transaction-oriented relational database management system for the VAX/VMS and UNIX operating systems. SYBASE is designed using a distributed open-systems architecture, and is relatively easy to interface with MUMPS.

  16. Implementing CORAL: An Electronic Resource Management System

    ERIC Educational Resources Information Center

    Whitfield, Sharon

    2011-01-01

    A 2010 electronic resource management survey conducted by Maria Collins of North Carolina State University and Jill E. Grogg of University of Alabama Libraries found that the top six electronic resources management priorities included workflow management, communications management, license management, statistics management, administrative…

  17. Ensembl variation resources

    PubMed Central

    2010-01-01

    Background The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org. PMID:20459805

  18. Electronic Library: A TERI Experiment.

    ERIC Educational Resources Information Center

    Kar, Debal C.; Deb, Subrata; Kumar, Satish

    2003-01-01

    Discusses the development of Electronic Library at TERI (The Energy and Resources Institute, New Delhi). Highlights include: hardware and software used; the digital library/Virtual Electronic Library; directory of Internet journals; virtual reference resources; electronic collection/Physical Electronic Library; downloaded online full-length…

  19. Autophagy Regulatory Network - a systems-level bioinformatics resource for studying the mechanism and regulation of autophagy.

    PubMed

    Türei, Dénes; Földvári-Nagy, László; Fazekas, Dávid; Módos, Dezső; Kubisch, János; Kadlecsik, Tamás; Demeter, Amanda; Lenti, Katalin; Csermely, Péter; Vellai, Tibor; Korcsmáros, Tamás

    2015-01-01

    Autophagy is a complex cellular process having multiple roles, depending on tissue, physiological, or pathological conditions. Major post-translational regulators of autophagy are well known; however, they have not yet been collected comprehensively. The precise and context-dependent regulation of autophagy necessitates additional regulators, including transcriptional and post-transcriptional components that are listed in various datasets. Prompted by the lack of systems-level autophagy-related information, we manually collected the literature and integrated external resources to gain a high coverage autophagy database. We developed an online resource, Autophagy Regulatory Network (ARN; http://autophagy-regulation.org), to provide an integrated and systems-level database for autophagy research. ARN contains manually curated, imported, and predicted interactions of autophagy components (1,485 proteins with 4,013 interactions) in humans. We listed 413 transcription factors and 386 miRNAs that could regulate autophagy components or their protein regulators. We also connected the above-mentioned autophagy components and regulators with signaling pathways from the SignaLink 2 resource. The user-friendly website of ARN allows researchers without computational background to search, browse, and download the database. The database can be downloaded in SQL, CSV, BioPAX, SBML, PSI-MI, and Cytoscape CYS file formats. ARN has the potential to facilitate the experimental validation of novel autophagy components and regulators. In addition, ARN helps the investigation of transcription factors, miRNAs and signaling pathways implicated in the control of the autophagic pathway. The list of such known and predicted regulators could be important in pharmacological attempts against cancer and neurodegenerative diseases.

  20. Lightweight Data Systems in the Cloud: Costs, Benefits and Best Practices

    NASA Astrophysics Data System (ADS)

    Fatland, R.; Arendt, A. A.; Howe, B.; Hess, N. J.; Futrelle, J.

    2015-12-01

    We present here a simple analysis of both the cost and the benefit of using the cloud in environmental science circa 2016. We present this set of ideas to enable the potential 'cloud adopter' research scientist to explore and understand the tradeoffs in moving some aspect of their compute work to the cloud. We present examples, design patterns and best practices as an evolving body of knowledge that help optimize benefit to the research team. Thematically this generally means not starting from a blank page but rather learning how to find 90% of the solution to a problem pre-built. We will touch on four topics of interest. (1) Existing cloud data resources (NASA, WHOI BCO DMO, etc.) and how they can be discovered, used and improved. (2) How to explore, compare and evaluate cost and compute power from many cloud options, particularly in relation to data scale (size/complexity). (3) What are simple / fast 'Lightweight Data System' procedures that take from 20 minutes to one day to implement and that have a clear immediate payoff in environmental data-driven research. Examples include publishing a SQLShare URL at (EarthCube's) CINERGI as a registered data resource and creating executable papers on a cloud-hosted Jupyter instance, particularly IPython notebooks. (4) Translating the computational terminology landscape ('cloud', 'HPC cluster', 'hadoop', 'spark', 'machine learning') into examples from the community of practice to help the geoscientist build or expand their mental map. In the course of this discussion -- which is about resource discovery, adoption and mastery -- we provide direction to online resources in support of these themes.

  1. Development of Electronic Resources across Networks in Thailand.

    ERIC Educational Resources Information Center

    Ratchatavorn, Phandao

    2002-01-01

    Discusses the development of electronic resources across library networks in Thailand to meet user needs, particularly electronic journals. Topics include concerns about journal access; limited budgets for library acquisitions of journals; and sharing resources through a centralized database system that allows Web access to journals via Internet…

  2. Electronic Resource Management 2.0: Using Web 2.0 Technologies as Cost-Effective Alternatives to an Electronic Resource Management System

    ERIC Educational Resources Information Center

    Murray, Adam

    2008-01-01

    Designed to assist with the management of e-resources, electronic resource management (ERM) systems are time- and fund-consuming to purchase and maintain. Questions of system compatibility, data population, and workflow design/redesign can be difficult to answer; sometimes those answers are not what we'd prefer to hear. The two primary functions…

  3. Shaping the Electronic Library--The UW-Madison Approach.

    ERIC Educational Resources Information Center

    Dean, Charles W., Ed.; Frazier, Ken; Pope, Nolan F.; Gorman, Peter C.; Dentinger, Sue; Boston, Jeanne; Phillips, Hugh; Daggett, Steven C.; Lundquist, Mitch; McClung, Mark; Riley, Curran; Allan, Craig; Waugh, David

    1998-01-01

    This special theme section describes the University of Wisconsin-Madison's experience building its Electronic Library. Highlights include integrating resources and services; the administrative framework; the public electronic library, including electronic publishing capability and access to World Wide Web-based and other electronic resources;…

  4. Building and Managing Electronic Resources in Digital Era in India with Special Reference to IUCAA and NIV, Pune: A Comparative Case Study

    NASA Astrophysics Data System (ADS)

    Sahu, H. K.; Singh, S. N.

    2015-04-01

    This paper discusses and presents a comparative case study of two libraries in Pune, India, Inter-University Centre for Astronomy and Astrophysics and Information Centre and Library of National Institute of Virology (Indian Council of Medical Research). It compares how both libraries have managed their e-resource collections, including acquisitions, subscriptions, and consortia arrangements, while also developing a collection of their own resources, including pre-prints and publications, video lectures, and other materials in an institutional repository. This study illustrates how difficult it is to manage electronic resources in a developing country like India, even though electronic resources are used more than print resources. Electronic resource management can be daunting, but with a systematic approach, various problems can be solved, and use of the materials will be enhanced.

  5. Evaluation of Sub Query Performance in SQL Server

    NASA Astrophysics Data System (ADS)

    Oktavia, Tanty; Sujarwo, Surya

    2014-03-01

    The paper explores several sub query methods used in a query and their impact on query performance. The study uses an experimental approach to evaluate the performance of each sub query method combined with an indexing strategy. The sub query methods consist of in, exists, relational operator and relational operator combined with top operator. The experiments show that using a relational operator combined with an indexing strategy in a sub query gives greater performance than the same method without an indexing strategy, and also greater than the other methods. In summary, for applications that emphasize the performance of retrieving data from a database, it is better to use a relational operator combined with an indexing strategy. This study was done on Microsoft SQL Server 2012.
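    The paper benchmarks SQL Server 2012; the sketch below reproduces only the shape of the experiment (the same subquery measured with and without a supporting index) in SQLite, where absolute timings and method rankings may well differ from the paper's. Table names and data are illustrative.

```python
import sqlite3
import time

# Hedged sketch: time one EXISTS subquery before and after adding an index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, "west" if i % 10 == 0 else "east") for i in range(2000)])
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 2000) for i in range(20000)])

QUERY = """SELECT COUNT(*) FROM orders o
           WHERE EXISTS (SELECT 1 FROM customers c
                         WHERE c.id = o.customer_id AND c.region = 'west')"""

def timed_count():
    t0 = time.perf_counter()
    n = conn.execute(QUERY).fetchone()[0]
    return n, time.perf_counter() - t0

n_before, t_before = timed_count()
conn.execute("CREATE INDEX idx_customers_region ON customers (region, id)")
n_after, t_after = timed_count()

# 200 'west' customers x 10 orders each; the index must not change results.
print(n_before, n_after)  # 2000 2000
```

A real replication of the paper would repeat this for IN, EXISTS, a join on a relational operator, and TOP, averaging over many runs.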

  6. Version 1.00 programmer's tools used in constructing the INEL RML/analytical radiochemistry sample tracking database and its user interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Femec, D.A.

    This report describes two code-generating tools used to speed design and implementation of relational databases and user interfaces: CREATE-SCHEMA and BUILD-SCREEN. CREATE-SCHEMA produces the SQL commands that actually create and define the database. BUILD-SCREEN takes templates for data entry screens and generates the screen management system routine calls to display the desired screen. Both tools also generate the related FORTRAN declaration statements and precompiled SQL calls. Included with this report is the source code for a number of FORTRAN routines and functions used by the user interface. This code is broadly applicable to a number of different databases.
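    CREATE-SCHEMA's input format is not documented in the abstract; the sketch below only illustrates the general idea of such a code-generating tool: a table description is turned into the SQL DDL that actually creates and defines the database. The description format and table are hypothetical.

```python
# Hedged sketch of a CREATE-SCHEMA-style generator: a table description
# (name, type, nullability per column) is emitted as a CREATE TABLE statement.
def create_schema(table, columns):
    """columns: list of (name, sql_type, nullable) tuples."""
    defs = [
        f"    {name} {sql_type}{'' if nullable else ' NOT NULL'}"
        for name, sql_type, nullable in columns
    ]
    return f"CREATE TABLE {table} (\n" + ",\n".join(defs) + "\n);"

ddl = create_schema("sample", [
    ("sample_id", "INTEGER", False),
    ("analyte",   "VARCHAR(32)", False),
    ("received",  "DATE", True),
])
print(ddl)
```

The real tools additionally emitted the matching FORTRAN declarations and precompiled SQL calls from the same templates, keeping database and client code in sync.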

  7. Developing Humanities Collections in the Digital Age: Exploring Humanities Faculty Engagement with Electronic and Print Resources

    ERIC Educational Resources Information Center

    Kachaluba, Sarah Buck; Brady, Jessica Evans; Critten, Jessica

    2014-01-01

    This article is based on quantitative and qualitative research examining humanities scholars' understandings of the advantages and disadvantages of print versus electronic information resources. It explores how humanities' faculty members at Florida State University (FSU) use print and electronic resources, as well as how they perceive these…

  8. Using a Decision Grid Process to Build Consensus in Electronic Resources Cancellation Decisions

    ERIC Educational Resources Information Center

    Foudy, Gerri; McManus, Alesia

    2005-01-01

    Many libraries are expending an increasing part of their collections budgets on electronic resources. At the same time many libraries, especially those which are state funded, face diminishing budgets and high rates of inflation for serials subscriptions in all formats, including electronic resources. Therefore, many libraries need to develop ways…

  9. Clinical Databases for Chest Physicians.

    PubMed

    Courtwright, Andrew M; Gabriel, Peter E

    2018-04-01

    A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health conditions or exposures. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  10. Wireless data collection of self-administered surveys using tablet computers.

    PubMed

    Singleton, Kyle W; Lan, Mars; Arnold, Corey; Vahidi, Mani; Arangua, Lisa; Gelberg, Lillian; Bui, Alex A T

    2011-01-01

    The accurate and expeditious collection of survey data by coordinators in the field is critical in the support of research studies. Early methods that used paper documentation have slowly evolved into electronic capture systems. Indeed, tools such as REDCap and others illustrate this transition. However, many current systems are tailored to web browsers running on desktop/laptop computers, requiring keyboard and mouse input. We present a system that utilizes a touch screen interface running on a tablet PC with consideration for portability, limited screen space, wireless connectivity, and potentially inexperienced and low literacy users. The system was developed using C#, ASP.net, and SQL Server by multiple programmers over the course of a year. The system was developed in coordination with UCLA Family Medicine and is currently deployed for the collection of data in a group of Los Angeles area clinics of community health centers for a study on drug addiction and intervention.

  11. Creation of the First French Database in Primary Care Using the ICPC2: Feasibility Study.

    PubMed

    Lacroix-Hugues, V; Darmon, D; Pradier, C; Staccini, P

    2017-01-01

    The objective of our study was to assess the feasibility of gathering data stored in primary care Electronic Health Records (EHRs) in order to create a research database (PRIMEGE PACA project). The EHR software models of two office and patient data management systems were analyzed; anonymized data were extracted and imported into a MySQL database. An ETL procedure to encode free text with ICPC2 codes was implemented. Eleven general practitioners (GPs) were enrolled as "data producers" and data were extracted from 2012 to 2015. In this paper, we explain how to make this process feasible and illustrate its utility for estimating epidemiological indicators and assessing professional practice. Other software is currently being analyzed for integration and expansion of this panel of GPs. This experimentation is recognized as a robust framework and is considered to be the technical foundation of the first regional observatory of primary care data.

  12. Medical record management systems: criticisms and new perspectives.

    PubMed

    Frénot, S; Laforest, F

    1999-06-01

    The first generation of computerized medical records stored the data as text, but these records did not bring any improvement in information manipulation. The use of a relational database management system (DBMS) has largely solved this problem as it allows for data requests by using SQL. However, this requires data structuring which is not very appropriate to medicine. Moreover, the use of templates and icon user interfaces has introduced a deviation from the paper-based record (still existing). The arrival of hypertext user interfaces has proven to be of interest to fill the gap between the paper-based medical record and its electronic version. We think that further improvement can be accomplished by using a fully document-based system. We present the architecture, advantages and disadvantages of classical DBMS-based and Web/DBMS-based solutions. We also present a document-based solution and explain its advantages, which include communication, security, flexibility and genericity.

  13. Real-Time Discovery Services over Large, Heterogeneous and Complex Healthcare Datasets Using Schema-Less, Column-Oriented Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Begoli, Edmon; Dunning, Ted; Frasure, Charlie

    We present a service platform for schema-less exploration of data and discovery of patient-related statistics from healthcare data sets. The architecture of this platform is motivated by the need for fast, schema-less, and flexible approaches to SQL-based exploration and discovery of information embedded in common, heterogeneously structured healthcare data sets and supporting components (electronic health records, practice management systems, etc.). The motivating use cases described in the paper are clinical trials candidate discovery and a treatment effectiveness analysis. Following the use cases, we discuss the key features and software architecture of the platform, the underlying core components (Apache Parquet, Drill, the web services server), and the runtime profiles and performance characteristics of the platform. We conclude by showing dramatic speedup with some approaches, and the performance tradeoffs and limitations of others.

  14. T-LECS: The Control Software System for MOIRCS

    NASA Astrophysics Data System (ADS)

    Yoshikawa, T.; Omata, K.; Konishi, M.; Ichikawa, T.; Suzuki, R.; Tokoku, C.; Katsuno, Y.; Nishimura, T.

    2006-07-01

    MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru Telescope. We present the system design of the control software system for MOIRCS, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS is a PC-Linux based network distributed system. Two PCs equipped with the focal plane array system operate the two HAWAII2 detectors, and another PC is used for user interfaces and a database server. These PCs also control various devices for observations distributed on a TCP/IP network. T-LECS has three interfaces: an interface to the devices and two user interfaces. One of the user interfaces connects to the integrated observation control system (Subaru Observation Software System) for observers, and the other provides the system developers with direct access to the devices of MOIRCS. To facilitate communication between these interfaces, we employ an SQL database system.

  15. Wireless Data Collection of Self-administered Surveys using Tablet Computers

    PubMed Central

    Singleton, Kyle W.; Lan, Mars; Arnold, Corey; Vahidi, Mani; Arangua, Lisa; Gelberg, Lillian; Bui, Alex A.T.

    2011-01-01

    The accurate and expeditious collection of survey data by coordinators in the field is critical in the support of research studies. Early methods that used paper documentation have slowly evolved into electronic capture systems. Indeed, tools such as REDCap and others illustrate this transition. However, many current systems are tailored to web browsers running on desktop/laptop computers, requiring keyboard and mouse input. We present a system that utilizes a touch screen interface running on a tablet PC with consideration for portability, limited screen space, wireless connectivity, and potentially inexperienced and low literacy users. The system was developed using C#, ASP.net, and SQL Server by multiple programmers over the course of a year. The system was developed in coordination with UCLA Family Medicine and is currently deployed for the collection of data in a group of Los Angeles area clinics of community health centers for a study on drug addiction and intervention. PMID:22195187

  16. The utilization of poisons information resources in Australasia.

    PubMed

    Fountain, J S; Reith, D M; Holt, A

    2014-02-01

    To identify the poisons information resources most commonly utilized by Australasian Emergency Department staff, and to examine attitudes regarding the benefits and user experience of the electronic products used. A survey tool was mailed to six Emergency Departments each in New Zealand and Australia, to be answered by medical and nursing staff. Eighty-six (71.7%) responses were received from the 120 survey forms sent: 70 (81%) responders were medical staff, the remainder nursing. Electronic resources were the most accessed poisons information resource in New Zealand, while Australians preferred discussion with a colleague; Poisons Information Centers were the least utilized resource in both countries. With regard to electronic resources, further differences were recognized between the countries in ease of access, ease of use, quality of information and quantity of information, with New Zealand rated better on all four themes. New Zealand ED staff favored electronic poisons information resources while Australians preferred discussion with a colleague. That Poisons Information Centers were the least utilized resource was surprising. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Find the fish: using PROC SQL to build a relational database

    USGS Publications Warehouse

    Fabrizio, Mary C.; Nelson, Scott N.

    1995-01-01

    Reliable estimates of abundance and survival, gained through mark-recapture studies, are necessary to better understand how to manage and restore lake trout populations in the Great Lakes. Working with a 24-year data set from a mark-recapture study conducted in Lake Superior, we attempted to disclose information on tag shedding by examining recaptures of double-tagged fish. The data set consisted of 64,288 observations on fish which had been marked with one or more tags; a subset of these fish had been marked with two tags at initial capture. Although DATA and PROC statements could be used to obtain some of the information we sought, these statements could not be used to extract a complete set of results from the double-tagging experiments. We therefore used SQL processing to create three tables representing the same information but in a fully normalized relational structure. In addition, we created indices to efficiently examine complex relationships among the individual capture records. This approach allowed us to obtain all the information necessary to estimate tag retention through subsequent modeling. We believe that our success with SQL was due in large part to its ability to simultaneously scan the same table more than once and to permit consideration of other tables in sub-queries.
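    The authors used SAS PROC SQL; the sketch below reproduces the key trick they credit, scanning the same table twice in one query via a self-join, in SQLite. The table, columns, and data are illustrative stand-ins for the normalized tag records, not the Lake Superior data.

```python
import sqlite3

# Hedged sketch: a self-join over a normalized tag table finds fish
# that carried two tags, the starting point for tag-retention estimates.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (fish_id INTEGER, tag_id TEXT)")
conn.executemany("INSERT INTO tags VALUES (?, ?)", [
    (1, "A001"), (1, "A002"),   # double-tagged
    (2, "B001"),                # single-tagged
    (3, "C001"), (3, "C002"),   # double-tagged
])

# t1.tag_id < t2.tag_id keeps one row per tag pair and excludes self-pairs.
double_tagged = conn.execute("""
    SELECT t1.fish_id FROM tags t1 JOIN tags t2
        ON t1.fish_id = t2.fish_id AND t1.tag_id < t2.tag_id
""").fetchall()
print(sorted(double_tagged))  # [(1,), (3,)]
```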

  18. Database constraints applied to metabolic pathway reconstruction tools.

    PubMed

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented with a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.

  19. Integrating Radar Image Data with Google Maps

    NASA Technical Reports Server (NTRS)

    Chapman, Bruce D.; Gibas, Sarah

    2010-01-01

    A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA's Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Utilizing NASA's internal AIRSAR site, the new Web site features more sophisticated visualization tools that enable the general public to have access to these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that took care of the software for the interactive map, and three that were for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back-end of the AIRSAR Web site was updated in order to access the MySQL database. To do this, a few of the scripts needed to be modified; specifically three Perl scripts that query that database. The database connections were then updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented that replaced one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.

  20. Extending SQL to Support Privacy Policies

    NASA Astrophysics Data System (ADS)

    Ghazinour, Kambiz; Pun, Sampson; Majedi, Maryam; Chinaci, Amir H.; Barker, Ken

    Increasing concerns over Internet applications that violate user privacy by exploiting (back-end) database vulnerabilities must be addressed both to protect customer privacy and to ensure that corporate strategic assets remain trustworthy. This chapter describes an extension to database catalogues and Structured Query Language (SQL) for supporting privacy in Internet applications, such as in social networks, e-health, e-government, etc. The idea is to introduce new predicates to SQL commands to capture common privacy requirements, such as purpose, visibility, generalization, and retention for both mandatory and discretionary access control policies. The contribution is that corporations, when creating the underlying databases, will be able to define what their mandatory privacy policies are, with which all application users have to comply. Furthermore, each application user, when providing their own data, will be able to define their own privacy policies with which other users have to comply. The extension is supported with underlying catalogues and algorithms. The experiments demonstrate a very reasonable overhead for the extension. The result is a low-cost mechanism to create new systems that are privacy aware and also to transform legacy databases to their privacy-preserving equivalents. Although the examples are from social networks, one can apply the results to data security and user privacy of other enterprises as well.
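
    A hypothetical sketch of how a purpose predicate could be enforced is shown below, with the privacy catalogue modeled as an application-side lookup table (the column names, purposes, and `users` schema are invented; the chapter's actual extension lives inside the SQL engine and its catalogues, not in application code):

```python
import sqlite3

# Hypothetical privacy catalogue: column -> purposes for which it may be released.
PRIVACY_CATALOGUE = {
    "name":  {"service"},
    "email": {"service", "marketing"},
    "dob":   {"service"},
}

def select_for_purpose(con, columns, purpose):
    """Reject any column whose declared purposes do not include the stated
    one, mimicking a purpose predicate attached to a SELECT. Columns are
    validated against the catalogue before being interpolated into SQL."""
    for col in columns:
        if purpose not in PRIVACY_CATALOGUE.get(col, set()):
            raise PermissionError("column %r not released for purpose %r" % (col, purpose))
    return con.execute("SELECT %s FROM users" % ", ".join(columns)).fetchall()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, email TEXT, dob TEXT)")
con.execute("INSERT INTO users VALUES ('Ada', 'ada@example.org', '1990-01-01')")
```

    The point of doing this inside the engine rather than in each application, as the chapter proposes, is that the policy then applies uniformly to every query path.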

  1. The development and application of electronic information system for safety administration of newborns in the rooming-in care.

    PubMed

    Wang, Fang; Dong, Jian-Cheng; Chen, Jian-Rong; Wu, Hui-Qun; Liu, Man-Hua; Xue, Li-Ly; Zhu, Xiang-Hua; Wang, Jian

    2015-01-01

    To independently research and develop an electronic information system for the safe administration of newborns in rooming-in care, and to investigate the effects of its clinical application. Using Visual Studio 2010 with a SQL Server 2005 database and Microsoft visual programming tools, an interactive mobile information system was established, integrating data, information and knowledge with information structures, information processes and information technology. From July 2011 to July 2012, a total of 210 newborns from the rooming-in care unit of the Obstetrics Department of the Second Affiliated Hospital of Nantong University were chosen and randomly divided into two groups: the information system monitoring group (110 cases) and the regular monitoring group (100 cases). The incidence of abnormal events and the degree of satisfaction were recorded and calculated. ① The wireless electronic information system has four main functions: risk scaling display, identity recognition display, a nursing round notes board and a health education board; ② statistically significant differences were found between the two groups in both the active or passive discovery rate of abnormal events in the newborns (P<0.05) and the satisfaction of the mothers and their families (P<0.05); ③ the system was sensitive and reliable, and the wireless transmission of information was correct and safe. The system is highly practicable in the clinic and can ensure the safety of newborns while improving satisfaction.

  2. Checklist Manifesto for Electronic Resources: Getting Ready for the Fiscal Year and Beyond

    ERIC Educational Resources Information Center

    England, Lenore; Fu, Li; Miller, Stephen

    2011-01-01

    Organization of electronic resources workflow is critical in the increasingly complicated and complex world of library management. A simple organizational tool that can be readily applied to electronic resources management (ERM) is the use of checklists. Based on the principles discussed in The Checklist Manifesto: How to Get Things Right, the…

  3. The VO-Dance web application at the IA2 data center

    NASA Astrophysics Data System (ADS)

    Molinaro, Marco; Knapic, Cristina; Smareglia, Riccardo

    2012-09-01

    The Italian center for Astronomical Archives (IA2, http://ia2.oats.inaf.it) is a national infrastructure project of the Italian National Institute for Astrophysics (Istituto Nazionale di AstroFisica, INAF) that provides services for the astronomical community. Besides data hosting for the Large Binocular Telescope (LBT) Corporation, the Galileo National Telescope (Telescopio Nazionale Galileo, TNG) Consortium and other telescopes and instruments, IA2 offers proprietary and public data access through user portals (both developed and mirrored) and deploys resources complying with the Virtual Observatory (VO) standards. Archiving systems and web interfaces are developed to be extremely flexible about adding new instruments from other telescopes. VO resource publishing, along with the data access portals, implements the International Virtual Observatory Alliance (IVOA) protocols, providing astronomers with new ways of analyzing data. Given the large variety of data flavours and IVOA standards, the need arises for tools to easily accomplish data ingestion and data publishing. This paper describes the VO-Dance tool, which IA2 started developing to address VO resource publishing in a dynamic way from already existing database tables or views. The tool consists of a Java web application, potentially DBMS- and platform-independent, that stores the services' metadata and information internally, exposes RESTful endpoints to accept VO queries for these services, and dynamically translates calls to these endpoints into SQL queries coherent with the published table or view. In response to the call, VO-Dance translates the database answer back in a VO-compliant way.
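
    The endpoint-to-SQL translation step can be sketched roughly as below. The parameter names follow the IVOA cone-search convention (RA, DEC, SR in degrees), but the view name, column names, and the simplistic bounding-box predicate (a real cone search constrains angular distance, not a rectangle) are illustrative only, not VO-Dance's actual implementation:

```python
def cone_search_sql(view, params):
    """Rewrite cone-search URL parameters as SQL against a published view.
    A bounding box stands in for the true angular-distance constraint."""
    ra, dec, sr = float(params["RA"]), float(params["DEC"]), float(params["SR"])
    return (
        "SELECT * FROM %s WHERE ra BETWEEN %f AND %f AND dec BETWEEN %f AND %f"
        % (view, ra - sr, ra + sr, dec - sr, dec + sr)
    )

sql = cone_search_sql("obs_view", {"RA": "10", "DEC": "20", "SR": "1"})
```

    Because the mapping from endpoint to table/view is data, not code, new services can be published without redeploying the application, which is the "dynamic" aspect described above.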

  4. E-waste management and resources recovery in France.

    PubMed

    Vadoudi, Kiyan; Kim, Junbeum; Laratte, Bertrand; Lee, Seung-Jin; Troussier, Nadège

    2015-10-01

    There are various issues of concern regarding electronic waste management, such as the toxicity of hazardous materials and the collection, recycling and recovery of useful resources. To understand the fate of electronic waste after collection and recycling, a products and materials flow analysis should be performed. This is a critical need, as material resources are becoming increasingly scarce and recycling may be able to provide secondary sources for new materials in the future. In this study, we investigate electronic waste systems, specifically the resource recovery or recycling aspects, as well as mapping electronic waste flows based on collection data in France. Approximately 1,588,453 t of new electrical and electronic equipment were sold in the French market in 2010. Of this amount, 430,000 t of electronic waste were collected, while the remaining 1,128,444 t stayed in stock. Furthermore, the total recycled amounts were 354,106 t and 11,396 t, respectively. The main electronic waste materials were ferrous metals (37%), plastic (22%), aluminium (12%), copper (11%) and glass (7%). This study will contribute to developing sustainable electronic waste and resource recycling systems in France. © The Author(s) 2015.

  5. An Array Library for Microsoft SQL Server with Astrophysical Applications

    NASA Astrophysics Data System (ADS)

    Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.

    2012-09-01

    Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques beyond what is used for ordinary files. Relational database systems have been successfully used to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At the very least, data subsetting and preprocessing have to be done inside the server process. Out-of-the-box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management, but lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that enables us to address these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. The library is also designed to integrate seamlessly with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on the fly, from SQL code, inside the database server process.
We are currently testing the prototype with two different scientific data sets: The Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory project will use it to store galaxy simulation data.
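
    One way such a library can represent a fixed-size numeric array inside a binary database column is as a length-prefixed packed blob, sketched here with Python's struct module. The on-disk layout of the actual Array Library is not described above, so this little-endian format is purely an assumption for illustration:

```python
import struct

def pack_array(values, typecode="d"):
    """Serialize a fixed-size numeric array into a binary blob, prefixed
    with its element count -- the kind of payload a varbinary column holds."""
    return struct.pack("<I%d%s" % (len(values), typecode), len(values), *values)

def unpack_array(blob, typecode="d"):
    """Recover the array: read the count, then that many packed elements."""
    (n,) = struct.unpack_from("<I", blob, 0)
    return list(struct.unpack_from("<%d%s" % (n, typecode), blob, 4))
```

    A server-side extension would perform the unpacking inside the database process, so that subsetting and math can run next to the data rather than shipping blobs to the client.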

  6. A Customizable Dashboarding System for Watershed Model Interpretation

    NASA Astrophysics Data System (ADS)

    Easton, Z. M.; Collick, A.; Wagena, M. B.; Sommerlot, A.; Fuka, D.

    2017-12-01

    Stakeholders, including policymakers, agricultural water managers, and small farm managers, can benefit from the outputs of commonly run watershed models. However, the information that each stakeholder needs is different. While policymakers are often interested in the broader effects that small farm management may have on a watershed during extreme events or over long periods, farmers are often interested in field-specific effects at daily or seasonal timescales. To provide stakeholders with the ability to analyze and interpret data from large-scale watershed models, we have developed a framework that can support custom exploration of the large datasets produced. For the volume of data produced by these models, SQL-based data queries are not efficient; thus, we employ a "Not Only SQL" (NoSQL) query language, which allows data to scale in both quantity and query volume. We demonstrate a customizable Dashboarding system that allows stakeholders to create custom "dashboards" to summarize model output specific to their needs. Dashboarding provides the dynamic, purpose-based visual interface needed to display one-to-many database linkages, so that information can be presented for a single time period or monitored dynamically over time, and it allows a user to quickly define focus areas of interest for their analysis. We utilize a single watershed model that is run four times daily with a combined set of climate projections; the outputs are then indexed and added to an ElasticSearch datastore. ElasticSearch is a NoSQL search engine built on top of Apache Lucene, a free and open-source information retrieval software library. Aligned with the ElasticSearch project is the open-source visualization and analysis system, Kibana, which we utilize for custom stakeholder dashboarding. The dashboards create a visualization of the stakeholder-selected analysis and can be extended to recommend robust strategies to support decision-making.
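
    Indexing model output into Elasticsearch typically goes through its bulk API, whose newline-delimited JSON request body can be built offline as sketched below. The index name and document fields are invented for illustration, and no cluster is contacted here:

```python
import json

def bulk_index_lines(index, docs):
    """Build the newline-delimited JSON body expected by Elasticsearch's
    _bulk endpoint: an action line followed by a document line, per doc."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

body = bulk_index_lines("watershed-model", [
    {"subbasin": 12, "date": "2017-06-01", "flow_cms": 3.4},
])
```

    Once documents are indexed this way, Kibana dashboards query them directly, which is what lets each stakeholder define their own views without new server-side code.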

  7. 18 CFR 390.1 - Electronic registration.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Electronic registration. 390.1 Section 390.1 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY PROCEDURAL RULES ELECTRONIC REGISTRATION § 390.1 Electronic registration. Any person who...

  8. 18 CFR 390.1 - Electronic registration.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Electronic registration. 390.1 Section 390.1 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY PROCEDURAL RULES ELECTRONIC REGISTRATION § 390.1 Electronic registration. Any person who...

  9. 18 CFR 390.1 - Electronic registration.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Electronic registration. 390.1 Section 390.1 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY PROCEDURAL RULES ELECTRONIC REGISTRATION § 390.1 Electronic registration. Any person who...

  10. 18 CFR 390.1 - Electronic registration.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Electronic registration. 390.1 Section 390.1 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY PROCEDURAL RULES ELECTRONIC REGISTRATION § 390.1 Electronic registration. Any person who...

  11. 18 CFR 390.1 - Electronic registration.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Electronic registration. 390.1 Section 390.1 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY PROCEDURAL RULES ELECTRONIC REGISTRATION § 390.1 Electronic registration. Any person who...

  12. Footprint Representation of Planetary Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Walter, S. H. G.; Gasselt, S. V.; Michael, G.; Neukum, G.

    The geometric outline of remote sensing image data, the so-called footprint, can be represented as a number of coordinate tuples. These polygons are associated with corresponding attribute information, such as orbit name, ground and image resolution, solar longitude and illumination conditions, to create a powerful basis for classification of planetary experiment data. Speed, handling and extended capabilities are the reasons for using geodatabases to store and access these data types. Techniques for such a spatial database of footprint data are demonstrated using the Relational Database Management System (RDBMS) PostgreSQL, spatially enabled by the PostGIS extension. As examples, footprints of the HRSC and OMEGA instruments, both onboard ESA's Mars Express orbiter, are generated and connected to attribute information. The aim is to provide high-resolution footprints of the OMEGA instrument to the science community for the first time and make them available for web-based mapping applications like the "Planetary Interactive GIS-on-the-Web Analyzable Database" (PIGWAD) produced by the USGS. Map overlays with HRSC or other instruments like MOC and THEMIS (footprint maps are already available for these instruments and can be integrated into the database) allow on-the-fly intersection and comparison as well as extended statistics on the data. Footprint polygons are generated one by one using standard software provided by the instrument teams. Attribute data are calculated and stored together with the geometric information. In the case of HRSC, the coordinates of the footprints are already available in the VICAR label of each image file. Using the VICAR RTL and PostgreSQL's libpq C library, they are loaded into the database using the Well-Known Text (WKT) notation of the Open Geospatial Consortium, Inc. (OGC).
    For the OMEGA instrument, image data are read using IDL routines developed and distributed by the OMEGA team. Image outlines are exported together with relevant attribute data to the industry-standard Shapefile format. These files are translated to a Structured Query Language (SQL) command sequence suitable for insertion into the PostGIS/PostgreSQL database using the shp2pgsql data loader provided by the PostGIS software. PostgreSQL's advanced features, such as geometry types, rules, operators and functions, allow complex spatial queries and on-the-fly processing of data at the DBMS level, e.g. generalisation of the outlines. Processing done by the DBMS, visualisation via GIS systems and utilisation for web-based applications like mapservers will be demonstrated.
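
    Building a WKT footprint and the kind of INSERT statement a loader like shp2pgsql emits can be sketched as follows. The table and column names and the SRID are illustrative assumptions (planetary data would use a body-specific spatial reference, not Earth's 4326):

```python
def footprint_wkt(corners):
    """Build an OGC WKT POLYGON from corner (lon, lat) tuples, closing
    the ring by repeating the first corner as required by WKT."""
    ring = list(corners) + [corners[0]]
    return "POLYGON((%s))" % ", ".join("%g %g" % (lon, lat) for lon, lat in ring)

def insert_footprint_sql(table, orbit, wkt, srid=4326):
    # Geometry built from WKT text plus an ordinary attribute column,
    # mirroring the SQL command sequence a shapefile loader produces.
    return ("INSERT INTO %s (orbit, outline) VALUES ('%s', ST_GeomFromText('%s', %d));"
            % (table, orbit, wkt, srid))

wkt = footprint_wkt([(0, 0), (1, 0), (1, 1), (0, 1)])
```

    Once the outlines are stored as geometry, PostGIS operators (intersection, area, simplification) run directly in SQL, which is what enables the on-the-fly overlays described above.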

  13. Determining the level of awareness of the physicians in using the variety of electronic information resources and the effecting factors.

    PubMed

    Papi, Ahmad; Ghazavi, Roghayeh; Moradi, Salimeh

    2015-01-01

    Understanding the medical community's awareness of the types of information resources available for quick and easy access to information is an imperative task in medical research and in the management of treatment. The present study aimed to determine the level of awareness of physicians in using various electronic information resources and the factors affecting it. This study was a descriptive survey. The data collection tool was a researcher-made questionnaire. The study population included all the physicians and specialty physicians of the teaching hospitals affiliated to Isfahan University of Medical Sciences and numbered 350. The sample size, based on Morgan's formula, was set at 180. The content validity of the tool was confirmed by library and information professionals, and the reliability was 95%. Descriptive statistics were used, with analysis in SPSS software version 19. On reviewing the physicians' need to obtain information on several occasions, the need for information in conducting research was reported by the greatest number of physicians (91.9%), and the usage of information resources, especially electronic resources, formed 65.4% as the highest rate with regard to meeting the information needs of the physicians. Among the electronic information databases, the maximum awareness was related to Medline, with 86.5%. Among the various electronic information resources, the highest awareness (43.3%) was related to e-journals. The highest usage (36%) was also of the same source. The studied physicians considered the most effective deterrents to the use of electronic information resources to be being too busy and lacking time. Despite the importance of electronic information resources for the physician community, there was no comprehensive knowledge of these resources. This can lead to less usage of these resources.
Therefore, careful planning is necessary in the hospital libraries in order to introduce the facilities and full capabilities of the mentioned resources and methods of information retrieval.

  14. Medical Optimization Network for Space Telemedicine Resources

    NASA Technical Reports Server (NTRS)

    Rubin, D.; Shah, R. V.; Kerstman, E. L.; Reyes, D.; Mulcahy, R.; Antonsen, E.

    2017-01-01

    INTRODUCTION: Long-duration missions beyond low Earth orbit introduce new constraints to the space medical system. Beyond the traditional limitations in mass, power, and volume, consideration must be given to other factors such as the inability to evacuate to Earth, communication delays, and limitations in clinical skillsets. As NASA develops the medical system for an exploration mission, it must be able to evaluate the trade space of which resources will be most important. The Medical Optimization Network for Space Telemedicine Resources (MONSTR) was developed over the past year for this reason, and is now a system for managing data pertaining to medical resources and their relative importance when addressing medical conditions. METHODS: The MONSTR web application, with a Microsoft SQL database backend, was developed and made accessible to Tableau v9.3 for analysis and visualization. The database was initially populated with a list of medical conditions of concern for an exploration mission taken from the Integrated Medical Model (IMM), a probabilistic model designed to quantify in-flight medical risk. A team of physicians working within the Exploration Medical Capability Element of NASA's Human Research Program compiled a list of diagnostic and treatment medical resources required to address best- and worst-case scenarios of each medical condition using a terrestrial standard of care, and entered these data into the system. This list included both tangible resources (e.g. medical equipment, medications) and intangible resources (e.g. clinical skills required to perform a procedure). The physician team then assigned criticality values to each instance of a resource, representing the importance of that resource to diagnosing or treating its associated condition(s). Medical condition probabilities of occurrence during a Mars mission were pulled from the IMM and imported into the MONSTR database for use within a resource criticality-weighting algorithm.
DISCUSSION: The MONSTR tool is a novel approach to assess the relative value of individual resources needed for the diagnosis and treatment of medical conditions. Future work will add resources for prevention and long term care of these conditions. Once data collection is complete, MONSTR will provide the operational and research communities at NASA with information to support informed decisions regarding areas of research investment, future crew training, and medical supplies manifested as part of any exploration medical system.
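
    The criticality-weighting step can be sketched as below. The (resource, condition, criticality) triples, the probability values, and the simple sum-of-products scoring are illustrative assumptions for this sketch, not the actual MONSTR algorithm or IMM data:

```python
from collections import defaultdict

def rank_resources(assignments, probabilities):
    """Weight each resource's criticality by the per-mission probability of
    the condition it serves, summed over conditions, and rank descending.
    assignments: (resource, condition, criticality) triples;
    probabilities: condition -> probability of occurrence (made-up values)."""
    scores = defaultdict(float)
    for resource, condition, criticality in assignments:
        scores[resource] += criticality * probabilities[condition]
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranked = rank_resources(
    [("ultrasound", "kidney stone", 5),
     ("splint", "fracture", 4),
     ("ultrasound", "fracture", 2)],
    {"kidney stone": 0.1, "fracture": 0.05},
)
```

    Under a scheme like this, a resource useful across several probable conditions can outrank one that is essential for a single rare condition, which is exactly the trade-space question the tool is built to answer.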

  15. The Relevancy of Graduate Curriculum to Human Resource Professionals' Electronic Communication.

    ERIC Educational Resources Information Center

    Hoell, Robert C.; Henry, Gordon O.

    2003-01-01

    Electronic communications of human resource professionals and the content of 23 university human resource management courses were categorized using the Human Resource Certification Institute's body of knowledge. Differences between proportion of topics discussed and topics covered in curricula suggest some topics are over- or undertaught.…

  16. The Human Phenotype Ontology project: linking molecular biology and disease through phenotype data

    PubMed Central

    Köhler, Sebastian; Doelken, Sandra C.; Mungall, Christopher J.; Bauer, Sebastian; Firth, Helen V.; Bailleul-Forestier, Isabelle; Black, Graeme C. M.; Brown, Danielle L.; Brudno, Michael; Campbell, Jennifer; FitzPatrick, David R.; Eppig, Janan T.; Jackson, Andrew P.; Freson, Kathleen; Girdea, Marta; Helbig, Ingo; Hurst, Jane A.; Jähn, Johanna; Jackson, Laird G.; Kelly, Anne M.; Ledbetter, David H.; Mansour, Sahar; Martin, Christa L.; Moss, Celia; Mumford, Andrew; Ouwehand, Willem H.; Park, Soo-Mi; Riggs, Erin Rooney; Scott, Richard H.; Sisodiya, Sanjay; Vooren, Steven Van; Wapner, Ronald J.; Wilkie, Andrew O. M.; Wright, Caroline F.; Vulto-van Silfhout, Anneke T.; de Leeuw, Nicole; de Vries, Bert B. A.; Washington, Nicole L.; Smith, Cynthia L.; Westerfield, Monte; Schofield, Paul; Ruef, Barbara J.; Gkoutos, Georgios V.; Haendel, Melissa; Smedley, Damian; Lewis, Suzanna E.; Robinson, Peter N.

    2014-01-01

    The Human Phenotype Ontology (HPO) project, available at http://www.human-phenotype-ontology.org, provides a structured, comprehensive and well-defined set of 10,088 classes (terms) describing human phenotypic abnormalities and 13,326 subclass relations between the HPO classes. In addition we have developed logical definitions for 46% of all HPO classes using terms from ontologies for anatomy, cell types, function, embryology, pathology and other domains. This allows interoperability with several resources, especially those containing phenotype information on model organisms such as mouse and zebrafish. Here we describe the updated HPO database, which provides annotations of 7,278 human hereditary syndromes listed in OMIM, Orphanet and DECIPHER to classes of the HPO. Various meta-attributes such as frequency, references and negations are associated with each annotation. Several large-scale projects worldwide utilize the HPO for describing phenotype information in their datasets. We have therefore generated equivalence mappings to other phenotype vocabularies such as LDDB, Orphanet, MedDRA, UMLS and phenoDB, allowing integration of existing datasets and interoperability with multiple biomedical resources. We have created various ways to access the HPO database content using flat files, a MySQL database, and Web-based tools. All data and documentation on the HPO project can be found online. PMID:24217912
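
    The flat-file distribution mentioned above uses the OBO format; a minimal parser for its `[Term]` stanzas can be sketched as follows. The two sample terms are real HPO classes, but the parser handles only the small subset of OBO syntax shown:

```python
def parse_obo_terms(text):
    """Collect OBO [Term] stanzas into a dict keyed by term id; every
    tag maps to a list of values (tags such as is_a can repeat)."""
    terms, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line == "[Term]":
            current = {}
        elif current is not None and ": " in line:
            key, value = line.split(": ", 1)
            current.setdefault(key, []).append(value)
            if key == "id":
                terms[value] = current
    return terms

sample = """[Term]
id: HP:0000118
name: Phenotypic abnormality

[Term]
id: HP:0000478
name: Abnormality of the eye
is_a: HP:0000118
"""
terms = parse_obo_terms(sample)
```

    The subclass relations recovered this way (`is_a` lines) are what carry the 13,326 relations between HPO classes in the full file.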

  17. Recent improvements to Binding MOAD: a resource for protein–ligand binding affinities and structures

    PubMed Central

    Ahmed, Aqeel; Smith, Richard D.; Clark, Jordan J.; Dunbar, James B.; Carlson, Heather A.

    2015-01-01

    For over 10 years, Binding MOAD (Mother of All Databases; http://www.BindingMOAD.org) has been one of the largest resources for high-quality protein–ligand complexes and associated binding affinity data. Binding MOAD has grown at the rate of 1994 complexes per year, on average. Currently, it contains 23 269 complexes and 8156 binding affinities. Our annual updates curate the data using a semi-automated literature search of the references cited within the PDB file, and we have recently upgraded our website and added new features and functionalities to better serve Binding MOAD users. In order to eliminate the legacy application server of the old platform and to accommodate new changes, the website has been completely rewritten in the LAMP (Linux, Apache, MySQL and PHP) environment. The improved user interface incorporates current third-party plugins for better visualization of protein and ligand molecules, and it provides features like sorting, filtering and filtered downloads. In addition to the field-based searching, Binding MOAD now can be searched by structural queries based on the ligand. In order to remove redundancy, Binding MOAD records are clustered in different families based on 90% sequence identity. The new Binding MOAD, with the upgraded platform, features and functionalities, is now equipped to better serve its users. PMID:25378330

  18. Seismic waveform modeling over cloud

    NASA Astrophysics Data System (ADS)

    Luo, Cong; Friederich, Wolfgang

    2016-04-01

    With fast-growing computational technologies, numerical simulation of seismic wave propagation has achieved huge successes, and obtaining synthetic waveforms through numerical simulation receives an increasing amount of attention from seismologists. However, computational seismology is a data-intensive research field, and the numerical packages usually come with a steep learning curve. Users are expected to master a considerable amount of computer knowledge and data-processing skills. Training users to use the numerical packages and to correctly access and utilize the computational resources is a troublesome task. In addition, access to HPC is a common difficulty for many users. To solve these problems, a cloud-based solution dedicated to shallow seismic waveform modeling has been developed with state-of-the-art web technologies. It is a web platform integrating both software and hardware in a multilayer architecture: a well-designed SQL database serves as the data layer, while HPC and a dedicated pipeline for it form the business layer. Through this platform, users no longer need to compile and manipulate various packages on a local machine within a local network to perform a simulation. By providing professional access to the computational code through its interfaces and delivering our computational resources to users over the cloud, the platform lets users customize a simulation at expert level and submit and run the job through it.

  19. Availability, Level of Use and Constraints to Use of Electronic Resources by Law Lecturers in Public Universities in Nigeria

    ERIC Educational Resources Information Center

    Amusa, Oyintola Isiaka; Atinmo, Morayo

    2016-01-01

    (Purpose) This study surveyed the level of availability, use and constraints to use of electronic resources among law lecturers in Nigeria. (Methodology) Five hundred and fifty-two law lecturers were surveyed and four hundred and forty-two responded. (Results) Data analysis revealed that the level of availability of electronic resources for the…

  20. Effective Knowledge Development in Secondary Schools Educational Level in Contemporary Information Age: Assessment of Availability of Electronic Information Resources in Nigerian School Libraries

    ERIC Educational Resources Information Center

    Bello, Stephen Adeyemi; Ojo, Funmilayo Roseline; Ocheje, Charles Bala

    2015-01-01

    Relevant electronic information resources are, in the contemporary information age, a necessity to buttress teaching and learning for effective knowledge development in educational institutions. The purpose of the study is to know the state of availability of electronic information resources in government owned secondary school libraries in Ijumu Local…

  1. Use of traditional versus electronic medical-information resources by residents and interns.

    PubMed

    Phua, Jason; Lim, T K

    2007-05-01

    Little is known about the information-seeking behaviour of junior doctors with regard to their use of traditional versus electronic sources of information. To evaluate the amount of time junior doctors spent using various medical-information resources and how useful they perceived these resources to be. A questionnaire study of all residents and interns in a tertiary teaching hospital in July and August 2004. In total, 134 doctors returned the completed questionnaires (response rate 79.8%). They spent the most time using traditional resources like teaching sessions and print textbooks, rating them as most useful. However, electronic resources like MEDLINE, UpToDate, and online review articles also ranked highly. Original research articles were less popular. Residents and interns prefer traditional sources of medical information. Although some electronic resources are rated highly, more work is required to remove the barriers to evidence-based medicine.

  2. The design and implementation of GML data management information system based on PostgreSQL

    NASA Astrophysics Data System (ADS)

    Zhang, Aiguo; Wu, Qunyong; Xu, Qifeng

    2008-10-01

    GML expresses geographic information in text and provides an extensible, standard way of encoding spatial information. At present, GML data are managed at the level of whole documents. Managed this way, querying and updating GML data are inefficient, and memory demands are high when a document is comparatively large. To address this, the paper puts forward an approach to GML data management based on PostgreSQL. It designs four kinds of queries: queries of metadata, queries of geometry based on properties, queries of properties based on spatial information, and queries of spatial data based on location. It also designs and implements visualization of the queried WKT data.
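
    Converting GML geometry into WKT for storage or display can be sketched for the simplest case, a point; real GML documents carry many more geometry types, namespace versions, and coordinate-order conventions than this minimal sketch handles:

```python
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"

def gml_point_to_wkt(gml_text):
    """Convert a GML <Point> fragment to WKT, the text form a spatially
    enabled database ingests (covers only <coordinates> with 'x,y')."""
    root = ET.fromstring(gml_text)
    coords = root.find("{%s}coordinates" % GML_NS).text.strip()
    x, y = coords.split(",")
    return "POINT(%s %s)" % (x.strip(), y.strip())

fragment = ('<gml:Point xmlns:gml="http://www.opengis.net/gml">'
            '<gml:coordinates>119.3,26.1</gml:coordinates></gml:Point>')
```

    Storing geometry extracted this way in typed columns, rather than keeping whole GML documents, is what enables the property- and location-based queries the paper describes.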

  3. Monitoring SLAC High Performance UNIX Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high-performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high-performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed, since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
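
    The script-driven approach can be sketched as below, with sqlite3 standing in for MySQL and the schema and metric names invented for illustration. Unlike an RRD, which overwrites old samples in fixed-size round-robin archives, rows here are only ever appended, which is the data-integrity property at stake:

```python
import sqlite3
import time

def store_metrics(con, host, metrics):
    """Append one polling cycle of Ganglia-style metrics for a host.
    Every sample becomes a new timestamped row; nothing is overwritten."""
    con.execute("CREATE TABLE IF NOT EXISTS metrics "
                "(host TEXT, name TEXT, value REAL, ts REAL)")
    ts = time.time()
    con.executemany("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                    [(host, name, value, ts) for name, value in metrics.items()])
    con.commit()

con = sqlite3.connect(":memory:")
store_metrics(con, "node01", {"load_one": 0.42, "mem_free_kb": 1048576.0})
```

    A cron-style script calling such a function each polling interval is all that is needed to mirror Ganglia's collected data into a relational store for later plotting.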

  4. MaGnET: Malaria Genome Exploration Tool.

    PubMed

    Sharman, Joanna L; Gerloff, Dietlind L

    2013-09-15

    The Malaria Genome Exploration Tool (MaGnET) is a software tool enabling intuitive 'exploration-style' visualization of functional genomics data relating to the malaria parasite, Plasmodium falciparum. MaGnET provides innovative integrated graphic displays for different datasets, including genomic location of genes, mRNA expression data, protein-protein interactions and more. Any selection of genes to explore made by the user is easily carried over between the different viewers for different datasets, and can be changed interactively at any point (without returning to a search). Free online use (Java Web Start) or download (Java application archive and MySQL database; requires local MySQL installation) at http://malariagenomeexplorer.org. Contact: joanna.sharman@ed.ac.uk or dgerloff@ffame.org. Supplementary data are available at Bioinformatics online.

  5. MySQL/PHP web database applications for IPAC proposal submission

    NASA Astrophysics Data System (ADS)

    Crane, Megan K.; Storrie-Lombardi, Lisa J.; Silbermann, Nancy A.; Rebull, Luisa M.

    2008-07-01

    The Infrared Processing and Analysis Center (IPAC) is NASA's multi-mission center of expertise for long-wavelength astrophysics. Proposals for various IPAC missions and programs are ingested via MySQL/PHP web database applications. Proposers use web forms to enter coversheet information and upload PDF files related to the proposal. Upon proposal submission, a unique directory is created on the webserver into which all of the uploaded files are placed. The coversheet information is converted into a PDF file using a PHP extension called FPDF. The files are concatenated into one PDF file using the command-line tool pdftk and then forwarded to the review committee. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.
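    The submission flow described above — a unique server-side directory created per proposal, uploaded files placed into it, then concatenated and forwarded — can be sketched as follows. This is not IPAC's PHP code; the function name and use of `uuid` for directory naming are assumptions for illustration.

```python
import tempfile
import uuid
from pathlib import Path

webroot = Path(tempfile.mkdtemp())  # stand-in for the webserver's upload area

def ingest_proposal(uploads):
    """uploads: mapping of filename -> file bytes.
    Creates a unique directory for this submission and stores the files."""
    prop_dir = webroot / uuid.uuid4().hex
    prop_dir.mkdir()
    for fname, data in uploads.items():
        (prop_dir / fname).write_bytes(data)
    return prop_dir

d = ingest_proposal({"coversheet.pdf": b"%PDF-1.4 cover",
                     "science.pdf": b"%PDF-1.4 science"})
stored = sorted(p.name for p in d.iterdir())
```

In the real pipeline, a tool like pdftk would then be invoked (e.g. via a shell call) on the stored files to produce the single concatenated PDF sent to the review committee.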

  6. LSST Resources for the Community

    NASA Astrophysics Data System (ADS)

    Jones, R. Lynne

    2011-01-01

    LSST will generate 100 petabytes of images and 20 petabytes of catalogs, covering 18,000-20,000 square degrees of area sampled every few days over a total of ten years -- all publicly available and exquisitely calibrated. The primary access to this data will be through Data Access Centers (DACs). DACs will provide access to catalogs of sources (single detections from individual images) and objects (associations of sources from multiple images). Simple user interfaces or direct SQL queries at the DAC can return user-specified portions of data from catalogs or images. More complex manipulations of the data, such as calculating multi-point correlation functions or creating alternative photo-z measurements on terabyte-scale data, can be completed with the DAC's own resources. Even more data-intensive computations requiring access to large numbers of image pixels at the petabyte scale could also be conducted at the DAC, using compute resources allocated in a similar manner to a TAC. DAC resources will be available to all individuals in member countries or institutes and LSST science collaborations. DACs will also assist investigators with requests for allocations at national facilities such as the Petascale Computing Facility, TeraGrid, and Open Science Grid. Using data on this scale requires new approaches to accessibility and analysis which are being developed through interactions with the LSST Science Collaborations. We are producing simulated images (as might be acquired by LSST) based on models of the universe and generating catalogs from these images (as well as from the base model) using the LSST data management framework in a series of data challenges. The resulting images and catalogs are being made available to the science collaborations to verify the algorithms and develop user interfaces. All LSST software is open source and available online, including preliminary catalog formats. We encourage feedback from the community.
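    A "direct SQL query returning a user-specified portion of the catalog" might look like the sketch below. The `object` table and its columns (`ra`, `dec`, `rmag`) are invented stand-ins, not LSST's actual schema, and `sqlite3` stands in for the DAC's database engine.

```python
import sqlite3

cat = sqlite3.connect(":memory:")
cat.execute("CREATE TABLE object (id INTEGER, ra REAL, dec REAL, rmag REAL)")
cat.executemany("INSERT INTO object VALUES (?, ?, ?, ?)",
                [(1, 150.1,  2.2, 21.5),   # in the sky box, bright enough
                 (2, 150.3,  2.4, 24.9),   # in the box, too faint
                 (3, 210.0, -5.0, 19.0)])  # outside the box

# A user-specified portion: a small sky box plus a magnitude cut
rows = cat.execute(
    "SELECT id FROM object "
    "WHERE ra BETWEEN 150.0 AND 150.5 "
    "AND dec BETWEEN 2.0 AND 2.5 AND rmag < 22").fetchall()
```

At petabyte scale the same logical query would of course run against a distributed, spatially partitioned catalog rather than a single table scan.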

  7. You Have "How Many" Spreadsheets? Rethinking Electronic Resource Management

    ERIC Educational Resources Information Center

    Rux, Erika; Borchert, Theresa

    2010-01-01

    As libraries face a veritable explosion of electronic resources and as the interconnectedness of print and online resources becomes increasingly complicated, many librarians are challenged to find efficient and cost-friendly ways to manage these resources. In this article, the authors describe how a team of people from various library departments…

  8. Website creation and resource management: developing collaborative strategies for asynchronous interaction with library users.

    PubMed

    Hopkins, Mark E; Summers-Ables, Joy E; Clifton, Shari C; Coffman, Michael A

    2011-06-01

    To make electronic resources available to library users while effectively harnessing intellectual capital within the library, ultimately fostering the library's use of technology to interact asynchronously with its patrons (users). The methods used in the project included: (1) developing a new library website to facilitate the creation, management, accessibility, maintenance and dissemination of library resources; and (2) establishing ownership by those who participated in the project, while creating effective work allocation strategies through the implementation of a content management system that allowed the library to manage cost, complexity and interoperability. Preliminary results indicate that contributors to the system benefit from an increased understanding of the library's resources and add content valuable to library patrons. These strategies have helped promote the manageable creation and maintenance of electronic content in accomplishing the library's goal of interacting with its patrons. Establishment of a contributive system for adding to the library's electronic resources and electronic content has been successful. Further work will look at improving asynchronous interaction, particularly highlighting accessibility of electronic content and resources. © 2010 The authors. Health Information and Libraries Journal © 2010 Health Libraries Group.

  9. The Usage of Association Rule Mining to Identify Influencing Factors on Deafness After Birth.

    PubMed

    Shahraki, Azimeh Danesh; Safdari, Reza; Gahfarokhi, Hamid Habibi; Tahmasebian, Shahram

    2015-12-01

    Providing complete, high-quality health care services plays an important role in enabling people to understand the factors related to personal and social health and to choose suitable healthy behaviors in order to achieve a healthy life. To this end, demographic and clinical data are collected, and this large volume of data is a valuable resource for analyzing, exploring and discovering valuable information and relationships. This study used association rule mining techniques to identify the factors affecting hearing loss after birth in Iran. The survey is a data-oriented study whose population comprised questionnaires gathered in several provinces of the country. First, all questionnaire data were loaded into tables in SQL Server, followed by data entry using custom software written in C#.NET; association rule algorithms were then run in SQL Server Data Tools and Clementine to determine the rules and hidden patterns in the gathered data. Two factors, the number of deaf brothers and the degree of consanguinity of the parents, have a significant impact on the severity of an individual's deafness. Also, when the severity of hearing loss is moderately severe or greater, people use hearing aids, and men are less interested in using them. In fact, in families with consanguineous marriage of parents who are first-degree or second-degree relatives (girl/boy cousins), and especially first-degree, more people have severe hearing loss or deafness, and in the use of hearing aids the patient's gender matters more than the severity of the hearing loss.
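    The support and confidence measures underlying association rule mining can be sketched in a few lines. The records below are invented stand-ins for the questionnaire data, not the study's dataset.

```python
# Each record is the set of attributes observed for one respondent.
records = [
    {"consanguineous_parents", "severe_loss"},
    {"consanguineous_parents", "severe_loss"},
    {"consanguineous_parents", "moderate_loss"},
    {"severe_loss"},
]

def support(itemset):
    """Fraction of records containing every item in the itemset."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent):
    """Of records matching the antecedent, the fraction also matching
    the consequent -- the strength of the rule antecedent => consequent."""
    return support(antecedent | consequent) / support(antecedent)

sup = support({"consanguineous_parents", "severe_loss"})
conf = confidence({"consanguineous_parents"}, {"severe_loss"})
```

Tools such as SQL Server Data Tools and Clementine automate the search over all candidate itemsets (e.g. with Apriori-style pruning); the measures they report are exactly these.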

  10. DRUMS: Disk Repository with Update Management and Select option for high throughput sequencing data.

    PubMed

    Nettling, Martin; Thieme, Nils; Both, Andreas; Grosse, Ivo

    2014-02-04

    New technologies for analyzing biological samples, like next-generation sequencing, are producing a growing amount of data together with quality scores. Moreover, software tools (e.g., for mapping sequence reads, calculating transcription factor binding probabilities, estimating regions enriched for epigenetic modifications or determining single nucleotide polymorphisms) increase this amount of position-specific DNA-related data even further. Hence, querying the data becomes challenging and expensive and is often implemented using specialised hardware. In addition, picking specific data as fast as possible is increasingly important in many fields of science. The general problem of handling big data sets was addressed by developing specialized databases like HBase, HyperTable or Cassandra. However, these database solutions also require specialized or distributed hardware, leading to expensive investments. To the best of our knowledge, there is no database capable of (i) storing billions of position-specific DNA-related records, (ii) performing fast and resource-saving requests, and (iii) running on a single standard computer. Here, we present DRUMS (Disk Repository with Update Management and Select option), satisfying demands (i)-(iii). It tackles the weaknesses of traditional databases while handling position-specific DNA-related data in an efficient manner. DRUMS is capable of storing up to billions of records. Moreover, it is optimized for the single lookups and range requests that are needed constantly for computations in bioinformatics. To validate the power of DRUMS, we compare it to the widely used MySQL database on two biological data sets, using standard desktop hardware as the test environment. DRUMS outperforms MySQL in writing and reading records by a factor of two up to a factor of 10,000, and it can work with significantly larger data sets. Our work focuses on mid-sized data sets of up to several billion records without requiring cluster technology. Storing position-specific data is a general problem, and the concept presented here is a generalized approach; hence, it can easily be applied to other fields of bioinformatics.
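    Why position-sorted storage makes range requests cheap can be seen in a tiny sketch: with records kept ordered by genomic position, a range request is two binary searches plus a contiguous slice, instead of a full scan. The positions and payloads below are illustrative, not DRUMS's on-disk format.

```python
from bisect import bisect_left, bisect_right

positions = [101, 205, 350, 351, 420, 900]     # sorted record keys
values    = ["a", "b",  "c",  "d",  "e", "f"]  # payload per position

def range_request(lo, hi):
    """Return payloads for all records with lo <= position <= hi."""
    i = bisect_left(positions, lo)    # first position >= lo
    j = bisect_right(positions, hi)   # one past last position <= hi
    return values[i:j]

hits = range_request(200, 400)
```

On disk the same idea means range requests translate into sequential reads of adjacent blocks, which is where most of the speedup over an unordered row store comes from.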

  11. Using Controlled Vocabularies and Semantics to Improve Ocean Data Discovery (Invited)

    NASA Astrophysics Data System (ADS)

    Chandler, C. L.; Groman, R. C.; Shepherd, A.; Allison, M. D.; Kinkade, D.; Rauch, S.; Wiebe, P. H.; Glover, D. M.

    2013-12-01

    The Biological and Chemical Oceanography Data Management Office (BCO-DMO) was created in late 2006, by combining the formerly independent data management offices for the U.S. GLOBal Ocean ECosystems Dynamics (GLOBEC) and U.S. Joint Global Ocean flux Study (JGOFS) programs. BCO-DMO staff members work with investigators to publish data from research projects funded by the NSF Geosciences Directorate (GEO) Division of Ocean Sciences (OCE) Biological and Chemical Oceanography Sections and Polar Programs (PLR) Antarctic Sciences Organisms & Ecosystems Program (ANT). Since 2006, researchers have been contributing new data to the BCO-DMO data system. As the data from new research efforts have been added to the data previously shared by U.S. GLOBEC and U.S. JGOFS researchers, the BCO-DMO system has developed into a rich repository of data from ocean, coastal, and Great Lakes research programs. The metadata records for the original research program data (prior to 2006) were stored in human-readable flat files of text, translated on-demand to Web-retrievable files. Beginning in 2006, the metadata records from multiple data systems managed by BCO-DMO were ingested into a relational database (MySQL). Since that time, efforts have been made to incorporate lists of controlled vocabulary terms for key information concepts stored in the MySQL database (e.g. names of research programs, deployments, instruments and measurements). This presents a challenge for a data system that includes legacy data and is continually expanding with the addition of new contributions. Over the years, BCO-DMO has developed a series of data delivery systems driven by the supporting metadata. Improved access to research data, a primary goal of the BCO-DMO project, is achieved through geospatial and text-based data access systems that support data discovery, access, display, assessment, integration, and export of data resources. 
The addition of a semantically-enabled search capability improves data discovery options particularly for those investigators whose research interests are cross-domain and multi-disciplinary. Current efforts by BCO-DMO staff members are focused on identifying globally unique, persistent identifiers to unambiguously identify resources of interest curated by and available from BCO-DMO. The process involves several essential components: (1) identifying a trusted authoritative source of complementary content and the appropriate contact; (2) determining the globally unique, persistent identifier system for resources of interest and (3) negotiating the requisite syntactic and semantic exchange systems. A variety of technologies have been deployed including: (1) controlled vocabulary term lists for some of the essential concepts/classes; (2) the Ocean Data Ontology; (3) publishing content as Linked Open Data and (4) SPARQL queries and inference. The final results are emerging as a semantic layer comprising domain-specific controlled vocabularies typed to community standard definitions, an ontology with the concepts and relationships needed to describe ocean data, a semantically-enabled faceted search, and inferencing services. We are exploring use of these technologies to improve the accuracy of the BCO-DMO data collection and to facilitate exchange of information with complementary ocean data repositories. Integrating a semantic layer into the BCO-DMO data system architecture improves data and information resource discovery, access and integration.

  12. The Tri-Agency Climate Education (TrACE) Catalog: Promoting collaboration, effective practice, and a robust portfolio by sharing educational resources developed across NASA, NOAA & NSF climate education initiatives

    NASA Astrophysics Data System (ADS)

    McDougall, C.; Martin, A.; Givens, S. M.; Yue, S.; Wilson, C. E.; Karsten, J. L.

    2012-12-01

    The Tri-Agency Climate Education (TrACE) Catalog is an online, interactive, searchable and browsable web product driven by a database backend. TrACE was developed for and by the community of educators, scientists, and Federal agency representatives involved in a tri-agency collaboration for climate education. NASA, NOAA, and NSF are working together to strategically coordinate and support a portfolio of projects focused on climate literacy and education in formal and informal learning environments. The activities of the tri-agency collaboration, including annual meetings for principal investigators and the ongoing development of a nascent common evaluation framework, have created a strong national network for effectively engaging diverse audiences with the principles of climate literacy (see Eos Vol. 92, No. 24, 14 June 2011). TrACE is a tool for the climate education community that promotes the goals of the tri-agency collaboration to leverage existing resources, minimize duplicate efforts, and facilitate communication among this emergent community of scientists and educators. TrACE was born as "The Matrix," a product of the 2011 Second Annual NASA, NOAA and NSF Climate Change Education Principal Investigators Meeting (see McDougall, Wilson, Martin & Knippenberg, 2011, Abstract ED21B-0583 presented at 2011 Fall Meeting, AGU, San Francisco, CA.) Meeting attendees were asked to populate a pen-and-paper matrix with all of the activities or deliverables they had created or anticipated creating as part of their NOAA/NASA/NSF-funded project. During the 2012 Third Annual Tri-Agency PI Meeting, projects were given the opportunity to add and update their products and deliverables. In the intervening year, the dataset comprising the Matrix was converted to a MySQL database, with a standardized taxonomy and minimum criteria for inclusion, and further developed into the interactive TrACE Catalog. 
In the fall of 2012, the TrACE Catalog web product will be made publicly available. The catalog currently contains information about 204 educational products and resources, representing 81 federally funded projects, categorized by audience type (e.g., K-12 students, public, decision makers, scientists) and resource type (e.g., curriculum, electronic media & tools, exhibits). The web interface will allow for searching, sorting, and browsing of available educational resources by audience type, product type, funding agency, and geographical region. Using this tool, PIs working on similar efforts or in similar bioregions will be able to locate, learn from, and collaborate with each other. The dataset is also useful for visualizing and assessing the breadth and depth of the tri-agency portfolio. In this poster presentation, representatives from the three collaborating agencies will demonstrate the functionality of the TrACE Catalog and the dataset that drives it. We will invite others who are working on similar efforts to add their anticipated/existing products.

  13. UbSRD: The Ubiquitin Structural Relational Database.

    PubMed

    Harrison, Joseph S; Jacobs, Tim M; Houlihan, Kevin; Van Doorslaer, Koenraad; Kuhlman, Brian

    2016-02-22

    The structurally defined ubiquitin-like homology fold (UBL) can engage in several unique protein-protein interactions and many of these complexes have been characterized with high-resolution techniques. Using Rosetta's structural classification tools, we have created the Ubiquitin Structural Relational Database (UbSRD), an SQL database of features for all 509 UBL-containing structures in the PDB, allowing users to browse these structures by protein-protein interaction and providing a platform for quantitative analysis of structural features. We used UbSRD to define the recognition features of ubiquitin (UBQ) and SUMO observed in the PDB and the orientation of the UBQ tail while interacting with certain types of proteins. While some of the interaction surfaces on UBQ and SUMO overlap, each molecule has distinct features that aid in molecular discrimination. Additionally, we find that the UBQ tail is malleable and can adopt a variety of conformations upon binding. UbSRD is accessible as an online resource at rosettadesign.med.unc.edu/ubsrd. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Simple Web-based interactive key development software (WEBiKEY) and an example key for Kuruna (Poaceae: Bambusoideae)1

    PubMed Central

    Attigala, Lakshmi; De Silva, Nuwan I.; Clark, Lynn G.

    2016-01-01

    Premise of the study: Programs that are user-friendly and freely available for developing Web-based interactive keys are scarce and most of the well-structured applications are relatively expensive. WEBiKEY was developed to enable researchers to easily develop their own Web-based interactive keys with fewer resources. Methods and Results: A Web-based multiaccess identification tool (WEBiKEY) was developed that uses freely available Microsoft ASP.NET technologies and an SQL Server database for Windows-based hosting environments. WEBiKEY was tested for its usability with a sample data set, the temperate woody bamboo genus Kuruna (Poaceae). Conclusions: WEBiKEY is freely available to the public and can be used to develop Web-based interactive keys for any group of species. The interactive key we developed for Kuruna using WEBiKEY enables users to visually inspect characteristics of Kuruna and identify an unknown specimen as one of seven possible species in the genus. PMID:27144109

  15. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high-performance computational resources to bear on this task. Our research group built a novel high-performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.

  16. MSDB: A Comprehensive Database of Simple Sequence Repeats.

    PubMed

    Avvaru, Akshay Kumar; Saxena, Saketh; Sowpati, Divya Tej; Mishra, Rakesh Kumar

    2017-06-01

    Microsatellites, also known as Simple Sequence Repeats (SSRs), are short tandem repeats of 1-6 nt motifs present in all genomes, particularly eukaryotes. Besides their usefulness as genome markers, SSRs have been shown to perform important regulatory functions, and variations in their length at coding regions are linked to several disorders in humans. Microsatellites show a taxon-specific enrichment in eukaryotic genomes, and some may be functional. MSDB (Microsatellite Database) is a collection of >650 million SSRs from 6,893 species including Bacteria, Archaea, Fungi, Plants, and Animals. This database is by far the most exhaustive resource to access and analyze SSR data of multiple species. In addition to exploring data in a customizable tabular format, users can view and compare the data of multiple species simultaneously using our interactive plotting system. MSDB is developed using the Django framework and MySQL. It is freely available at http://tdb.ccmb.res.in/msdb. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
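    The core operation behind an SSR database — detecting tandem repeats of 1-6 nt motifs in a sequence — can be sketched with a back-referencing regular expression. This is a rough illustration, not MSDB's algorithm; in particular, the minimum of three copies required here is an arbitrary choice for the example.

```python
import re

# A lazily-matched motif of 1-6 bases, followed by at least two more
# copies of itself (so three copies minimum in total).
SSR_RE = re.compile(r"([ACGT]{1,6}?)\1{2,}")

def find_ssrs(seq):
    """Return (motif, full repeat tract) pairs, scanning left to right."""
    return [(m.group(1), m.group(0)) for m in SSR_RE.finditer(seq)]

hits = find_ssrs("GGATATATATCCCTTAGTAGTAGTAA")
```

Real SSR catalogs typically apply motif-length-dependent thresholds (e.g. longer minimum tracts for mononucleotide repeats) and normalize cyclic permutations of the same motif; both refinements layer naturally on top of this detection step.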

  17. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-05-04

    This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  18. The ISB Cancer Genomics Cloud: A Flexible Cloud-Based Platform for Cancer Genomics Research.

    PubMed

    Reynolds, Sheila M; Miller, Michael; Lee, Phyliss; Leinonen, Kalle; Paquette, Suzanne M; Rodebaugh, Zack; Hahn, Abigail; Gibbs, David L; Slagel, Joseph; Longabaugh, William J; Dhankani, Varsha; Reyes, Madelyn; Pihl, Todd; Backus, Mark; Bookman, Matthew; Deflaux, Nicole; Bingham, Jonathan; Pot, David; Shmulevich, Ilya

    2017-11-01

    The ISB Cancer Genomics Cloud (ISB-CGC) is one of three pilot projects funded by the National Cancer Institute to explore new approaches to computing on large cancer datasets in a cloud environment. With a focus on Data as a Service, the ISB-CGC offers multiple avenues for accessing and analyzing The Cancer Genome Atlas, TARGET, and other important references such as GENCODE and COSMIC using the Google Cloud Platform. The open approach allows researchers to choose approaches best suited to the task at hand: from analyzing terabytes of data using complex workflows to developing new analysis methods in common languages such as Python, R, and SQL; to using an interactive web application to create synthetic patient cohorts and to explore the wealth of available genomic data. Links to resources and documentation can be found at www.isb-cgc.org. Cancer Res; 77(21); e7-10. ©2017 American Association for Cancer Research.

  19. EVpedia: a community web portal for extracellular vesicles research.

    PubMed

    Kim, Dae-Kyum; Lee, Jaewook; Kim, Sae Rom; Choi, Dong-Sic; Yoon, Yae Jin; Kim, Ji Hyun; Go, Gyeongyun; Nhung, Dinh; Hong, Kahye; Jang, Su Chul; Kim, Si-Hyun; Park, Kyong-Su; Kim, Oh Youn; Park, Hyun Taek; Seo, Ji Hye; Aikawa, Elena; Baj-Krzyworzeka, Monika; van Balkom, Bas W M; Belting, Mattias; Blanc, Lionel; Bond, Vincent; Bongiovanni, Antonella; Borràs, Francesc E; Buée, Luc; Buzás, Edit I; Cheng, Lesley; Clayton, Aled; Cocucci, Emanuele; Dela Cruz, Charles S; Desiderio, Dominic M; Di Vizio, Dolores; Ekström, Karin; Falcon-Perez, Juan M; Gardiner, Chris; Giebel, Bernd; Greening, David W; Gross, Julia Christina; Gupta, Dwijendra; Hendrix, An; Hill, Andrew F; Hill, Michelle M; Nolte-'t Hoen, Esther; Hwang, Do Won; Inal, Jameel; Jagannadham, Medicharla V; Jayachandran, Muthuvel; Jee, Young-Koo; Jørgensen, Malene; Kim, Kwang Pyo; Kim, Yoon-Keun; Kislinger, Thomas; Lässer, Cecilia; Lee, Dong Soo; Lee, Hakmo; van Leeuwen, Johannes; Lener, Thomas; Liu, Ming-Lin; Lötvall, Jan; Marcilla, Antonio; Mathivanan, Suresh; Möller, Andreas; Morhayim, Jess; Mullier, François; Nazarenko, Irina; Nieuwland, Rienk; Nunes, Diana N; Pang, Ken; Park, Jaesung; Patel, Tushar; Pocsfalvi, Gabriella; Del Portillo, Hernando; Putz, Ulrich; Ramirez, Marcel I; Rodrigues, Marcio L; Roh, Tae-Young; Royo, Felix; Sahoo, Susmita; Schiffelers, Raymond; Sharma, Shivani; Siljander, Pia; Simpson, Richard J; Soekmadji, Carolina; Stahl, Philip; Stensballe, Allan; Stępień, Ewa; Tahara, Hidetoshi; Trummer, Arne; Valadi, Hadi; Vella, Laura J; Wai, Sun Nyunt; Witwer, Kenneth; Yáñez-Mó, María; Youn, Hyewon; Zeidler, Reinhard; Gho, Yong Song

    2015-03-15

    Extracellular vesicles (EVs) are spherical bilayered proteolipids, harboring various bioactive molecules. Due to the complexity of the vesicular nomenclatures and components, online searches for EV-related publications and vesicular components are currently challenging. We present an improved version of EVpedia, a public database for EVs research. This community web portal contains a database of publications and vesicular components, identification of orthologous vesicular components, bioinformatic tools and a personalized function. EVpedia includes 6879 publications, 172 080 vesicular components from 263 high-throughput datasets, and has been accessed more than 65 000 times from more than 750 cities. In addition, about 350 members from 73 international research groups have participated in developing EVpedia. This free web-based database might serve as a useful resource to stimulate the emerging field of EV research. The web site was implemented in PHP, Java, MySQL and Apache, and is freely available at http://evpedia.info. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Visualizing Phenology and Climate Data at the National Scale

    NASA Astrophysics Data System (ADS)

    Rosemartin, A.; Marsh, L.

    2013-12-01

    Nature's Notebook is the USA National Phenology Network's national-scale plant and animal phenology observation program, designed to address the challenges posed by global change and its impacts on ecosystems and human health. Since its inception in 2009, 2,500 participants in Nature's Notebook have submitted 2.3 million records on the phenology of 17,000 organisms across the United States. An information architecture has been developed to facilitate collaboration and participatory data collection and digitization. Browser-based and mobile applications support data submission, and a MySQL/Drupal multi-site infrastructure enables data storage, access and discovery. Web services are available for both input and export of data resources. In this presentation we will focus on a tool for visualizing phenology data at the national scale. Effective data exploration for this multi-dimensional dataset requires the ability to plot sites, select species and phenophases, graph organismal phenology through time, and view integrated precipitation and temperature data. We will demonstrate the existing tool's capacity, discuss future directions and solicit feedback from the community.

  1. UCbase 2.0: ultraconserved sequences database (2014 update).

    PubMed

    Lomonaco, Vincenzo; Martoglia, Riccardo; Mandreoli, Federica; Anderlucci, Laura; Emmett, Warren; Bicciato, Silvio; Taccioli, Cristian

    2014-01-01

    UCbase 2.0 (http://ucbase.unimore.it) is an update, extension and evolution of UCbase, a Web tool dedicated to the analysis of ultraconserved sequences (UCRs). UCRs are 481 sequences >200 bases sharing 100% identity among human, mouse and rat genomes. They are frequently located in genomic regions known to be involved in cancer or differentially expressed in human leukemias and carcinomas. UCbase 2.0 is a platform-independent Web resource that includes the updated version of the human genome annotation (hg19), information linking disorders to chromosomal coordinates based on the Systematized Nomenclature of Medicine classification, a query tool to search for Single Nucleotide Polymorphisms (SNPs) and a new text box to directly interrogate the database using a MySQL interface. To facilitate the interactive visual interpretation of UCR chromosomal positioning, UCbase 2.0 now includes a graph visualization interface directly linked to UCSC genome browser. Database URL: http://ucbase.unimore.it. © The Author(s) 2014. Published by Oxford University Press.

  2. Retrovirus Integration Database (RID): a public database for retroviral insertion sites into host genomes.

    PubMed

    Shao, Wei; Shan, Jigui; Kearney, Mary F; Wu, Xiaolin; Maldarelli, Frank; Mellors, John W; Luke, Brian; Coffin, John M; Hughes, Stephen H

    2016-07-04

    The NCI Retrovirus Integration Database is a MySQL-based relational database created for storing and retrieving comprehensive information about retroviral integration sites, primarily, but not exclusively, HIV-1. The database is accessible to the public for submission or extraction of data originating from experiments aimed at collecting information related to retroviral integration sites including: the site of integration into the host genome, the virus family and subtype, the origin of the sample, gene exons/introns associated with integration, and proviral orientation. Information about the references from which the data were collected is also stored in the database. Tools are built into the website that can be used to map the integration sites to the UCSC genome browser, to plot the integration site patterns on a chromosome, and to display provirus LTRs in their inserted genome sequence. The website is robust, user friendly, and allows users to query the database and analyze the data dynamically. https://rid.ncifcrf.gov or http://home.ncifcrf.gov/hivdrp/resources.htm.

  3. Easily configured real-time CPOE Pick Off Tool supporting focused clinical research and quality improvement.

    PubMed

    Rosenbaum, Benjamin P; Silkin, Nikolay; Miller, Randolph A

    2014-01-01

    Real-time alerting systems typically warn providers about abnormal laboratory results or medication interactions. For more complex tasks, institutions create site-wide 'data warehouses' to support quality audits and longitudinal research. Sophisticated systems like i2b2 or Stanford's STRIDE utilize data warehouses to identify cohorts for research and quality monitoring. However, substantial resources are required to install and maintain such systems. For more modest goals, an organization desiring merely to identify patients with 'isolation' orders, or to determine patients' eligibility for clinical trials, may adopt a simpler, limited approach based on processing the output of one clinical system, and not a data warehouse. We describe a limited, order-entry-based, real-time 'pick off' tool, utilizing public domain software (PHP, MySQL). Through a web interface the tool assists users in constructing complex order-related queries and auto-generates corresponding database queries that can be executed at recurring intervals. We describe successful application of the tool for research and quality monitoring.
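    The core mechanism described above is a tool that turns user-entered order criteria into database queries and re-runs them at recurring intervals. A minimal sketch of that idea follows, with SQLite standing in for the tool's MySQL back end; the table and column names are invented for illustration.

```python
import sqlite3

# Invented stand-in for the order-entry output table the tool processes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (patient_id TEXT, order_name TEXT, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [("p1", "isolation", "active"),
                  ("p2", "CBC",       "active"),
                  ("p3", "isolation", "discontinued")])

def build_query(criteria):
    """Auto-generate a parameterized SELECT from {column: value} criteria.
    In a real system the column names would be restricted to a whitelist
    drawn from the web form, never taken from free text."""
    where = " AND ".join(f"{col} = ?" for col in criteria)
    return f"SELECT patient_id FROM orders WHERE {where}", list(criteria.values())

def run_pick_off(criteria):
    """Execute the generated query; a scheduler would call this repeatedly."""
    sql, params = build_query(criteria)
    return [row[0] for row in conn.execute(sql, params)]
```

    For example, the paper's 'isolation orders' use case corresponds to `run_pick_off({"order_name": "isolation", "status": "active"})`.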

  4. The database of the Nikolaev Astronomical Observatory as a unit of an international virtual observatory

    NASA Astrophysics Data System (ADS)

    Protsyuk, Yu.; Pinigin, G.; Shulga, A.

    2005-06-01

    Results of the development and organization of the digital database of the Nikolaev Astronomical Observatory (NAO) are presented. At present, three telescopes are connected to the local area network of NAO. All the data obtained, and the results of data processing, are entered into the common NAO database. The daily average volume of new astronomical information obtained from the CCD instruments ranges from 300 MB to 2 GB, depending on the purposes and conditions of the observations. The overwhelming majority of the data are stored in the FITS format. Development and further improvement of storage standards and of data-handling and data-processing procedures are being carried out. It is planned to create an astronomical web portal with interactive access to databases and telescopes. In the future, this resource may become part of an international virtual observatory. Prototype search tools using PHP and MySQL have been developed, and efforts are being made to obtain additional Internet connectivity.

  5. BIRS – Bioterrorism Information Retrieval System

    PubMed Central

    Tewari, Ashish Kumar; Rashi; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Jain, Chakresh Kumar

    2013-01-01

    Bioterrorism is the intentional use of pathogenic strains of microbes to spread terror in a population. There is a definite need to promote research on the development of vaccines, therapeutics and diagnostic methods as part of preparedness for any future bioterror attack. BIRS is an open-access database of collective information on the organisms related to bioterrorism. The database architecture uses current open-source technology, namely PHP 5.3.19, MySQL and an IIS server on the Windows platform. The database stores information on the literature, generic information and unique pathways of about 10 microorganisms involved in bioterrorism. It may serve as a collective repository to accelerate drug discovery and vaccine design against such bioterror agents (microbes). The available data have been validated against various online resources and through literature mining in order to provide users with a comprehensive information system. Availability: The database is freely available at http://www.bioterrorism.biowaves.org PMID:23390356

  6. The Michigan Electronic Library.

    ERIC Educational Resources Information Center

    Davidsen, Susanna L.

    1997-01-01

    Describes the Michigan Electronic Library (MEL), the largest evaluated and organized Web-based library of Internet resources, which was designed to provide a library of electronic information resources selected by librarians. MEL's partnership is explained, the collection is described, and future developments are considered. (LRW)

  7. Database Constraints Applied to Metabolic Pathway Reconstruction Tools

    PubMed Central

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we adjusted and tuned the configurable parameters of the database server to maximize the performance of the communication link to and from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented in a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives acceptable performance for small or medium-sized databases. Nevertheless, tuning database parameters can greatly improve performance and lead to very competitive runtimes. PMID:25202745

  8. Database Reports Over the Internet

    NASA Technical Reports Server (NTRS)

    Smith, Dean Lance

    2002-01-01

    Most of the summer was spent developing software that would permit existing test report forms to be printed over the web on a printer supported by Adobe Acrobat Reader. The data are stored in a DBMS (Database Management System). The client requests the information from the database using an HTML (Hypertext Markup Language) form in a web browser. JavaScript is used with the forms to assist the user and verify the integrity of the entered data. Queries to the database are made in SQL (Structured Query Language), a widely supported standard for querying databases. Java servlets, programs written in the Java programming language running under the control of network server software, interrogate the database and complete a PDF form template kept in a file. The completed report is sent to the browser that requested it. Some errors are sent to the browser in an HTML web page; others are reported to the server. Access to the databases was restricted because the data were being migrated to new DBMS software running on new hardware. However, the SQL queries were made to Microsoft Access, a DBMS that is available on most PCs (Personal Computers). Access does support the SQL commands that were used, and a database was created with Access that contained typical data for the report forms. Some of the problems and features are discussed below.
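    The flow described above (client asks for a report, a server-side program interrogates the database and fills a form template) can be sketched compactly. This Python version is an assumed illustration only: SQLite stands in for the Access database, a string template stands in for the PDF form template, and the table and field names are invented.

```python
import sqlite3
from string import Template

# Stand-in for the report database (Access in the original, SQLite here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_report (report_id TEXT, engineer TEXT, result TEXT)")
conn.execute("INSERT INTO test_report VALUES ('TR-001', 'D. Smith', 'PASS')")

# Stand-in for the PDF form template kept in a file.
REPORT_TEMPLATE = Template("Report $report_id by $engineer: $result")

def render_report(report_id):
    """Servlet role: query the database, fill the template, return the report."""
    row = conn.execute(
        "SELECT report_id, engineer, result FROM test_report WHERE report_id = ?",
        (report_id,)).fetchone()
    if row is None:
        return "Error: report not found"   # would be an HTML error page
    return REPORT_TEMPLATE.substitute(
        report_id=row[0], engineer=row[1], result=row[2])
```

    A real servlet would stream the filled PDF back over HTTP; the parameterized query and template substitution are the essential two steps.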

  9. PiCO QL: A software library for runtime interactive queries on program data

    NASA Astrophysics Data System (ADS)

    Fragkoulis, Marios; Spinellis, Diomidis; Louridas, Panos

    PiCO QL is an open-source C/C++ software library for the real-time, interactive analysis of in-memory data through SQL queries. It exposes a relational view of a system's or application's data structures, which is queryable through SQL. While the application or system is executing, users can input queries through a web-based interface or issue web service requests. Queries execute on the live data structures through the respective relational views. PiCO QL is a good candidate for ad-hoc data analysis in applications and for diagnostics in systems settings. Applications of PiCO QL include the Linux kernel, the Valgrind instrumentation framework, a GIS application, a virtual real-time observatory of stellar objects, and a source code analyser.
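    PiCO QL maps live C/C++ data structures to virtual relational tables queried in place; the Python sketch below only approximates that idea, by snapshotting an in-memory structure into an in-memory SQLite table and querying it with SQL. The `Process` structure and its fields are invented for illustration.

```python
import sqlite3
from dataclasses import dataclass

# An in-memory application data structure we want to query relationally.
@dataclass
class Process:
    pid: int
    name: str
    rss_kb: int

live_processes = [Process(1, "init", 800),
                  Process(42, "dbserver", 51200),
                  Process(77, "editor", 20480)]

def query_processes(sql):
    """Expose `live_processes` as a relational view and run an SQL query on it.
    (PiCO QL avoids this copy via SQLite virtual tables; copying keeps the
    sketch short.)"""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE process (pid INTEGER, name TEXT, rss_kb INTEGER)")
    conn.executemany("INSERT INTO process VALUES (?, ?, ?)",
                     [(p.pid, p.name, p.rss_kb) for p in live_processes])
    return conn.execute(sql).fetchall()
```

    The payoff is that ad-hoc diagnostic questions ("which processes exceed 30 MB resident?") become one-line SQL queries instead of bespoke traversal code.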

  10. The unified database for the fixed target experiment BM@N

    NASA Astrophysics Data System (ADS)

    Gertsenberger, K. V.

    2016-09-01

    The article describes the database developed as the comprehensive data store of the fixed-target experiment BM@N [1] at the Joint Institute for Nuclear Research (JINR) in Dubna. The structure and purposes of the BM@N facility are briefly presented, and the scheme of the unified database and its parameters are described in detail. The BM@N database, implemented on the PostgreSQL database management system (DBMS), provides user access to the current information of the experiment. The interfaces developed for access to the database are also presented: one is implemented as a set of C++ classes for accessing the data without writing SQL statements, the other is a Web interface available on the Web page of the BM@N experiment.
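    The "C++ classes to access the data without SQL statements" describe a classic accessor-wrapper pattern: the class owns the SQL so callers never see it. A hedged Python sketch of that pattern follows; the `run_` table, its columns, and the example values are invented stand-ins, not the actual BM@N schema.

```python
import sqlite3

class RunRecord:
    """Accessor class hiding the SQL behind a plain method call,
    analogous in spirit to the BM@N C++ interface."""

    def __init__(self, conn):
        self._conn = conn

    def get_run(self, run_number):
        """Fetch one run's parameters; caller writes no SQL."""
        row = self._conn.execute(
            "SELECT run_number, beam, energy_gev FROM run_ WHERE run_number = ?",
            (run_number,)).fetchone()
        return None if row is None else dict(
            zip(("run_number", "beam", "energy_gev"), row))

# Invented stand-in database (SQLite here; PostgreSQL in the experiment).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE run_ (run_number INTEGER, beam TEXT, energy_gev REAL)")
conn.execute("INSERT INTO run_ VALUES (1220, 'd', 4.0)")
runs = RunRecord(conn)
```

    Client code then reads as `runs.get_run(1220)`, keeping the DBMS choice and schema details swappable behind the class.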

  11. ExplorEnz: the primary source of the IUBMB enzyme list

    PubMed Central

    McDonald, Andrew G.; Boyce, Sinéad; Tipton, Keith F.

    2009-01-01

    ExplorEnz is the MySQL database that is used for the curation and dissemination of the International Union of Biochemistry and Molecular Biology (IUBMB) Enzyme Nomenclature. A simple web-based query interface is provided, along with an advanced search engine for more complex Boolean queries. The WWW front-end is accessible at http://www.enzyme-database.org, from where downloads of the database as SQL and XML are also available. An associated form-based curatorial application has been developed to facilitate the curation of enzyme data as well as the internal and public review processes that occur before an enzyme entry is made official. Suggestions for new enzyme entries, or modifications to existing ones, can be made using the forms provided at http://www.enzyme-database.org/forms.php. PMID:18776214

  12. DataBase on Demand

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN a number of key database applications run on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment in which to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions traditionally done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, at present the open community version of MySQL and a single-instance Oracle database server. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and possible evolution scenarios.

  13. Strategic Planning for Electronic Resources Management: A Case Study at Gustavus Adolphus College

    ERIC Educational Resources Information Center

    Hulseberg, Anna; Monson, Sarah

    2009-01-01

    Electronic resources, the tools we use to manage them, and the needs and expectations of our users are constantly evolving; at the same time, the roles, responsibilities, and workflow of the library staff who manage e-resources are also in flux. Recognizing a need to be more intentional and proactive about how we manage e-resources, the…

  14. ERM Ideas and Innovations: Digital Repository Management as ERM

    ERIC Educational Resources Information Center

    Pinkas, María M.; Lin, Na

    2014-01-01

    This article describes the application of electronic resources management (ERM) to digital repository management at the Health Sciences and Human Services Library at the University of Maryland, Baltimore. The authors discuss electronic resources management techniques, through the application of "Techniques for Electronic Management,"…

  15. Managing Tradeoffs in the Electronic Age.

    ERIC Educational Resources Information Center

    Wagner, A. Ben

    2003-01-01

    Provides an overview of the development of electronic resources over the past three decades, discussing key features, disadvantages, and benefits of traditional online databases and CD-ROM and Web-based resources. Considers the decision to shift collections and resources toward purely digital formats, ownership of content, licensing, and user…

  16. The Internet School of Medicine: use of electronic resources by medical trainees and the reliability of those resources.

    PubMed

    Egle, Jonathan P; Smeenge, David M; Kassem, Kamal M; Mittal, Vijay K

    2015-01-01

    Electronic sources of medical information are plentiful, and numerous studies have demonstrated the use of the Internet by patients and the variable reliability of these sources. Studies have investigated neither the use of web-based resources by residents, nor the reliability of the information available on these websites. A web-based survey was distributed to surgical residents in Michigan and third- and fourth-year medical students at an American allopathic and osteopathic medical school and a Caribbean allopathic school regarding their preferred sources of medical information in various situations. A set of 254 queries simulating those faced by medical trainees on rounds, on a written examination, or during patient care was developed. The top 5 electronic resources cited by the trainees were evaluated for their ability to answer these questions accurately, using standard textbooks as the point of reference. The respondents reported a wide variety of overall preferred resources. Most of the 73 responding medical trainees favored textbooks or board review books for prolonged studying, but electronic resources are frequently used for quick studying, clinical decision-making questions, and medication queries. The most commonly used electronic resources were UpToDate, Google, Medscape, Wikipedia, and Epocrates. UpToDate and Epocrates had the highest percentage of correct answers (47%) and Wikipedia had the lowest (26%). Epocrates also had the highest percentage of wrong answers (30%), whereas Google had the lowest percentage (18%). All resources had a significant number of questions that they were unable to answer. Though hardcopy books have not been completely replaced by electronic resources, more than half of medical students and nearly half of residents prefer web-based sources of information. For quick questions and studying, both groups prefer Internet sources. 
However, the most commonly used electronic resources fail to answer clinical queries more than half of the time and have an alarmingly high rate of inaccurate information. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  17. JSC earth resources data analysis capabilities available to EOD revision B

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A list and summary description of all Johnson Space Center electronic-laboratory and photographic-laboratory capabilities available to Earth Resources Division personnel for processing earth resources data are provided. The electronic capabilities pertain to facilities and systems that produce electronic and/or photographic products as output; the photographic capabilities pertain to equipment that uses photographic images as input. A table summarizes the processing steps. A general hardware description is presented for each of the data processing systems, and the titles of computer programs are used to identify the capabilities and data flow.

  18. PLMD: An updated data resource of protein lysine modifications.

    PubMed

    Xu, Haodong; Zhou, Jiaqi; Lin, Shaofeng; Deng, Wankun; Zhang, Ying; Xue, Yu

    2017-05-20

    Post-translational modifications (PTMs) occurring at protein lysine residues, or protein lysine modifications (PLMs), play critical roles in regulating biological processes. Due to the explosive expansion of the amount of PLM substrates and the discovery of novel PLM types, here we greatly updated our previous studies, and presented a much more integrative resource of protein lysine modification database (PLMD). In PLMD, we totally collected and integrated 284,780 modification events in 53,501 proteins across 176 eukaryotes and prokaryotes for up to 20 types of PLMs, including ubiquitination, acetylation, sumoylation, methylation, succinylation, malonylation, glutarylation, glycation, formylation, hydroxylation, butyrylation, propionylation, crotonylation, pupylation, neddylation, 2-hydroxyisobutyrylation, phosphoglycerylation, carboxylation, lipoylation and biotinylation. Using the data set, a motif-based analysis was performed for each PLM type, and the results demonstrated that different PLM types preferentially recognize distinct sequence motifs for the modifications. Moreover, various PLMs synergistically orchestrate specific cellular biological processes by mutual crosstalks with each other, and we totally found 65,297 PLM events involved in 90 types of PLM co-occurrences on the same lysine residues. Finally, various options were provided for accessing the data, while original references and other annotations were also present for each PLM substrate. Taken together, we anticipated the PLMD database can serve as a useful resource for further researches of PLMs. PLMD 3.0 was implemented in PHP + MySQL and freely available at http://plmd.biocuckoo.org. Copyright © 2017 Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, and Genetics Society of China. Published by Elsevier Ltd. All rights reserved.

  19. Implementation of an Enterprise Information Portal (EIP) in the Loyola University Health System

    PubMed Central

    Price, Ronald N.; Hernandez, Kim

    2001-01-01

    Loyola University Chicago Stritch School of Medicine and Loyola University Medical Center have long histories in the development of applications to support the institutions' missions of education, research and clinical care. In late 1998, the institutions' application development group undertook an ambitious program to re-architect more than 10 years of legacy application development (30+ core applications) into a unified World Wide Web (WWW) environment. The primary project objectives were to construct an environment that would support the rapid development of n-tier, web-based applications while providing standard methods for user authentication/validation, security/access control and definition of a user's organizational context. The project's efforts resulted in Loyola's Enterprise Information Portal (EIP), which meets the aforementioned objectives. This environment: 1) allows access to other vertical Intranet portals (e.g., electronic medical record, patient satisfaction information and faculty effort); 2) supports end-user desktop customization; and 3) provides a means for standardized application “look and feel.” The portal was constructed utilizing readily available hardware and software. Server hardware consists of multiprocessor (500 MHz Intel Pentium) Compaq 6500 servers with one gigabyte of random access memory and 75 gigabytes of hard disk storage. Microsoft SQL Server was selected to house the portal's internal security data structures. Netscape Enterprise Server was selected for the web server component of the environment, and Allaire's ColdFusion was chosen for the access and application tiers. Total costs for the portal environment were less than $40,000. User data storage is accomplished through two Microsoft SQL Servers and an existing eight-processor SUN Microsystems enterprise server with 750 gigabytes of disk storage running the Sybase relational database manager. Total storage capacity for all systems exceeds one terabyte. 
In the past 12 months, the EIP has supported development of more than 88 applications and is utilized by more than 2,200 users.

  20. 30 CFR 1210.54 - Must I submit this royalty report electronically?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Must I submit this royalty report electronically? 1210.54 Section 1210.54 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND ENFORCEMENT, DEPARTMENT OF THE INTERIOR Natural Resources Revenue FORMS AND REPORTS Royalty Reports-Oil, Gas, and...

  1. Model for Presenting Resources in Scholar's Portal

    ERIC Educational Resources Information Center

    Feeney, Mary; Newby, Jill

    2005-01-01

    Presenting electronic resources to users through a federated search engine introduces unique opportunities and challenges to libraries. This article reports on the decision-making tools and processes used for selecting collections of electronic resources by a project team at the University of Arizona (UA) Libraries for the Association of Research…

  2. A Low-Cost Audio Prescription Labeling System Using RFID for Thai Visually-Impaired People.

    PubMed

    Lertwiriyaprapa, Titipong; Fakkheow, Pirapong

    2015-01-01

    This research aims to develop a low-cost audio prescription labeling (APL) system for visually-impaired people using RFID. The developed APL system comprises the APL machine and the APL software. The APL machine is for visually-impaired people, while the APL software allows caregivers to record all important information into the APL machine. The main objective in developing the APL machine was to reduce cost and size by designing all of the electronic devices to fit on one printed circuit board. It is also designed to be easy to use and to serve as an electronic aid to daily living. The APL software is based on Java and MySQL, both of which operate on various platforms and are easy to develop into commercial software. The APL system was first evaluated by 5 experts, and then by 50 visually-impaired people (30 elders and 20 blind individuals) and 20 caregivers, pharmacists and nurses. From the evaluation results it can be concluded that the proposed APL system can be used effectively to help visually-impaired people with self-medication.

  3. Electronic Reference Works and Library Budgeting Dilemma

    ERIC Educational Resources Information Center

    Lawal, Ibironke O.

    2007-01-01

    The number of electronic resources has climbed up steadily in recent times. Some of these e-resources are reference sources, mostly in Science, Technology and Medicine (STM), which publishers convert to electronic for obvious reasons. The library budgets for materials usually have two main lines, budget for one time purchase (monographs) and…

  4. Electronic Information Resources in Undergraduate Education: An Exploratory Study of Opportunities for Student Learning and Independence.

    ERIC Educational Resources Information Center

    McDowell, Liz

    2002-01-01

    This qualitative interview-based study examines lecturer perspectives on the roles of electronic information resources in undergraduate education. Highlights include electronic academic libraries; changes toward more constructivist approaches to learning; information quality on the Web; plagiarism; information use; information literacy; and…

  5. Opening a Can of wERMS: Texas A&M University's Experiences in Implementing Two Electronic Resource Management Systems

    ERIC Educational Resources Information Center

    Hartnett, Eric; Price, Apryl; Smith, Jane; Barrett, Michael

    2010-01-01

    Over the past few years, Texas A&M University (TAMU) has searched for a way to administer its electronic subscriptions as well as the electronic subscriptions shared among the TAMU System. In this article, we address our attempts to implement an effective electronic resource management system (ERMS), both for subscriptions on the main campus…

  6. An Evaluation of Electronic Product Design Education Using Hypermedia-Resourced Learning Environments

    ERIC Educational Resources Information Center

    Page, Tom; Thorsteinsson, Gisli

    2006-01-01

    The work outlined here provides a comprehensive report and formative observations of the development and implementation of hypermedia resources for learning and teaching used in conjunction with a managed learning environment (MLE). These resources are used to enhance teaching and learning of an electronics module in product design at final year…

  7. Impact of Knowledge Resources Linked to an Electronic Health Record on Frequency of Unnecessary Tests and Treatments

    ERIC Educational Resources Information Center

    Goodman, Kenneth; Grad, Roland; Pluye, Pierre; Nowacki, Amy; Hickner, John

    2012-01-01

    Introduction: Electronic knowledge resources have the potential to rapidly provide answers to clinicians' questions. We sought to determine clinicians' reasons for searching these resources, the rate of finding relevant information, and the perceived clinical impact of the information they retrieved. Methods: We asked general internists, family…

  8. E-Resources Management: How We Positioned Our Organization to Implement an Electronic Resources Management System

    ERIC Educational Resources Information Center

    White, Marilyn; Sanders, Susan

    2009-01-01

    The Information Services Division (ISD) of the National Institute of Standards and Technology (NIST) positioned itself to successfully implement an electronic resources management system. This article highlights the ISD's unique ability to "team" across the organization to realize a common goal, develop leadership qualities in support of…

  9. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    PubMed

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays, writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
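    The abstract says data are distributed to cluster nodes by partitioning a spatial index but does not name the scheme. As an assumed illustration only, the sketch below uses a Z-order (Morton) code, a common spatial index for 3-d voxel data: interleaving coordinate bits keeps spatially nearby blocks on the same key range, hence often the same node.

```python
# Illustrative spatial partitioning via a Z-order (Morton) code.
# This is an assumed scheme for exposition, not necessarily the one
# the Open Connectome cluster uses.

def morton3(x, y, z, bits=10):
    """Interleave the low `bits` bits of x, y, z into one Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

def node_for_block(x, y, z, n_nodes=4, block_bits=6):
    """Assign a 64^3-voxel block to a cluster node by its Morton key."""
    key = morton3(x >> block_bits, y >> block_bits, z >> block_bits)
    return key % n_nodes
```

    Because the key depends only on the block coordinates, every voxel inside one 64-cubed block maps to the same node, which is what lets spatially local computer-vision reads hit a single machine.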

  10. A web application to support telemedicine services in Brazil.

    PubMed

    Barbosa, Ana Karina P; de A Novaes, Magdala; de Vasconcelos, Alexandre M L

    2003-01-01

    This paper describes a system developed to support telemedicine activities in Brazil, a country with serious problems in the delivery of health services. The system is part of the broader Tele-health Project, developed to make health services more accessible to the low-income population of the northeast region. The HealthNet system is based upon a pilot area covering fetal and pediatric cardiology. This article describes the system's conceptual model, including the tele-diagnosis and second-medical-opinion services, as well as its architecture and development stages. The system model covers both asynchronous collaboration tools, such as discussion forums, and synchronous tools, such as videoconference services. Free Web technologies, such as Java and the MySQL database, are used for the implementation. Furthermore, an interface with Electronic Patient Record (EPR) systems using Extensible Markup Language (XML) technology is also proposed. Finally, considerations concerning the development and implementation process are presented.

  11. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    PubMed Central

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays, writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992

  12. Querying Event Sequences by Exact Match or Similarity Search: Design and Empirical Evaluation

    PubMed Central

    Wongsuphasawat, Krist; Plaisant, Catherine; Taieb-Maimon, Meirav; Shneiderman, Ben

    2012-01-01

    Specifying event sequence queries is challenging even for skilled computer professionals familiar with SQL. Most graphical user interfaces for database search use an exact match approach, which is often effective, but near misses may also be of interest. We describe a new similarity search interface, in which users specify a query by simply placing events on a blank timeline and retrieve a similarity-ranked list of results. Behind this user interface is a new similarity measure for event sequences which the users can customize by four decision criteria, enabling them to adjust the impact of missing, extra, or swapped events or the impact of time shifts. We describe a use case with Electronic Health Records based on our ongoing collaboration with hospital physicians. A controlled experiment with 18 participants compared exact match and similarity search interfaces. We report on the advantages and disadvantages of each interface and suggest a hybrid interface combining the best of both. PMID:22379286
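    The paper's similarity measure is customized by four decision criteria (penalties for missing, extra, or swapped events and for time shifts). The sketch below is not that measure; it is a deliberately simplified stand-in showing the shape of the idea: matched events score positively, and user-tunable weights penalize missing and extra events before results are similarity-ranked.

```python
from collections import Counter

def similarity(query, record, w_missing=1.0, w_extra=0.5):
    """Toy similarity score: higher is more similar. The weights play the
    role of the paper's user-adjustable decision criteria (order and time
    shifts are ignored in this simplified stand-in)."""
    q, r = Counter(query), Counter(record)
    matched = sum((q & r).values())
    missing = sum((q - r).values())   # in the query but not the record
    extra = sum((r - q).values())     # in the record but not the query
    return matched - w_missing * missing - w_extra * extra

def rank(query, records):
    """Similarity-ranked result list, best match first."""
    return sorted(records, key=lambda rec: similarity(query, rec), reverse=True)
```

    The ranking, rather than a boolean exact match, is what surfaces the "near misses" the abstract argues are also of interest.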

  13. PACSY, a relational database management system for protein structure and chemical shift analysis.

    PubMed

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  14. Rapid storage and retrieval of genomic intervals from a relational database system using nested containment lists

    PubMed Central

    Wiley, Laura K.; Sivley, R. Michael; Bush, William S.

    2013-01-01

    Efficient storage and retrieval of genomic annotations based on range intervals is necessary, given the amount of data produced by next-generation sequencing studies. The indexing strategies of relational database systems (such as MySQL) greatly inhibit their use in genomic annotation tasks. This has led to the development of stand-alone applications that are dependent on flat-file libraries. In this work, we introduce MyNCList, an implementation of the NCList data structure within a MySQL database. MyNCList enables the storage, update and rapid retrieval of genomic annotations from the convenience of a relational database system. Range-based annotations of 1 million variants are retrieved in under a minute, making this approach feasible for whole-genome annotation tasks. Database URL: https://github.com/bushlab/mynclist PMID:23894185
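
    The nested containment list idea can be shown in a few lines of pure Python: intervals are sorted so that a container precedes its contents, contained intervals are nested under their container, and a query descends only into nodes it overlaps. MyNCList realizes the same structure inside MySQL tables; this in-memory sketch (with a linear rather than binary-search scan of each level) is only an illustration.

```python
# Pure-Python sketch of the Nested Containment List (NCList) structure.

def build_nclist(intervals):
    """Sort by (start asc, end desc) so a container precedes its contents,
    then nest contained intervals under their container."""
    ivs = sorted(intervals, key=lambda iv: (iv[0], -iv[1]))
    root, stack = [], []
    for s, e in ivs:
        node = (s, e, [])
        while stack and not (stack[-1][0] <= s and e <= stack[-1][1]):
            stack.pop()
        (stack[-1][2] if stack else root).append(node)
        stack.append(node)
    return root

def query(nodes, qs, qe, out=None):
    """Collect every stored interval overlapping [qs, qe)."""
    out = [] if out is None else out
    for s, e, children in nodes:
        if s < qe and qs < e:          # half-open overlap test
            out.append((s, e))
            query(children, qs, qe, out)
    return out

root = build_nclist([(1, 10), (2, 5), (3, 4), (6, 9), (12, 15)])
print(query(root, 3, 7))  # [(1, 10), (2, 5), (3, 4), (6, 9)]
```

    The key property is that within one level no interval contains another, so each level can be range-scanned by start coordinate; this is what makes the structure expressible with ordinary B-tree indexes inside a relational database.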

  15. Rapid storage and retrieval of genomic intervals from a relational database system using nested containment lists.

    PubMed

    Wiley, Laura K; Sivley, R Michael; Bush, William S

    2013-01-01

    Efficient storage and retrieval of genomic annotations based on range intervals is necessary, given the amount of data produced by next-generation sequencing studies. The indexing strategies of relational database systems (such as MySQL) greatly inhibit their use in genomic annotation tasks. This has led to the development of stand-alone applications that are dependent on flat-file libraries. In this work, we introduce MyNCList, an implementation of the NCList data structure within a MySQL database. MyNCList enables the storage, update and rapid retrieval of genomic annotations from the convenience of a relational database system. Range-based annotations of 1 million variants are retrieved in under a minute, making this approach feasible for whole-genome annotation tasks. Database URL: https://github.com/bushlab/mynclist.

  16. MaGnET: Malaria Genome Exploration Tool

    PubMed Central

    Sharman, Joanna L.; Gerloff, Dietlind L.

    2013-01-01

    Summary: The Malaria Genome Exploration Tool (MaGnET) is a software tool enabling intuitive ‘exploration-style’ visualization of functional genomics data relating to the malaria parasite, Plasmodium falciparum. MaGnET provides innovative integrated graphic displays for different datasets, including genomic location of genes, mRNA expression data, protein–protein interactions and more. Any selection of genes to explore made by the user is easily carried over between the different viewers for different datasets, and can be changed interactively at any point (without returning to a search). Availability and Implementation: Free online use (Java Web Start) or download (Java application archive and MySQL database; requires local MySQL installation) at http://malariagenomeexplorer.org. Contact: joanna.sharman@ed.ac.uk or dgerloff@ffame.org. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23894142

  17. A SQL-Database Based Meta-CASE System and its Query Subsystem

    NASA Astrophysics Data System (ADS)

    Eessaar, Erki; Sgirka, Rünno

    Meta-CASE systems simplify the creation of CASE (Computer Aided System Engineering) systems. In this paper, we present a meta-CASE system that provides a web-based user interface and uses an object-relational database system (ORDBMS) as its basis. The use of ORDBMSs allows us to integrate different parts of the system and simplify the creation of meta-CASE and CASE systems. ORDBMSs provide a powerful query mechanism. The proposed system allows developers to use queries to evaluate and gradually improve artifacts and to calculate values of software measures. We illustrate the use of the system with the SimpleM modeling language and discuss the use of SQL in the context of queries about artifacts. We have created a prototype of the meta-CASE system by using the PostgreSQL™ ORDBMS and the PHP scripting language.

  18. Database on Demand: insight how to build your own DBaaS

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, three major RDBMS (relational database management system) vendors are offered. In this article we show the actual status of the service after almost three years of operations, give some insight into our software re-engineering and redesign, and discuss its near-future evolution.

  19. WebBio, a web-based management and analysis system for patient data of biological products in hospital.

    PubMed

    Lu, Ying-Hao; Kuo, Chen-Chun; Huang, Yaw-Bin

    2011-08-01

    We selected HTML, PHP and JavaScript as the programming languages to build "WebBio", a web-based system for patient data of biological products, and used MySQL as the database. WebBio is based on the PHP-MySQL suite and is run by an Apache server on a Linux machine. WebBio provides the functions of data management, searching and data analysis for 20 kinds of biological products (plasma expanders, human immunoglobulin and hematological products). There are two particular features in WebBio: (1) pharmacists can rapidly find out which patients used contaminated products, for medication safety, and (2) statistics charts for a specific product can be automatically generated to reduce pharmacists' workload. WebBio has successfully turned traditional paper work into web-based data management.
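
    The contaminated-product lookup described as feature (1) is, at heart, a single relational query over dispensation records. The sketch below uses SQLite and a hypothetical schema; the table and column names are assumptions, not WebBio's actual design.

```python
import sqlite3

# Sketch of the contaminated-lot traceability lookup, against a hypothetical
# schema (table and column names are assumptions, not WebBio's).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dispensation (patient TEXT, product TEXT, lot TEXT, given_on TEXT);
INSERT INTO dispensation VALUES
  ('P001', 'human immunoglobulin', 'LOT-A1', '2011-03-02'),
  ('P002', 'plasma expander',      'LOT-B7', '2011-03-05'),
  ('P003', 'human immunoglobulin', 'LOT-A1', '2011-03-09');
""")
cur.execute("SELECT patient FROM dispensation WHERE lot = ? ORDER BY patient",
            ("LOT-A1",))
patients = [row[0] for row in cur.fetchall()]
print(patients)  # ['P001', 'P003']
```

    Keeping the lot number on every dispensation row is what makes the recall query an indexed equality lookup rather than a trawl through paper records.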

  20. A Comparison of Different Database Technologies for the CMS AsyncStageOut Transfer Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciangottini, D.; Balcas, J.; Mascheroni, M.

    AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB) that manages users' transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user’s output data to a single remote site was part of the job execution, resulting in inefficient use of job slots and an unacceptable failure rate. Currently ASO manages up to 600k files of various sizes per day from more than 500 users per month, spread over more than 100 sites. ASO uses a NoSQL database (CouchDB) as internal bookkeeping and as a way to communicate with other CRAB components. Since ASO/CRAB were put in production in 2014, the number of transfers constantly increased up to a point where the pressure on the central CouchDB instance became critical, creating new challenges for the system scalability, performance, and monitoring. This forced a re-engineering of the ASO application to increase its scalability and lower its operational effort. In this contribution we present a comparison of the performance of the current NoSQL implementation and a new SQL implementation, and how their different strengths and features influenced the design choices and operational experience. We also discuss other architectural changes introduced in the system to handle the increasing load and latency in delivering output to the user.

  1. A comparison of different database technologies for the CMS AsyncStageOut transfer database

    NASA Astrophysics Data System (ADS)

    Ciangottini, D.; Balcas, J.; Mascheroni, M.; Rupeika, E. A.; Vaandering, E.; Riahi, H.; Silva, J. M. D.; Hernandez, J. M.; Belforte, S.; Ivanov, T. T.

    2017-10-01

    AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB) that manages users' transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user’s output data to a single remote site was part of the job execution, resulting in inefficient use of job slots and an unacceptable failure rate. Currently ASO manages up to 600k files of various sizes per day from more than 500 users per month, spread over more than 100 sites. ASO uses a NoSQL database (CouchDB) as internal bookkeeping and as a way to communicate with other CRAB components. Since ASO/CRAB were put in production in 2014, the number of transfers constantly increased up to a point where the pressure on the central CouchDB instance became critical, creating new challenges for the system scalability, performance, and monitoring. This forced a re-engineering of the ASO application to increase its scalability and lower its operational effort. In this contribution we present a comparison of the performance of the current NoSQL implementation and a new SQL implementation, and how their different strengths and features influenced the design choices and operational experience. We also discuss other architectural changes introduced in the system to handle the increasing load and latency in delivering output to the user.

  2. HbA1c values calculated from blood glucose levels using truncated Fourier series and implementation in standard SQL database language.

    PubMed

    Temsch, W; Luger, A; Riedl, M

    2008-01-01

    This article presents a mathematical model to calculate HbA1c values based on self-measured blood glucose and past HbA1c levels, thereby enabling patients to monitor diabetes therapy between scheduled checkups. This method could help physicians to make treatment decisions if implemented in a system where glucose data are transferred to a remote server. The method, however, cannot replace HbA1c measurements; past HbA1c values are needed to gauge the method. The mathematical model of HbA1c formation was developed based on biochemical principles. Unlike an existing HbA1c formula, the new model respects the decreasing contribution of older glucose levels to current HbA1c values. About 12 standard SQL statements embedded in a php program were used to perform Fourier transform. Regression analysis was used to gauge results with previous HbA1c values. The method can be readily implemented in any SQL database. The predicted HbA1c values thus obtained were in accordance with measured values. They also matched the results of the HbA1c formula in the elevated range. By contrast, the formula was too "optimistic" in the range of better glycemic control. Individual analysis of two subjects improved the accuracy of values and reflected the bias introduced by different glucometers and individual measurement habits.
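
    The core modeling idea, that older glucose readings contribute less to the current HbA1c, can be sketched as an age-weighted mean. The paper fits a truncated Fourier series; the exponential decay and the linear glucose-to-HbA1c mapping below are simplifying assumptions for illustration, and the constants would need calibration against the patient's past laboratory HbA1c values, just as the abstract requires.

```python
# Sketch: estimate HbA1c from self-measured glucose with weights that decay
# for older readings. Exponential decay stands in for the paper's truncated
# Fourier weighting (an assumption); constants are illustrative.

def estimated_hba1c(readings, half_life_days=35.0):
    """readings: list of (days_ago, glucose_mg_dl)."""
    num = den = 0.0
    for days_ago, glucose in readings:
        w = 0.5 ** (days_ago / half_life_days)  # older readings count less
        num += w * glucose
        den += w
    mean_glucose = num / den
    # widely used linear mapping eAG = 28.7 * HbA1c - 46.7, inverted here
    return (mean_glucose + 46.7) / 28.7

readings = [(d, 154.0) for d in range(0, 90, 3)]  # constant 154 mg/dl
print(round(estimated_hba1c(readings), 1))
```

    With constant glucose the weights cancel and the estimate reduces to the plain linear mapping, which is a useful sanity check on any weighting scheme one substitutes.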

  3. NRF2-ome: an integrated web resource to discover protein interaction and regulatory networks of NRF2.

    PubMed

    Türei, Dénes; Papp, Diána; Fazekas, Dávid; Földvári-Nagy, László; Módos, Dezső; Lenti, Katalin; Csermely, Péter; Korcsmáros, Tamás

    2013-01-01

    NRF2 is the master transcriptional regulator of oxidative and xenobiotic stress responses. NRF2 has important roles in carcinogenesis, inflammation, and neurodegenerative diseases. We developed an online resource, NRF2-ome, to provide an integrated and systems-level database for NRF2. The database contains manually curated and predicted interactions of NRF2 as well as data from external interaction databases. We integrated the NRF2 interactome with NRF2 target genes, NRF2-regulating TFs, and miRNAs. We connected NRF2-ome to signaling pathways to allow mapping upstream NRF2 regulatory components that could directly or indirectly influence NRF2 activity, totaling 35,967 protein-protein and signaling interactions. The user-friendly website allows researchers without a computational background to search, browse, and download the database. The database can be downloaded in SQL, CSV, BioPAX, SBML, PSI-MI, and Cytoscape CYS file formats. We illustrated the applicability of the website by suggesting a posttranscriptional negative feedback of NRF2 by MAFG protein and raised the possibility of a connection between NRF2 and the JAK/STAT pathway through STAT1 and STAT3. NRF2-ome can also be used as an evaluation tool to help researchers and drug developers to understand the hidden regulatory mechanisms in the complex network of NRF2.

  4. Recent improvements to Binding MOAD: a resource for protein-ligand binding affinities and structures.

    PubMed

    Ahmed, Aqeel; Smith, Richard D; Clark, Jordan J; Dunbar, James B; Carlson, Heather A

    2015-01-01

    For over 10 years, Binding MOAD (Mother of All Databases; http://www.BindingMOAD.org) has been one of the largest resources for high-quality protein-ligand complexes and associated binding affinity data. Binding MOAD has grown at the rate of 1994 complexes per year, on average. Currently, it contains 23,269 complexes and 8156 binding affinities. Our annual updates curate the data using a semi-automated literature search of the references cited within the PDB file, and we have recently upgraded our website and added new features and functionalities to better serve Binding MOAD users. In order to eliminate the legacy application server of the old platform and to accommodate new changes, the website has been completely rewritten in the LAMP (Linux, Apache, MySQL and PHP) environment. The improved user interface incorporates current third-party plugins for better visualization of protein and ligand molecules, and it provides features like sorting, filtering and filtered downloads. In addition to the field-based searching, Binding MOAD now can be searched by structural queries based on the ligand. In order to remove redundancy, Binding MOAD records are clustered in different families based on 90% sequence identity. The new Binding MOAD, with the upgraded platform, features and functionalities, is now equipped to better serve its users. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Chandra monitoring, trends, and response

    NASA Astrophysics Data System (ADS)

    Spitzbart, Brad D.; Wolk, Scott J.; Isobe, Takashi

    2002-12-01

    The Chandra X-ray Observatory was launched in July, 1999 and has yielded extraordinary scientific results. Behind the scenes, our Monitoring and Trends Analysis (MTA) system has proven to be a valuable resource. With three years' worth of on-orbit data, we have available a vast array of both telescope diagnostic information and analysis of scientific data to assess Observatory performance. As part of Chandra's Science Operations Team (SOT), the primary goal of MTA is to provide tools for effective decision making leading to the most efficient production of quality science output from the Observatory. We occupy a middle ground between flight operations, chiefly concerned with the health and safety of the spacecraft, and validation and verification, concerned with the scientific validity of the data taken and whether or not they fulfill the observer's requirements. In that role we provide and receive support from systems engineers, instrument experts, operations managers, and scientific users. MTA tools, products, and services include real-time monitoring and alert generation for the most mission critical components, long term trending of all spacecraft systems, detailed analysis of various subsystems for life expectancy or anomaly resolution, and creating and maintaining a large SQL database of relevant information. This is accomplished through the use of a wide variety of input data sources and flexible, accessible programming and analysis techniques. This paper will discuss the overall design of the system, its evolution and the resources available.

  6. NASA Parts Selection List (NPSL) WWW Site http://nepp.nasa.gov/npsl

    NASA Technical Reports Server (NTRS)

    Brusse, Jay

    2000-01-01

    The NASA Parts Selection List (NPSL) is an on-line resource for electronic parts selection tailored for use by spaceflight projects. The NPSL provides a list of commonly used electronic parts that have a history of satisfactory use in spaceflight applications. The objective of this www site is to provide NASA projects, contractors, university experimenters, et al. with an easy-to-use resource that provides a baseline of electronic parts from which designers are encouraged to select. The NPSL is an ongoing resource produced by Code 562 in support of the NASA HQ funded NASA Electronic Parts and Packaging (NEPP) Program. The NPSL is produced as an electronic format deliverable made available via the referenced www site administered by Code 562. The NPSL does not provide information pertaining to patented or proprietary information. All of the information contained in the NPSL is available through various other public domain resources such as US Military procurement specifications for electronic parts, NASA GSFC's Preferred Parts List (PPL-21), and NASA's Standard Parts List (MIL-STD-975).

  7. Mediagraphy: Print and Nonprint Resources.

    ERIC Educational Resources Information Center

    Educational Media and Technology Yearbook, 1998

    1998-01-01

    Lists educational media-related journals, books, ERIC documents, journal articles, and nonprint resources classified by Artificial Intelligence, Robotics, Electronic Performance Support Systems; Computer-Assisted Instruction; Distance Education; Educational Research; Educational Technology; Electronic Publishing; Information Science and…

  8. Video Killed the Radio Star: Language Students' Use of Electronic Resources-Reading or Viewing?

    ERIC Educational Resources Information Center

    Kiliçkaya, Ferit

    2016-01-01

    The current study aimed to investigate language students' use of print and electronic resources for their research papers required in research techniques class, focusing on which reading strategies they used while reading these resources. The participants of the study were 90 sophomore students enrolled in the research techniques class offered at…

  9. Managing Selection for Electronic Resources: Kent State University Develops a New System to Automate Selection

    ERIC Educational Resources Information Center

    Downey, Kay

    2012-01-01

    Kent State University has developed a centralized system that manages the communication and work related to the review and selection of commercially available electronic resources. It is an automated system that tracks the review process, provides selectors with price and trial information, and compiles reviewers' feedback about the resource. It…

  10. Astronomical databases of Nikolaev Observatory

    NASA Astrophysics Data System (ADS)

    Protsyuk, Y.; Mazhaev, A.

    2008-07-01

    Several astronomical databases were created at Nikolaev Observatory during the last years. The databases are built using the MySQL engine and PHP scripts. They are available on the NAO web site http://www.mao.nikolaev.ua.

  11. Connecting Print and Electronic Titles: An Integrated Approach at the University of Nebraska-Lincoln

    ERIC Educational Resources Information Center

    Wolfe, Judith; Konecky, Joan Latta; Boden, Dana W. R.

    2011-01-01

    Libraries make heavy investments in electronic resources, with many of these resources reflecting title changes, bundled subsets, or content changes of formerly print material. These changes can distance the electronic format from its print origins, creating discovery and access issues. A task force was formed to explore the enhancement of catalog…

  12. Library Instruction in the Electronic Library: The University of Arizona's Electronic Library Education Centers.

    ERIC Educational Resources Information Center

    Glogoff, Stuart

    1995-01-01

    Discusses two Electronic Library Education Centers (ELECs) created at the University of Arizona to improve library instruction in the use of online resources. Examines costs of developing ELECs; technical changes experienced; and benefits to users and librarians. A sidebar by Abbie J. Basile identifies Internet resources for planning and/or…

  13. Library use and information-seeking behavior of veterinary medical students revisited in the electronic environment.

    PubMed

    Pelzer, N L; Wiese, W H; Leysen, J M

    1998-07-01

    Veterinary medical students at Iowa State University were surveyed in January of 1997 to determine their general use of the Veterinary Medical Library and how they sought information in an electronic environment. Comparisons were made between this study and one conducted a decade ago to determine the effect of the growth in electronic resources on student library use and information-seeking behavior. The basic patterns of student activities in the library, resources used to find current information, and resources anticipated for future education needs remained unchanged. The 1997 students used the library most frequently for photocopying, office supplies, and studying coursework; they preferred textbooks and handouts as sources of current information. However, when these students went beyond textbooks and handouts to seek current information, a major shift was seen from the use of print indexes and abstracts in 1987 towards the use of computerized indexes and other electronic resources in 1997. Almost 60% of the students reported using the Internet for locating current information. Overall use of electronic materials was highest among a group of students receiving the problem-based learning method of instruction. Most of the students surveyed in 1997 indicated that electronic resources would have some degree of importance to them for future education needs. The electronic environment has provided new opportunities for information professionals to help prepare future veterinarians, some of whom will be practicing in remote geographical locations, to access the wealth of information and services available on the Internet and Web.

  14. Toxicity ForeCaster (ToxCast™) Data

    EPA Pesticide Factsheets

    Data is organized into different data sets and includes descriptions of ToxCast chemicals and assays and files summarizing the screening results, a MySQL database, chemicals screened through Tox21, and available data generated from animal toxicity studies.

  15. 76 FR 26729 - Ceridian Corporation; Analysis of Proposed Consent Order to Aid Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-09

    ... result of these failures, hackers executed an SQL injection attack on the Powerpay Web site and Web application. Through this attack, the hackers found personal information stored in Powerpay on Ceridian's...

  16. Similarity extraction mechanism concerning historical personalities based on SQL queries in an RDBMS environment

    NASA Astrophysics Data System (ADS)

    Barouchou, Alexandra; Dendrinos, Markos

    2015-02-01

    An interesting issue in the domain of the history of science and ideas is the concept of similarity between historical personalities. Similar objects of research of philosophers and scientists indicate prospective influences, arising either from reading one another's works or from meetings, communication or even cooperation. The keywords extracted from their works, together with their placement in a philosophical and scientific term taxonomy, play a key methodological role in surfacing the sought similarities. The case study examined in the framework of this paper concerns scientists and philosophers who lived in the ancient Greek or Renaissance periods and dealt, in at least one work, with the subject of God. All the available data (scientists, studies, recorded relations between scientists, keywords, and thematic hierarchy) have been organized in an RDBMS environment, aiming at the emergence of similarities and influences between scientists through properly created SQL queries based on date and thematic hierarchy criteria.
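
    The heart of such a similarity query is a self-join on a keyword table: two personalities are candidates for influence when they share terms. The sketch below uses SQLite with an illustrative schema and toy data; names and terms are assumptions, not the paper's dataset.

```python
import sqlite3

# Sketch: surface candidate influences as an SQL self-join over a keyword
# table, in the spirit of the abstract. Schema and data are illustrative.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE keyword (scientist TEXT, term TEXT);
INSERT INTO keyword VALUES
  ('Plato',  'God'), ('Plato',  'soul'), ('Plato', 'cosmos'),
  ('Ficino', 'God'), ('Ficino', 'soul'),
  ('Newton', 'God'), ('Newton', 'motion');
""")
# count shared terms per pair of scientists
cur.execute("""
SELECT a.scientist, b.scientist, COUNT(*) AS shared
FROM keyword a
JOIN keyword b ON a.term = b.term AND a.scientist < b.scientist
GROUP BY a.scientist, b.scientist
ORDER BY shared DESC, a.scientist, b.scientist
""")
rows = cur.fetchall()
print(rows)  # [('Ficino', 'Plato', 2), ('Ficino', 'Newton', 1), ('Newton', 'Plato', 1)]
```

    The `a.scientist < b.scientist` condition keeps each unordered pair once; date and taxonomy criteria from the paper would enter as further join conditions on scientist and term tables.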

  17. Evaluation of the performance of open-source RDBMS and triplestores for storing medical data over a web service.

    PubMed

    Kilintzis, Vassilis; Beredimas, Nikolaos; Chouvarda, Ioanna

    2014-01-01

    An integral part of a system that manages medical data is the persistent storage engine. For almost twenty-five years, Relational Database Management Systems (RDBMS) were considered the obvious choice, yet today new technologies have emerged that deserve attention as possible alternatives. Triplestores store information in terms of RDF triples without necessarily binding to a specific predefined structural model. In this paper we present an attempt to compare the performance of the Apache JENA-Fuseki and Virtuoso Universal Server 6 triplestores with that of the MySQL 5.6 RDBMS for storing and retrieving medical information that is communicated as RDF/XML ontology instances over a RESTful web service. The results show that the performance, calculated as the average time of storing and retrieving instances, is significantly better using Virtuoso Server, while MySQL performed better than Fuseki.
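
    The benchmark methodology, average wall-clock time per store and per retrieve operation, can be sketched in a few lines. SQLite stands in for all three engines here so the example is self-contained; the paper's actual harness drove MySQL, Fuseki and Virtuoso over a RESTful web service.

```python
import sqlite3
import time

# Sketch of the benchmark methodology: average time to store and to retrieve
# records. SQLite is used for self-containment; the paper benchmarks MySQL
# against the Fuseki and Virtuoso triplestores.

def time_op(fn, n):
    """Average seconds per call of fn(i) over n calls."""
    t0 = time.perf_counter()
    for i in range(n):
        fn(i)
    return (time.perf_counter() - t0) / n

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE obs (id INTEGER PRIMARY KEY, payload TEXT)")

avg_store = time_op(
    lambda i: cur.execute("INSERT INTO obs VALUES (?, ?)", (i, f"reading-{i}")), 1000)
avg_fetch = time_op(
    lambda i: cur.execute("SELECT payload FROM obs WHERE id = ?", (i,)).fetchone(), 1000)
print(f"store {avg_store * 1e6:.1f} us, fetch {avg_fetch * 1e6:.1f} us")
```

    Averaging over many operations, as the paper does, smooths out connection setup and caching effects that would dominate a single-operation measurement.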

  18. Data Processing on Database Management Systems with Fuzzy Query

    NASA Astrophysics Data System (ADS)

    Şimşek, Irfan; Topuz, Vedat

    In this study, a fuzzy query tool (SQLf) for non-fuzzy database management systems was developed, and sample fuzzy queries were run on real data with the tool. The performance of SQLf was tested with data about Marmara University students' food grants. The food grant data were collected in a MySQL database via a web form in which students described their social and economic conditions for the food grant request. This form consists of questions with fuzzy and crisp answers. The main purpose of the fuzzy query is to determine the students who deserve the grant. SQLf easily found the eligible students for the grant through predefined fuzzy values. The fuzzy query tool (SQLf) could be used just as easily with other database systems such as Oracle and SQL Server.
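
    The essence of a fuzzy query over a crisp database can be sketched as a membership function applied to rows fetched with ordinary SQL. The membership shape, threshold and schema below are illustrative assumptions, not SQLf's actual implementation.

```python
import sqlite3

# Sketch of a fuzzy predicate ("income is low") evaluated on top of a crisp
# SQL result set, in the spirit of SQLf. Membership function, threshold and
# schema are illustrative assumptions.

def low_income(x):
    """Membership degree: 1 below 300, falling linearly to 0 at 600."""
    if x <= 300:
        return 1.0
    return max(0.0, (600 - x) / 300)

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE student (name TEXT, family_income REAL);
INSERT INTO student VALUES ('A', 250), ('B', 450), ('C', 700);
""")
rows = cur.execute("SELECT name, family_income FROM student").fetchall()
# rank by membership degree, then keep those above a crisp cutoff
ranked = sorted(((low_income(inc), name) for name, inc in rows), reverse=True)
eligible = [name for deg, name in ranked if deg >= 0.4]
print(eligible)  # ['A', 'B']
```

    The crisp SQL engine only ever sees an ordinary SELECT; the fuzziness lives entirely in the post-processing layer, which is what lets such a tool sit on top of MySQL, Oracle or SQL Server unchanged.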

  19. Computer-Aided Clinical Trial Recruitment Based on Domain-Specific Language Translation: A Case Study of Retinopathy of Prematurity

    PubMed Central

    2017-01-01

    Reusing the data from healthcare information systems can effectively facilitate clinical trials (CTs). How to select candidate patients eligible for CT recruitment criteria is a central task. Related work either depends on a DBA (database administrator) to convert the recruitment criteria to native SQL queries or involves data mapping between a standard ontology/information model and individual data source schemas. This paper proposes an alternative computer-aided CT recruitment paradigm, based on syntax translation between different DSLs (domain-specific languages). In this paradigm, the CT recruitment criteria are first formally represented as production rules. The referenced rule variables are all from the underlying database schema. Then the production rule is translated to an intermediate query-oriented DSL (e.g., LINQ). Finally, the intermediate DSL is directly mapped to native database queries (e.g., SQL), a step automated by ORM (object-relational mapping). PMID:29065644
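
    The translation step can be sketched directly: a recruitment rule expressed as a conjunction of (column, operator, value) conditions over the database schema becomes a parameterized SQL query. The paper routes this through an intermediate LINQ-like DSL and ORM; the one-step translation, the table, and the retinopathy-of-prematurity-style rule below are illustrative assumptions.

```python
import sqlite3

# Sketch: translate a recruitment rule (a conjunction of conditions over
# database columns) into a parameterized SQL query. The table, columns and
# rule are hypothetical; the paper uses an intermediate LINQ-like DSL.

OPS = {"eq": "=", "lt": "<", "le": "<=", "gt": ">", "ge": ">="}

def rule_to_sql(table, conjuncts):
    """conjuncts: list of (column, op, value) tuples, ANDed together."""
    where = " AND ".join(f"{col} {OPS[op]} ?" for col, op, _ in conjuncts)
    params = [v for _, _, v in conjuncts]
    return f"SELECT id FROM {table} WHERE {where}", params

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE infant (id INTEGER PRIMARY KEY, gest_age_weeks REAL, birth_weight_g REAL);
INSERT INTO infant VALUES (1, 28.0, 1100), (2, 31.0, 1600), (3, 29.5, 1400);
""")
# illustrative screening rule: gestational age < 30 weeks AND weight < 1500 g
sql, params = rule_to_sql("infant", [("gest_age_weeks", "lt", 30.0),
                                     ("birth_weight_g", "lt", 1500.0)])
matched = [r[0] for r in cur.execute(sql, params)]
print(matched)  # [1, 3]
```

    Binding the values as parameters rather than splicing them into the SQL text keeps the generated queries safe and lets the database cache one prepared statement per rule shape.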

  20. PACSY, a relational database management system for protein structure and chemical shift analysis

    PubMed Central

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636

  1. Vulnerability Assessment of IPv6 Websites to SQL Injection and Other Application Level Attacks

    PubMed Central

    Cho, Ying-Chiang; Pan, Jen-Yi

    2013-01-01

    Given the proliferation of internet connected devices, IPv6 has been proposed to replace IPv4. Aside from providing a larger address space which can be assigned to internet enabled devices, it has been suggested that the IPv6 protocol offers increased security due to the fact that with the large number of addresses available, standard IP scanning attacks will no longer become feasible. However, given the interest in attacking organizations rather than individual devices, most initial points of entry onto an organization's network and their attendant devices are visible and reachable through web crawling techniques, and, therefore, attacks on the visible application layer may offer ways to compromise the overall network. In this evaluation, we provide a straightforward implementation of a web crawler in conjunction with a benign black box penetration testing system and analyze the ease at which SQL injection attacks can be carried out. PMID:24574863
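
    The attack class being probed can be demonstrated in miniature: concatenating attacker-controlled text into an SQL statement lets a tautology payload leak every row, while binding the same text as a parameter does not. The table and payload are illustrative.

```python
import sqlite3

# Demonstration of the SQL injection class discussed above: string-built SQL
# versus a parameterized query. Table contents and payload are illustrative.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE users (name TEXT, secret TEXT);
INSERT INTO users VALUES ('alice', 's3cr3t'), ('bob', 'hunter2');
""")
payload = "x' OR '1'='1"  # classic tautology injection

# vulnerable: attacker-controlled text concatenated into the statement
leaked = cur.execute(f"SELECT secret FROM users WHERE name = '{payload}'").fetchall()
print(len(leaked))  # 2 -- every row leaks

# safe: the driver binds the payload as data, not as SQL syntax
safe = cur.execute("SELECT secret FROM users WHERE name = ?", (payload,)).fetchall()
print(len(safe))  # 0 -- no user is literally named the payload string
```

    A black-box scanner of the kind described looks for exactly this asymmetry: responses that change when a tautology payload is appended indicate the concatenated, vulnerable pattern.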

  2. A UML Profile for Developing Databases that Conform to the Third Manifesto

    NASA Astrophysics Data System (ADS)

    Eessaar, Erki

    The Third Manifesto (TTM) presents the principles of a relational database language that is free of the deficiencies and ambiguities of SQL. There are database management systems that are created according to TTM. Developers need tools that support the development of databases by using these database management systems. UML is a widely used visual modeling language. It provides a built-in extension mechanism that makes it possible to extend UML by creating profiles. In this paper, we introduce a UML profile for designing databases that correspond to the rules of TTM. We created the first version of the profile by translating existing profiles of SQL database design. After that, we extended and improved the profile. We implemented the profile by using the UML CASE system StarUML™. We present an example of using the new profile. In addition, we describe problems that occurred during the profile development.

  3. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    NASA Astrophysics Data System (ADS)

    Potekhin, M.; ATLAS Collaboration

    2012-06-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking, PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R&D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as the data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic loads.
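    The abstract does not spell out the Cassandra data design it refers to, but the core idea of efficiently indexing time-series monitoring data in a key-value store is time bucketing: group a day's events under one row key so they can be fetched with a single lookup. A minimal sketch of that idea, with a plain dict standing in for a Cassandra-style column family (all names and sample figures are illustrative, not from the paper):

```python
from collections import defaultdict
from datetime import datetime, timezone

# Toy key-value store standing in for a Cassandra-style column family.
# Rows are keyed by (metric, day-bucket) so one day's monitoring events
# land in a single wide row and can be fetched with one key lookup.
store = defaultdict(list)

def day_bucket(ts: float) -> str:
    """Map a Unix timestamp to a YYYYMMDD partition key."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y%m%d")

def insert_event(metric: str, ts: float, payload: dict) -> None:
    store[(metric, day_bucket(ts))].append((ts, payload))

def events_for_day(metric: str, ts: float) -> list:
    # Columns within a row are returned sorted by timestamp.
    return sorted(store[(metric, day_bucket(ts))])

insert_event("jobs_completed", 1308000000.0, {"site": "CERN", "count": 520000})
insert_event("jobs_completed", 1308003600.0, {"site": "BNL", "count": 480000})
print(len(events_for_day("jobs_completed", 1308000000.0)))  # 2
```

    The same layout in real Cassandra would use the day bucket as the partition key and the timestamp as the clustering column.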

  4. A DBTechNet course module on database SQL transactions for VET teachers training and higher education informatics education

    NASA Astrophysics Data System (ADS)

    Dervos, D. A.; Skourlas, C.; Laiho, M.

    2015-02-01

    The "DBTech VET Teachers" project is a Leonardo da Vinci Multilateral Transfer of Innovation project co-financed by the European Commission's Lifelong Learning Programme. The aim of the project is to renew the teaching of database technologies in VET (Vocational Education and Training) institutes, on the basis of the current, real needs of the ICT industry in Europe. VET teachers are trained on the systems used in working life and are taught to guide students toward learning by verifying. In this framework, a course module on SQL transactions was prepared and offered. In this paper we present and briefly discuss some qualitative/quantitative data collected from its first pilot offering to an international audience in Greece during May-June 2013. The questionnaire/evaluation results and the types of participants who attended the course offering are presented. Conclusions are also presented.
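    The course content itself is not reproduced in the abstract, but the central concept such a module teaches, transaction atomicity with COMMIT and ROLLBACK, can be sketched in a few lines. This example uses SQLite for portability; the account table and transfer logic are invented for illustration, not taken from the DBTech materials:

```python
import sqlite3

# Two-step transfer wrapped in a transaction: either both UPDATEs
# commit, or the rollback undoes the partial change.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO accounts VALUES (?, ?)",
                [("alice", 100), ("bob", 50)])
con.commit()

def transfer(con, src, dst, amount):
    try:
        con.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                    (amount, src))
        # Application-level guard: abort if the source went negative.
        (bal,) = con.execute("SELECT balance FROM accounts WHERE name = ?",
                             (src,)).fetchone()
        if bal < 0:
            raise ValueError("insufficient funds")
        con.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                    (amount, dst))
        con.commit()
    except Exception:
        con.rollback()  # undo the uncommitted partial update

transfer(con, "alice", "bob", 999)  # fails: rolled back, nothing changes
balances = dict(con.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50}
```

    Learning by verifying, as the project advocates, means running exactly this kind of experiment and checking that the failed transfer leaves both balances untouched.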

  5. GEOmetadb: powerful alternative search engine for the Gene Expression Omnibus

    PubMed Central

    Zhu, Yuelin; Davis, Sean; Stephens, Robert; Meltzer, Paul S.; Chen, Yidong

    2008-01-01

    The NCBI Gene Expression Omnibus (GEO) represents the largest public repository of microarray data. However, finding data in GEO can be challenging. We have developed GEOmetadb in an attempt to make querying the GEO metadata both easier and more powerful. All GEO metadata records, as well as the relationships between them, are parsed and stored in a local MySQL database. A powerful, flexible web search interface with several convenient utilities provides query capabilities not available via NCBI tools. In addition, a Bioconductor package, GEOmetadb, that utilizes a SQLite export of the entire GEOmetadb database is also available, rendering the entire GEO database accessible with the full power of SQL-based queries from within R. Availability: The web interface and SQLite databases are available at http://gbnci.abcc.ncifcrf.gov/geo/. The Bioconductor package is available via the Bioconductor project. The corresponding MATLAB implementation is also available at the same website. Contact: yidong@mail.nih.gov PMID:18842599
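    The kind of SQL-level metadata query that GEOmetadb enables can be illustrated with a toy SQLite database. The two-table layout, column names, and sample records below are simplified assumptions for the sketch, not the actual GEOmetadb schema:

```python
import sqlite3

# Toy metadata database: series (gse) and samples (gsm), joined to
# answer "which cancer-related series have human samples, and how many?"
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE gse (gse TEXT PRIMARY KEY, title TEXT);
CREATE TABLE gsm (gsm TEXT PRIMARY KEY, series_id TEXT, organism TEXT);
INSERT INTO gse VALUES ('GSE100', 'Breast cancer expression profiling');
INSERT INTO gsm VALUES ('GSM1', 'GSE100', 'Homo sapiens');
INSERT INTO gsm VALUES ('GSM2', 'GSE100', 'Homo sapiens');
""")
rows = con.execute("""
    SELECT s.gse, s.title, COUNT(*) AS n_samples
    FROM gse s JOIN gsm m ON m.series_id = s.gse
    WHERE m.organism = 'Homo sapiens' AND s.title LIKE '%cancer%'
    GROUP BY s.gse
""").fetchall()
print(rows)  # [('GSE100', 'Breast cancer expression profiling', 2)]
```

    Joins, LIKE filters, and aggregation of this sort are exactly what keyword search on the NCBI site cannot express directly.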

  6. Vulnerability assessment of IPv6 websites to SQL injection and other application level attacks.

    PubMed

    Cho, Ying-Chiang; Pan, Jen-Yi

    2013-01-01

    Given the proliferation of internet-connected devices, IPv6 has been proposed to replace IPv4. Aside from providing a larger address space that can be assigned to internet-enabled devices, it has been suggested that the IPv6 protocol offers increased security, because with the large number of addresses available, standard IP scanning attacks are no longer feasible. However, given the interest in attacking organizations rather than individual devices, most initial points of entry onto an organization's network and their attendant devices are visible and reachable through web crawling techniques, and, therefore, attacks on the visible application layer may offer ways to compromise the overall network. In this evaluation, we provide a straightforward implementation of a web crawler in conjunction with a benign black box penetration testing system and analyze the ease with which SQL injection attacks can be carried out.
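    The flaw class such a crawler probes for is easy to demonstrate in miniature. In the sketch below (schema and payload invented for illustration), a query built by string concatenation lets attacker-supplied input rewrite the WHERE clause, while the parameterized form binds the same input as a plain literal:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
con.execute("INSERT INTO users VALUES ('alice', 0)")

payload = "' OR '1'='1"  # classic tautology injection

# Vulnerable: the payload becomes part of the SQL text, so the
# WHERE clause matches every row.
unsafe = con.execute(
    "SELECT count(*) FROM users WHERE name = '%s'" % payload).fetchone()[0]

# Safe: the driver binds the payload as data, so no row matches.
safe = con.execute(
    "SELECT count(*) FROM users WHERE name = ?", (payload,)).fetchone()[0]

print(unsafe, safe)  # 1 0
```

    A black-box tester works from the outside in the same spirit: it submits payloads like this through web forms and watches for responses that betray the concatenated form.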

  7. Abstraction of the Relational Model from a Department of Veterans Affairs DHCP Database: Bridging Theory and Working Application

    PubMed Central

    Levy, C.; Beauchamp, C.

    1996-01-01

    This poster describes the methods used and the working prototype developed from an abstraction of the relational model from the VA's hierarchical DHCP database. Overlaying the relational model on DHCP permits multiple user views of the physical data structure, enhances access to the database by providing a link to commercial (SQL-based) software, and supports a conceptual managed care data model based on primary and longitudinal patient care. The goal of this work was to create a relational abstraction of the existing hierarchical database; to construct, using SQL data definition language, user views of the database which reflect the clinical conceptual view of DHCP; and to allow the user to work directly with the logical view of the data using GUI-based commercial software of their choosing. The workstation is intended to serve as a platform from which a managed care information model could be implemented and evaluated.
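    The central mechanism described, SQL views that present a clinician-oriented logical model over a different physical layout, can be sketched with SQLite. The table and column names below are illustrative stand-ins, not the actual VA FileMan/DHCP structures:

```python
import sqlite3

# Physical tables flattened from a hierarchical source, plus a view
# (created with SQL DDL) exposing the clinical logical model.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient_file (ien INTEGER PRIMARY KEY, name TEXT, dob TEXT);
CREATE TABLE visit_file (ien INTEGER, patient_ien INTEGER, clinic TEXT);
INSERT INTO patient_file VALUES (1, 'DOE,JOHN', '1950-01-01');
INSERT INTO visit_file VALUES (10, 1, 'PRIMARY CARE');
CREATE VIEW patient_visits AS
    SELECT p.name, p.dob, v.clinic
    FROM patient_file p JOIN visit_file v ON v.patient_ien = p.ien;
""")
rows = con.execute("SELECT * FROM patient_visits").fetchall()
print(rows)  # [('DOE,JOHN', '1950-01-01', 'PRIMARY CARE')]
```

    Any SQL-capable commercial tool can then query `patient_visits` without knowing how the underlying records are physically stored, which is the bridge the poster describes.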

  8. Hosting and publishing astronomical data in SQL databases

    NASA Astrophysics Data System (ADS)

    Galkin, Anastasia; Klar, Jochen; Riebe, Kristin; Matokevic, Gal; Enke, Harry

    2017-04-01

    In astronomy, terabytes and petabytes of data are produced by ground instruments, satellite missions and simulations. At the Leibniz-Institute for Astrophysics Potsdam (AIP) we host and publish terabytes of cosmological simulation and observational data. The public archive at AIP has now reached a size of 60 TB, continues to grow, and has supported numerous scientific papers. The web framework Daiquiri offers a dedicated web interface for each of the hosted scientific databases. Scientists all around the world run SQL queries, which include specific astrophysical functions, and get their desired data in reasonable time. Daiquiri supports the scientific projects by offering a number of administration tools such as database and user management, contact messages to the staff, and support for the organization of meetings and workshops. The web pages can be customized, and the WordPress integration supports the participating scientists in maintaining the documentation and the projects' news sections.

  9. Computer-Aided Clinical Trial Recruitment Based on Domain-Specific Language Translation: A Case Study of Retinopathy of Prematurity.

    PubMed

    Zhang, Yinsheng; Zhang, Guoming; Shang, Qian

    2017-01-01

    Reusing data from healthcare information systems can effectively facilitate clinical trials (CTs). Selecting candidate patients who meet CT recruitment criteria is a central task. Related work either depends on a DBA (database administrator) to convert the recruitment criteria into native SQL queries or involves data mapping between a standard ontology/information model and each individual data source schema. This paper proposes an alternative computer-aided CT recruitment paradigm based on syntax translation between different DSLs (domain-specific languages). In this paradigm, the CT recruitment criteria are first formally represented as production rules, with the referenced rule variables all drawn from the underlying database schema. The production rule is then translated to an intermediate query-oriented DSL (e.g., LINQ). Finally, the intermediate DSL is mapped directly to native database queries (e.g., SQL), automated by ORM (object-relational mapping).
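    The translation step at the heart of this pipeline, rendering a recruitment rule over schema columns as a native query, can be sketched mechanically. The rule encoding, table name, and the ROP-style criterion below are illustrative assumptions, not the paper's actual DSL:

```python
# Translate a recruitment rule, encoded as (column, operator, value)
# triples over the database schema, into a parameterized SQL query.
OPS = {"lt": "<", "le": "<=", "gt": ">", "ge": ">=", "eq": "="}

def rule_to_sql(table: str, conditions: list) -> tuple:
    """Return (sql_text, bind_parameters) for an AND-joined rule."""
    where = " AND ".join(f"{col} {OPS[op]} ?" for col, op, _ in conditions)
    params = tuple(v for _, _, v in conditions)
    return f"SELECT patient_id FROM {table} WHERE {where}", params

# Example criterion: gestational age < 32 weeks AND birth weight <= 1500 g
sql, params = rule_to_sql("newborns",
                          [("gestational_age_weeks", "lt", 32),
                           ("birth_weight_g", "le", 1500)])
print(sql)     # SELECT patient_id FROM newborns WHERE ...
print(params)  # (32, 1500)
```

    In the paper's architecture an ORM layer performs the final DSL-to-SQL mapping; the sketch collapses that stage into one function to show the shape of the transformation.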

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalili, Farid; Danilishin, Stefan; Mueller-Ebhardt, Helge

    We consider enhancing the sensitivity of future gravitational-wave detectors by using a double optical spring. When the power, detuning and bandwidth of the two carriers are chosen appropriately, the effect of the double optical spring can be described as a 'negative inertia', which cancels the positive inertia of the test masses and thus increases their response to gravitational waves. This allows us to surpass the free-mass standard quantum limit (SQL) over a broad frequency band through signal amplification, rather than noise cancellation, which has been the case for all broadband SQL-beating schemes so far considered for gravitational-wave detectors. The merit of such signal amplification schemes lies in the fact that they are less susceptible to optical losses than noise-cancellation schemes. We show that it is feasible to demonstrate such an effect with the Gingin High Optical Power Test Facility, and it can eventually be implemented in future advanced GW detectors.

  11. Realizing what's essential: a case study on integrating electronic journal management into a print-centric technical services department.

    PubMed

    Dollar, Daniel M; Gallagher, John; Glover, Janis; Marone, Regina Kenny; Crooker, Cynthia

    2007-04-01

    To support migration from print to electronic resources, the Cushing/Whitney Medical Library at Yale University reorganized its Technical Services Department to focus on managing electronic resources. The library hired consultants to help plan the changes and to present recommendations for integrating electronic resource management into every position. The library task force decided to focus initial efforts on the periodical collection. To free staff time to devote to electronic journals, most of the print subscriptions were switched to online only and new workflows were developed for e-journals. Staff learned new responsibilities such as activating e-journals, maintaining accurate holdings information in the online public access catalog and e-journals database ("electronic shelf reading"), updating the link resolver knowledgebase, and troubleshooting. All of the serials team members now spend significant amounts of time managing e-journals. The serials staff now spends its time managing the materials most important to the library's clientele (e-journals and databases). The team's proactive approach to maintenance work and rapid response to reported problems should improve patrons' experiences using e-journals. The library is taking advantage of new technologies such as an electronic resource management system, and library workflows and procedures will continue to evolve as technology changes.

  12. Realizing what's essential: a case study on integrating electronic journal management into a print-centric technical services department

    PubMed Central

    Dollar, Daniel M.; Gallagher, John; Glover, Janis; Marone, Regina Kenny; Crooker, Cynthia

    2007-01-01

    Objective: To support migration from print to electronic resources, the Cushing/Whitney Medical Library at Yale University reorganized its Technical Services Department to focus on managing electronic resources. Methods: The library hired consultants to help plan the changes and to present recommendations for integrating electronic resource management into every position. The library task force decided to focus initial efforts on the periodical collection. To free staff time to devote to electronic journals, most of the print subscriptions were switched to online only and new workflows were developed for e-journals. Results: Staff learned new responsibilities such as activating e-journals, maintaining accurate holdings information in the online public access catalog and e-journals database (“electronic shelf reading”), updating the link resolver knowledgebase, and troubleshooting. All of the serials team members now spend significant amounts of time managing e-journals. Conclusions: The serials staff now spends its time managing the materials most important to the library's clientele (e-journals and databases). The team's proactive approach to maintenance work and rapid response to reported problems should improve patrons' experiences using e-journals. The library is taking advantage of new technologies such as an electronic resource management system, and library workflows and procedures will continue to evolve as technology changes. PMID:17443247

  13. Selection of Electronic Resources.

    ERIC Educational Resources Information Center

    Weathers, Barbara

    1998-01-01

    Discusses the impact of electronic resources on collection development; selection of CD-ROMs, (platform, speed, video and sound, networking capability, installation and maintenance); selection of laser disks; and Internet evaluation (accuracy of content, authority, objectivity, currency, technical characteristics). Lists Web sites for evaluating…

  14. Use of poisons information resources and satisfaction with electronic products by Victorian emergency department staff.

    PubMed

    Luke, Stephen; Fountain, John S; Reith, David M; Braitberg, George; Cruickshank, Jaycen

    2014-10-01

    ED staff use a range of poisons information resources of varying type and quality. The present study aims to identify those resources utilised in the state of Victoria, Australia, and assess opinion of the most used electronic products. A previously validated self-administered survey was conducted in 15 EDs, with 10 questionnaires sent to each. The survey was then repeated following the provision of a 4-month period of access to Toxinz™, an Internet poisons information product novel to the region. The study was conducted from December 2010 to August 2011. There were 117 (78%) and 48 (32%) responses received from the first and second surveys, respectively, a 55% overall response rate. No statistically significant differences in professional group, numbers of poisoned patients seen or resource type accessed were identified between studies. The electronic resource most used in the first survey was Poisindex® (48.68%) and Toxinz™ (64.1%) in the second. There were statistically significant (P < 0.01) improvements in satisfaction in 26 of 42 questions between surveys, and no decrements. Although the majority of responders possessed mobile devices, less than half used them for poisons information but would do so if a reputable product was available. The order of poisons information sources most utilised was: consultation with a colleague, in-house protocols and electronic resources. There was a significant difference in satisfaction with electronic poisons information resources and a movement away from existing sources when choice was provided. Interest in increased use of mobile solutions was identified. © 2014 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  15. Visualization of Vgi Data Through the New NASA Web World Wind Virtual Globe

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Kilsedar, C. E.; Zamboni, G.

    2016-06-01

    GeoWeb 2.0, laying the foundations of Volunteered Geographic Information (VGI) systems, has led to platforms where users can contribute to geographic knowledge that is open to access. Moreover, as a result of advancements in 3D visualization, virtual globes able to visualize geographic data even in browsers have emerged. However, the integration of VGI systems and virtual globes has not been fully realized. The study presented aims to visualize volunteered data in 3D, considering also ease of use for the general public, using Free and Open Source Software (FOSS). The new Application Programming Interface (API) of NASA, Web World Wind, written in JavaScript and based on the Web Graphics Library (WebGL), is cross-platform and cross-browser, so a virtual globe created with this API is accessible through any WebGL-supporting browser on different operating systems and devices. As a result, it requires no installation or configuration on the client side, making the collected data more usable; this is not the case with World Wind for Java, where installation and configuration of the Java Virtual Machine (JVM) are required. Furthermore, the data collected through various VGI platforms might be in different formats, stored in a traditional relational database or in a NoSQL database. The project developed aims to visualize and query data collected through the Open Data Kit (ODK) platform and a cross-platform application, with the data stored in a relational PostgreSQL database and a NoSQL CouchDB database, respectively.

  16. Social media based NLP system to find and retrieve ARM data: Concept paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet; Giansiracusa, Michael T.; Kumar, Jitendra

    Information connectivity and retrieval has a role in our daily lives. The most pervasive source of online information is databases. The amount of data is growing at a rapid rate, and database technology is improving and having a profound effect. Almost all online applications store and retrieve information from databases. One challenge in supplying the public with wider access to informational databases is the need for knowledge of database languages like Structured Query Language (SQL). Although the SQL language has been published in many forms, not everybody is able to write SQL queries. Another challenge is that it may not be practical to make the public aware of the structure of the database. There is a need for novice users to query relational databases using their natural language. To solve this problem, many natural language interfaces to structured databases have been developed. The goal is to provide a more intuitive method for generating database queries and delivering responses. Social media makes it possible to interact with a wide section of the population. Through this medium, and with the help of Natural Language Processing (NLP), we can make the data of the Atmospheric Radiation Measurement Data Center (ADC) more accessible to the public. We propose an architecture for using Apache Lucene/Solr [1], OpenML [2,3], and Kafka [4] to generate an automated query/response system with inputs from Twitter [5], our Cassandra DB, and our log database. Using the Twitter API and NLP we can give the public the ability to ask questions of our database and get automated responses.
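    A toy illustration of the proposed idea is mapping a constrained natural-language question onto a SQL template. Real systems use full NLP pipelines; the keyword vocabulary, table, and site convention below are invented for the sketch:

```python
import re

# Map measurement keywords to (table, column) pairs; extract the site
# name with a pattern, then fill a SQL template. All names are
# hypothetical, not the ARM Data Center schema.
TEMPLATES = {
    "temperature": ("measurements", "temperature_c"),
    "humidity": ("measurements", "humidity_pct"),
}

def question_to_sql(question: str) -> str:
    q = question.lower()
    for keyword, (table, column) in TEMPLATES.items():
        if keyword in q:
            m = re.search(r"site (\w+)", q)
            site = m.group(1).upper() if m else "ANY"
            return (f"SELECT date, {column} FROM {table} "
                    f"WHERE site = '{site}'")
    raise ValueError("question not understood")

print(question_to_sql("What was the temperature at site sgp last week?"))
```

    In the proposed architecture the question would arrive via the Twitter API, the parse would be handled by an NLP toolkit rather than keyword matching, and the generated query would run against the data store before the answer is tweeted back.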

  18. Two interpenetrating Cu{sup II}/Ni{sup II}-coordinated polymers based on an unsymmetrical bifunctional N/O-tectonic: Syntheses, structures and magnetic properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yong-Liang; Department of Chemistry and Chemical Engineering, Shaanxi Key Laboratory of Comprehensive Utilization of Tailings Resources, Shang Luo University, Shang Luo 726000; Wu, Ya-Pan

    2015-03-15

    Two new interpenetrating Cu{sup II}/Ni{sup II} coordination polymers, based on an unsymmetrical bifunctional N/O-tectonic 3-(pyrid-4′-yl)-5-(4″-carbonylphenyl)-1,2,4-triazolyl (H{sub 2}pycz), ([Cu-(Hpycz){sub 2}]·2H{sub 2}O){sub n} (1) and ([Ni(Hpycz){sub 2}]·H{sub 2}O){sub n} (2), have been solvothermally synthesized and structurally characterized. Single crystal X-ray analysis indicates that compound 1 shows 2-fold parallel interpenetrated 4{sup 4}-sql layers with the same handedness. The overall structure of 1 is achiral: in each layer of doubly interpenetrating nets, the two individual nets have the opposite handedness to the corresponding nets in the adjoining layers. In contrast, 2 features a rare 8-fold interpenetrating 6{sup 6}-dia network that belongs to class IIIa interpenetration. In addition, compounds 1 and 2 both show similar paramagnetic properties. - Graphical abstract: Two new Cu(II)/Ni(II) coordination polymers present 2D parallel 2-fold interpenetrated 4{sup 4}-sql layers and a rare 3D 8-fold interpenetrating 6{sup 6}-dia network. In addition, magnetic susceptibility measurements show similar paramagnetic characteristics for the two complexes. - Highlights: • A new unsymmetrical bifunctional N/O-tectonic as a 4-connected spacer. • A 2-fold parallel interpenetrated sql layer with the same handedness. • A rare 8-fold interpenetrating dia network (class IIIa).

  19. DB Dehydrogenase: an online integrated structural database on enzyme dehydrogenase.

    PubMed

    Nandy, Suman Kumar; Bhuyan, Rajabrata; Seal, Alpana

    2012-01-01

    Dehydrogenase enzymes are indispensable for metabolic processes. Shortage or malfunctioning of dehydrogenases often leads to several acute diseases such as cancers, retinal diseases, diabetes mellitus, Alzheimer's disease, hepatitis B & C, etc. With the advancement of modern-day research, huge amounts of sequence, structural and functional data are generated every day, widening the gap between structural attributes and their functional understanding. DB Dehydrogenase is an effort to relate the functionalities of dehydrogenases with their structures. It is a completely web-based structural database, covering almost all dehydrogenases [~150 enzyme classes, ~1200 entries from ~160 organisms] whose structures are known. It was created by extracting and integrating various online resources to provide true and reliable data, and is implemented as a MySQL relational database through user-friendly web interfaces using CGI Perl. Flexible search options are provided for data extraction and exploration. To summarize, with the sequence, structure and function of all dehydrogenases in one place, along with the necessary cross-referencing options, this database will be useful for researchers carrying out further work in this field. The database is available for free at http://www.bifku.in/DBD/

  20. A spatial-temporal system for dynamic cadastral management.

    PubMed

    Nan, Liu; Renyi, Liu; Guangliang, Zhu; Jiong, Xie

    2006-03-01

    A practical spatio-temporal database (STDB) technique for dynamic urban land management is presented. One of the STDB models, the expanded model of Base State with Amendments (BSA), is selected as the basis for developing the dynamic cadastral management technique. Two approaches, Section Fast Indexing (SFI) and Storage Factors of Variable Granularity (SFVG), are used to improve the efficiency of the BSA model. Both spatial graphic data and attribute data are stored, through a succinct engine, in a standard relational database management system (RDBMS) for the actual implementation of the BSA model. The spatio-temporal database is divided into three interdependent sub-databases: the present DB, the history DB and the procedures-tracing DB. The efficiency of database operation is improved by the database connection in the bottom layer of Microsoft SQL Server. The spatio-temporal system can be provided at low cost while satisfying the basic needs of urban land management in China. The approaches presented in this paper may also be of significance to countries where land patterns change frequently or to agencies where financial resources are limited.
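    The Base State with Amendments idea underlying this system can be sketched compactly: a parcel's state at time T is reconstructed by replaying the amendments recorded since the base state. The parcel, field names, and dates below are invented for illustration:

```python
# Base State with Amendments (BSA) in miniature: the base snapshot plus
# an ordered log of changes; historical states are rebuilt by replay.
base_state = {"parcel_42": {"owner": "Li", "area_m2": 500}}

amendments = [  # (year, parcel, field, new_value), oldest first
    (2001, "parcel_42", "owner", "Wang"),
    (2004, "parcel_42", "area_m2", 480),
    (2006, "parcel_42", "owner", "Chen"),
]

def state_at(parcel: str, t: int) -> dict:
    state = dict(base_state[parcel])
    for year, p, field, value in amendments:
        if p == parcel and year <= t:
            state[field] = value  # replay each amendment up to time t
    return state

print(state_at("parcel_42", 2005))  # {'owner': 'Wang', 'area_m2': 480}
```

    The paper's SFI and SFVG refinements address the cost of this replay: section indexes shorten the scan, and variable-granularity storage factors bound how far back any reconstruction must reach.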

  1. Integrated Budget Office Toolbox

    NASA Technical Reports Server (NTRS)

    Rushing, Douglas A.; Blakeley, Chris; Chapman, Gerry; Robertson, Bill; Horton, Allison; Besser, Thomas; McCarthy, Debbie

    2010-01-01

    The Integrated Budget Office Toolbox (IBOT) combines budgeting, resource allocation, organizational funding, and reporting features in an automated, integrated tool that provides data from a single source for Johnson Space Center (JSC) personnel. Using a common interface, concurrent users can utilize the data without compromising its integrity. IBOT tracks planning changes and updates throughout the year using both phasing and POP-related (program-operating-plan-related) budget information for the current year, and up to six years out. Separating lump-sum funds received from HQ (Headquarters) into separate labor, travel, procurement, Center G&A (general & administrative), and service pool categories, IBOT creates a script that significantly reduces manual input time. IBOT also manages the movement of travel and procurement funds down to the organizational level and, using its integrated funds management feature, helps better track funding at lower levels. Third-party software is used to create integrated reports in IBOT that can be generated for plans, actuals, funds received, and other combinations of data that are currently maintained in the centralized format. Based on Microsoft SQL, IBOT incorporates generic budget processes, is transportable, and is economical to deploy and support.

  2. SGDB: a database of synthetic genes re-designed for optimizing protein over-expression.

    PubMed

    Wu, Gang; Zheng, Yuanpu; Qureshi, Imran; Zin, Htar Thant; Beck, Tyler; Bulka, Blazej; Freeland, Stephen J

    2007-01-01

    Here we present the Synthetic Gene Database (SGDB): a relational database that houses sequences and associated experimental information on synthetic (artificially engineered) genes from all peer-reviewed studies published to date. At present, the database comprises information from more than 200 published experiments. This resource not only provides reference material to guide experimentalists in designing new genes that improve protein expression, but also offers a dataset for analysis by bioinformaticians who seek to test ideas regarding the underlying factors that influence gene expression. The SGDB was built on the MySQL database management system. We also offer an XML schema for standardized data description of synthetic genes. Users can access the database at http://www.evolvingcode.net/codon/sgdb/index.php, or batch-download all information through XML files. Moreover, users may visually compare the coding sequences of a synthetic gene and its natural counterpart with an integrated web tool at http://www.evolvingcode.net/codon/sgdb/aligner.php, and discuss questions, findings and related information on an associated e-forum at http://www.evolvingcode.net/forum/viewforum.php?f=27.

  3. ESTs and EST-linked polymorphisms for genetic mapping and phylogenetic reconstruction in the guppy, Poecilia reticulata

    PubMed Central

    Dreyer, Christine; Hoffmann, Margarete; Lanz, Christa; Willing, Eva-Maria; Riester, Markus; Warthmann, Norman; Sprecher, Andrea; Tripathi, Namita; Henz, Stefan R; Weigel, Detlef

    2007-01-01

    Background: The guppy, Poecilia reticulata, is a well-known model organism for studying inheritance and variation of male ornamental traits as well as adaptation to different river habitats. However, genomic resources for studying this important model were not previously widely available. Results: With the aim of generating molecular markers for genetic mapping of the guppy, cDNA libraries were constructed from embryos and different adult organs to generate expressed sequence tags (ESTs). About 18,000 ESTs were annotated according to BLASTN and BLASTX results, and the sequence information from the 3' UTRs was exploited to generate PCR primers for re-sequencing of genomic DNA from different wild type strains. By comparison of EST-linked genomic sequences from at least four different ecotypes, about 1,700 polymorphisms were identified, representing about 400 distinct genes. Two interconnected MySQL databases were built to organize the ESTs and markers, respectively. A robust phylogeny of the guppy was reconstructed, based on 10 different nuclear genes. Conclusion: Our EST and marker databases provide useful tools for genetic mapping and phylogenetic studies of the guppy. PMID:17686157

  4. A Study on Developing Evaluation Criteria for Electronic Resources in Evaluation Indicators of Libraries

    ERIC Educational Resources Information Center

    Noh, Younghee

    2010-01-01

    This study aimed to improve the current state of electronic resource evaluation in libraries. While the use of Web DB, e-book, e-journal, and other e-resources such as CD-ROM, DVD, and micro materials is increasing in libraries, their use is not comprehensively factored into the general evaluation of libraries and may diminish the reliability of…

  5. Electronic Commerce Resource Centers. An Industry--University Partnership.

    ERIC Educational Resources Information Center

    Gulledge, Thomas R.; Sommer, Rainer; Tarimcilar, M. Murat

    1999-01-01

    Electronic Commerce Resource Centers focus on transferring emerging technologies to small businesses through university/industry partnerships. Successful implementation hinges on a strategic operating plan, creation of measurable value for customers, investment in customer-targeted training, and measurement of performance outputs. (SK)

  6. Vulnerabilities in Bytecode Removed by Analysis, Nuanced Confinement and Diversification (VIBRANCE)

    DTIC Science & Technology

    2015-06-01

    The VIBRANCE tool starts with a vulnerable Java application and automatically hardens it against SQL injection, OS command injection, file path traversal, …

  7. Transition metal coordination polymers based on tetrabromoterephthalic and bis(imidazole) ligands: Syntheses, structures, topological analysis and photoluminescence properties

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaowei; Xing, Peiqi; Geng, Xiujuan; Sun, Daofeng; Xiao, Zhenyu; Wang, Lei

    2015-09-01

    Eight new coordination polymers (CPs), namely [Zn(1,2-mbix)(tbtpa)]n (1), [Co(1,2-mbix)(tbtpa)]n (2), [CdCl(1,2-mbix)(tbtpa)0.5]n (3), {[Cd(1,2-bix)(tbtpa)]·H2O}n (4), {[Cd0.5(1,2-bix)(tbtpa)0.5]·H2O}n (5), {[Co0.5(1,2-bix)(tbtpa)0.5]·2H2O}n (6), {[Co(1,2-bix)(tbtpa)]·H2O}n (7) and {[Co(1,2-bix)(tbtpa)]·Diox·2H2O}n (8), were synthesized under solvothermal conditions based on a mixed-ligand strategy (H2tbtpa=tetrabromoterephthalic acid, 1,2-mbix=1,2-bis((2-methyl-1H-imidazol-1-yl)methyl)benzene, 1,2-bix=1,2-bis(imidazol-1-ylmethyl)benzene). All of the CPs have been structurally characterized by single-crystal X-ray diffraction analyses and further characterized by elemental analyses, IR spectroscopy, powder X-ray diffraction (PXRD), and thermogravimetric analyses (TGA). X-ray diffraction analyses show that 1 and 2 are isotypic and have 2D highly undulated networks with (4,4)-sql topology stabilized by C-H⋯Br interactions; 3 has a 2D planar network with (4,4)-sql topology featuring C-H⋯Cl interactions in addition to C-H⋯Br interactions; 4 shows a 3D 2-fold interpenetrated net with rare 65·8-mok topology that exhibits self-catenation. As with 1 and 2, 5 and 6 are isostructural, with planar layers of 44-sql topology that further assemble into a 3D supramolecular structure through an interdigitated stacking fashion and C-Br⋯Cph interactions. Compound 7 has a 2D slightly undulated network with (4,4)-sql topology containing one-dimensional channels, while 8 has a 2-fold interpenetrated network with (3,4)-connected jeb topology and point symbol {63}{65·8}. The structures can be tuned by the conformations of the bis(imidazole) ligands and the solvent mixture. In addition, the TGA behavior of all compounds and the luminescent properties of 1, 3, 4 and 5 are discussed in detail.

  8. A future Outlook: Web based Simulation of Hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Islam, A. S.; Piasecki, M.

    2003-12-01

    Despite recent advances in presenting simulation results as 3D graphs or animated contours, the modeling user community still faces shortcomings when trying to move around and analyze data. Typical problems include the lack of common platforms with a standard vocabulary to exchange simulation results from different numerical models, insufficient descriptions of data (metadata), a lack of robust search and retrieval tools for data, and difficulties in reusing simulation domain knowledge. This research demonstrates how to create a shared simulation domain on the WWW and run a number of models through multi-user interfaces. First, metadata sets have been developed to describe hydrodynamic model data based on the geographic metadata standard ISO 19115, which has been extended to satisfy the needs of the hydrodynamic modeling community. This metadata is published in the Extensible Markup Language (XML) using the Resource Description Framework (RDF). A domain-specific ontology for Web Based Simulation (WBS) has been developed to explicitly define the vocabulary for the knowledge-based simulation system. Subsequently, this knowledge-based system is converted into an object model using the Meta-Object Facility (MOF). The knowledge-based system acts as a metamodel for the object-oriented system, which aids in reusing the domain knowledge. Specific simulation software has been developed based on the object-oriented model. Finally, all model data is stored in an object-relational database. Database back-ends help store, retrieve and query information efficiently. This research uses open source software and technology such as Java Servlets and JSP, the Apache web server, the Tomcat Servlet Engine, PostgreSQL databases, the Protégé ontology editor, RDQL and RQL for querying RDF at the semantic level, and the Jena Java API for RDF. We also use international standards such as the ISO 19115 metadata standard, and specifications such as XML, RDF, OWL, XMI, and UML. 
The final web based simulation product is deployed as Web Archive (WAR) files, which are platform- and OS-independent and can be used on Windows, UNIX, or Linux. Keywords: Apache, ISO 19115, Java Servlet, Jena, JSP, Metadata, MOF, Linux, Ontology, OWL, PostgreSQL, Protégé, RDF, RDQL, RQL, Tomcat, UML, UNIX, Windows, WAR, XML
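The RDF/XML publication step described above can be sketched in a few lines of Python; this is a minimal illustration, not the actual WBS schema, and the Dublin Core properties, resource URN, and field values are all invented for the example.

```python
# Build and parse a minimal RDF/XML metadata record for a hypothetical
# hydrodynamic model run, in the spirit of the ISO 19115-based metadata
# described in the abstract. Element names here are illustrative only.
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC_NS = "http://purl.org/dc/elements/1.1/"

record = f"""<rdf:RDF xmlns:rdf="{RDF_NS}" xmlns:dc="{DC_NS}">
  <rdf:Description rdf:about="urn:example:model-run-42">
    <dc:title>Estuary hydrodynamic simulation</dc:title>
    <dc:format>netCDF</dc:format>
  </rdf:Description>
</rdf:RDF>"""

# A metadata consumer (search/retrieval tool) parses the record and
# extracts the title by its namespaced element name.
root = ET.fromstring(record)
desc = root.find(f"{{{RDF_NS}}}Description")
title = desc.find(f"{{{DC_NS}}}title").text
print(title)  # Estuary hydrodynamic simulation
```

In the actual system, records like this would be queried with RDQL/RQL via the Jena API rather than parsed by hand; the sketch only shows the publication format.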

  9. Advanced Query and Data Mining Capabilities for MaROS

    NASA Technical Reports Server (NTRS)

    Wang, Paul; Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Hy, Franklin H.

    2013-01-01

    The Mars Relay Operational Service (MaROS) comprises a number of tools to coordinate, plan, and visualize various aspects of the Mars Relay network. Its architecture includes a Web-based user interface, a back-end "ReSTlet" built in Java, and databases that store the data as they are received from the network. As part of MaROS, the innovators have developed and implemented a feature set that operates on several levels of the software architecture. This new feature is an advanced querying capability, available through either the Web-based user interface or a back-end REST interface, to access all of the data gathered from the network. This software is not meant to replace the REST interface, but to augment and expand the range of available data. The current REST interface provides specific data that the MaROS Web application uses to display and visualize information; however, the information returned from the REST interface has typically been pre-processed to a subset of the entire repository, particularly the information of interest to the GUI (graphical user interface). The new, advanced query and data mining capabilities allow users to retrieve the raw data and/or to perform their own data processing. The query language used to access the repository is a restricted subset of the structured query language (SQL) that can be built safely from the Web user interface, or entered as freeform SQL by a user. The results are returned in CSV (Comma Separated Values) format for easy export to third-party tools and applications that can be used for data mining or user-defined visualization and interpretation. This is the first service capable of providing access to all cross-project relay data from a single Web resource. 
Because MaROS contains the data for a variety of missions from the Mars network, which span both NASA and ESA, the software also establishes an access control list (ACL) on each data record in the database repository to enforce user access permissions through a multilayered approach.
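The query-to-CSV flow described above can be sketched as follows. This is an illustrative stand-in, not MaROS code: the table, columns, and sample rows are invented, and stdlib SQLite stands in for the actual repository.

```python
# Run a read-only SQL query and serialize the result set as CSV,
# mirroring the restricted-SQL-to-CSV export described in the abstract.
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE relay_pass (orbiter TEXT, lander TEXT, data_mb REAL)")
conn.executemany(
    "INSERT INTO relay_pass VALUES (?, ?, ?)",
    [("MRO", "Curiosity", 512.0), ("Odyssey", "InSight", 128.5)],
)

# A user-supplied (validated) SELECT; results stream straight into CSV.
cur = conn.execute("SELECT orbiter, data_mb FROM relay_pass ORDER BY data_mb DESC")
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([col[0] for col in cur.description])  # header row from cursor metadata
writer.writerows(cur)
print(buf.getvalue())
```

A real service would additionally restrict the SQL grammar (no DML/DDL) and apply the per-record access control noted above before serializing.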

  10. DRUMS: Disk Repository with Update Management and Select option for high throughput sequencing data

    PubMed Central

    2014-01-01

    Background: New technologies for analyzing biological samples, like next-generation sequencing, are producing a growing amount of data together with quality scores. Moreover, software tools (e.g., for mapping sequence reads, calculating transcription factor binding probabilities, estimating regions enriched for epigenetic modifications, or determining single nucleotide polymorphisms) increase this amount of position-specific DNA-related data even further. Hence, querying these data becomes challenging and expensive and is often implemented using specialized hardware. In addition, retrieving specific data as fast as possible is increasingly important in many fields of science. The general problem of handling big data sets was addressed by developing specialized databases like HBase, Hypertable or Cassandra. However, these database solutions also require specialized or distributed hardware, leading to expensive investments. To the best of our knowledge, there is no database capable of (i) storing billions of position-specific DNA-related records, (ii) performing fast and resource-saving requests, and (iii) running on single standard computer hardware. Results: Here, we present DRUMS (Disk Repository with Update Management and Select option), satisfying demands (i)-(iii). It tackles the weaknesses of traditional databases while handling position-specific DNA-related data in an efficient manner. DRUMS is capable of storing up to billions of records. Moreover, it is optimized for range requests that aggregate single lookups, which are constantly needed for computations in bioinformatics. To validate the power of DRUMS, we compare it to the widely used MySQL database on two biological data sets, using standard desktop hardware as the test environment. Conclusions: DRUMS outperforms MySQL in writing and reading records by a factor of two up to a factor of 10000. Furthermore, it can work with significantly larger data sets. 
Our work focuses on mid-sized data sets of up to several billion records without requiring cluster technology. Storing position-specific data is a general problem, and the concept presented here is a generalized approach; hence, it can easily be applied to other fields of bioinformatics. PMID:24495746
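The position-specific range request that DRUMS optimizes looks roughly like the following; DRUMS itself is a custom disk repository, not a SQL database, so stdlib SQLite is used here purely for illustration, and the schema and coordinates are invented.

```python
# A range request over position-specific DNA-related records: fetch all
# rows for one chromosome within a coordinate interval, served by a
# composite (chrom, pos) index rather than many single lookups.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE snp (chrom TEXT, pos INTEGER, score REAL)")
conn.execute("CREATE INDEX idx_chrom_pos ON snp (chrom, pos)")
conn.executemany(
    "INSERT INTO snp VALUES (?, ?, ?)",
    [("chr1", p, p / 1000.0) for p in range(0, 10000, 250)],
)

# Range request: all records on chr1 between two genomic coordinates.
rows = conn.execute(
    "SELECT pos, score FROM snp WHERE chrom = ? AND pos BETWEEN ? AND ? ORDER BY pos",
    ("chr1", 1000, 2000),
).fetchall()
print(len(rows))  # 5 positions: 1000, 1250, 1500, 1750, 2000
```

DRUMS achieves its speedup by laying records out on disk in this key order, so a range request becomes one sequential read instead of thousands of random lookups.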

  11. A Grid Metadata Service for Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni

    2010-05-01

    Critical challenges for climate modeling researchers are strongly connected with increasingly complex simulation models and the huge quantities of datasets they produce, and future trends in climate modeling will only increase computational and storage requirements. For this reason, the ability to transparently access both computational and data resources for large-scale, complex climate simulations must be considered a key requirement for distributed Earth Science and Environmental systems. From the data management perspective, (i) the quantity of data will continuously increase; (ii) data will become more and more distributed and widespread; (iii) data sharing/federation will be a key challenge among sites distributed worldwide; and (iv) a large and heterogeneous community of users will be interested in discovering experimental results, searching metadata, browsing collections of files, comparing different results, displaying output, etc. A key element for data search and discovery, and for managing and accessing huge amounts of distributed data, is the metadata handling framework. What we propose for the management of distributed datasets is the GRelC service, a data grid solution focused on metadata management. Unlike classical approaches, the proposed data-grid solution addresses scalability, transparency, security, efficiency and interoperability. The GRelC service provides access to metadata stored in different and widespread data sources (relational databases running on top of MySQL, Oracle, DB2, etc., queried with SQL, as well as XML databases - XIndice, eXist, and libxml2-based documents - queried with XPath or XQuery), providing a strong data virtualization layer in a grid environment. 
This technological solution for distributed metadata management (i) leverages well-known, widely adopted standards (W3C, OASIS, etc.); (ii) supports role-based management (based on VOMS), which increases flexibility and scalability; (iii) provides full support for the Grid Security Infrastructure (authorization, mutual authentication, data integrity, data confidentiality and delegation); (iv) is compatible with existing grid middleware such as gLite and Globus; and finally (v) is currently adopted at the Euro-Mediterranean Centre for Climate Change (CMCC - Italy) to manage the entire CMCC data production activity, as well as in the international Climate-G testbed.

  12. Revolution or Revelation? Acquisitions for the Digital Library

    ERIC Educational Resources Information Center

    Morris, Kathleen; Larson, Betsy

    2006-01-01

    Libraries are responding to customer preferences for electronic research materials through the acquisition and management of these products. Electronic resources have significantly different characteristics than print resources when it comes to technical services management. This paper addresses aspects of a corporate research library's evaluation…

  13. A mobile field-work data collection system for the wireless era of health surveillance.

    PubMed

    Forsell, Marianne; Sjögren, Petteri; Renard, Matthew; Johansson, Olle

    2011-03-01

    In many countries or regions the capacity of health care resources is below the needs of the population, and new approaches to health surveillance are needed. Innovative projects utilizing wireless communication technology contribute reliable methods for field-work data collection and reporting to databases. The objective was to describe a new version of a wireless IT-support system for field-work data collection and administration. The system requirements were drawn from the design objective and translated into system functions. The system architecture was based on fieldwork experience and administrative requirements. The smartphone devices were HTC Touch Diamond2s, while the system was based on a platform with Microsoft .NET components and a SQL Server 2005 database running on the Microsoft Windows Server 2003 operating system. The user interfaces were based on .NET programming and the Microsoft Windows Mobile operating system. A synchronization module enabled download of field data to the database via a General Packet Radio Service (GPRS) to Local Area Network (LAN) interface. The field-workers considered the applications described here user-friendly and almost self-instructing. The office administrators considered that the back-office interface facilitated retrieval of health reports and invoice distribution. The current IT-support system facilitates short lead times from fieldwork data registration to analysis, and is suitable for various applications. The advantages of wireless technology and paper-free data administration need to be increasingly emphasized in development programs, in order to facilitate reliable and transparent use of limited resources.

  14. BioBarcode: a general DNA barcoding database and server platform for Asian biodiversity resources.

    PubMed

    Lim, Jeongheui; Kim, Sang-Yoon; Kim, Sungmin; Eo, Hae-Seok; Kim, Chang-Bae; Paek, Woon Kee; Kim, Won; Bhak, Jong

    2009-12-03

    DNA barcoding provides a rapid, accurate, and standardized method for species-level identification using short DNA sequences. Such a standardized identification method is useful for mapping all the species on Earth, particularly now that DNA sequencing technology is cheaply available. There are many nations in Asia with biodiversity resources that need to be mapped and registered in databases. We have built a general DNA barcode data processing system, BioBarcode, a general-purpose database and server platform built with open source software. It uses the MySQL RDBMS 5.0, BLAST2, and the Apache httpd server. An exemplary BioBarcode database has around 11,300 specimen entries (including GenBank data) and registers biological species to map their genetic relationships. The BioBarcode database includes a chromatogram viewer, which improves performance in DNA sequence analyses. Asia has a very high degree of biodiversity, and the BioBarcode database server system aims to provide an efficient bioinformatics protocol that can be freely used by Asian researchers and research organizations interested in DNA barcoding. BioBarcode promotes the rapid acquisition of biological species DNA sequence data that meet global standards by providing specialized services, and it provides useful tools that will make barcoding cheaper and faster for the biodiversity community, such as standardization, depository, management, and analysis of DNA barcode data. The system can be downloaded upon request, and an exemplary server has been constructed with which to build an Asian biodiversity system: http://www.asianbarcode.org.

  15. A computational genomics pipeline for prokaryotic sequencing projects.

    PubMed

    Kislyuk, Andrey O; Katz, Lee S; Agrawal, Sonia; Hagen, Matthew S; Conley, Andrew B; Jayaraman, Pushkala; Nelakuditi, Viswateja; Humphrey, Jay C; Sammons, Scott A; Govil, Dhwani; Mair, Raydel D; Tatti, Kathleen M; Tondella, Maria L; Harcourt, Brian H; Mayer, Leonard W; Jordan, I King

    2010-08-01

    New sequencing technologies have accelerated research on prokaryotic genomes and have made genome sequencing operations outside major genome sequencing centers routine. However, no off-the-shelf solution exists for the combined assembly, gene prediction, genome annotation and data presentation necessary to interpret sequencing data. The resulting requirement to invest significant resources into custom informatics support for genome sequencing projects remains a major impediment to the accessibility of high-throughput sequence data. We present a self-contained, automated high-throughput open source genome sequencing and computational genomics pipeline suitable for prokaryotic sequencing projects. The pipeline has been used at the Georgia Institute of Technology and the Centers for Disease Control and Prevention for the analysis of Neisseria meningitidis and Bordetella bronchiseptica genomes. The pipeline is capable of enhanced or manually assisted reference-based assembly using multiple assemblers and modes; gene predictor combining; and functional annotation of genes and gene products. Because every component of the pipeline is executed on a local machine with no need to access resources over the Internet, the pipeline is suitable for projects of a sensitive nature. Annotation of virulence-related features makes the pipeline particularly useful for projects working with pathogenic prokaryotes. The pipeline is licensed under the open-source GNU General Public License and available at the Georgia Tech Neisseria Base (http://nbase.biology.gatech.edu/). The pipeline is implemented with a combination of Perl, Bourne Shell and MySQL and is compatible with Linux and other Unix systems.

  16. Electronic Resources: Selection and Bibliographic Control.

    ERIC Educational Resources Information Center

    Pattie, Ling-yuh W., Ed.; Cox, Bonnie Jean, Ed.

    This book is a baseline guide for professionals and library school students on issues that concern the selection and bibliographic control of electronic resources, from both conceptual and pragmatic standpoints. The book includes the following articles: (1) "Foreward" (Lois Mai Chan); (2) "Introduction" (Ling-yuh W. (Miko)…

  17. Licensing and Negotiations for Electronic Content

    ERIC Educational Resources Information Center

    Crawford, Amy R.

    2008-01-01

    This article provides an overview of the basic characteristics of database, or eContent, license agreements, defines general licensing terms, maps the anatomy of an electronic resources subscription agreement, and discusses negotiating skills and techniques for library staff. (Contains a list of additional resources and a sample agreement.)

  18. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...-compatible format. All databases must be supported with adequate documentation on data attributes, SQL...

  19. SAMSON Technology Demonstrator

    DTIC Science & Technology

    2014-06-01

    requested. The SAMSON TD was tested with two different policy engines: 1. A custom XACML-based element matching engine using a MySQL database for...performed during the course of the event. Full information protection across the sphere of access management, information protection and auditing was in...

  20. Business Demands for Web-Related Skills as Compared to Other Computer Skills.

    ERIC Educational Resources Information Center

    Groneman, Nancy

    2000-01-01

    Analysis of 23,704 want ads for computer-related jobs revealed that the most frequently mentioned skills were UNIX, SQL programming, and computer security. Curriculum implications were derived from the most desired and less frequently mentioned skills. (SK)

  1. Scale-Independent Relational Query Processing

    DTIC Science & Technology

    2013-10-04

    source options are also available, including PostgreSQL, MySQL, and SQLite. These modern relational databases are generally very complex software systems...and Their Application to Data Stream Management. IGI Global, 2010. [68] George Reese. Database Programming with JDBC and Java, Second Edition. Ed. by

  2. Methods, Knowledge Support, and Experimental Tools for Modeling

    DTIC Science & Technology

    2006-10-01

    open source software entities: the PostgreSQL relational database management system (http://www.postgres.org), the Apache web server (http...past. The revision control system allows the program to capture disagreements, and allows users to explore the history of such disagreements by

  3. Introduction of home electronics for the future

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Hideyuki; Shirai, Iwao

    The development of electronics has accelerated automation and labor saving in factories and offices. Home electronics is also expected to be needed more and more in Japan toward the 21st century, as the advanced information society and the aging society accelerate and women's participation in social affairs increases. The Resources Council, the advisory organ of the Minister of State for Science and Technology, forecast to what extent home electronics will be popularized by the year 2010. The Council expects home electronics to be promoted, because resource and energy saving should be accelerated and people should be able to enjoy their individual lives at home much more.

  4. Electronic information and clinical decision support for prescribing: state of play in Australian general practice

    PubMed Central

    Robertson, Jane; Moxey, Annette J; Newby, David A; Gillies, Malcolm B; Williamson, Margaret; Pearson, Sallie-Anne

    2011-01-01

    Background. Investments in eHealth worldwide have been mirrored in Australia, with >90% of general practices computerized. Recent eHealth incentives promote the use of up to date electronic information sources relevant to general practice with flexibility in mode of access. Objective. To determine GPs’ access to and use of electronic information sources and computerized clinical decision support systems (CDSSs) for prescribing. Methods. Semi-structured interviews were conducted with 18 experienced GPs and nine GP trainees in New South Wales, Australia in 2008. A thematic analysis of interview transcripts was undertaken. Results. Information needs varied with clinical experience, and people resources (specialists, GP peers and supervisors for trainees) were often preferred over written formats. Experienced GPs used a small number of electronic resources and accessed them infrequently. Familiarity from training and early clinical practice and easy access were dominant influences on resource use. Practice time constraints meant relevant information needed to be readily accessible during consultations, requiring integration or direct access from prescribing software. Quality of electronic resource content was assumed and cost a barrier for some GPs. Conclusions. The current Australian practice incentives do not prescribe which information resources GPs should use. Without integration into practice computing systems, uptake and routine use seem unlikely. CDSS developments must recognize the time pressures of practice, preference for integration and cost concerns. Minimum standards are required to ensure that high-quality information resources are integrated and regularly updated. Without standards, the anticipated benefits of computerization on patient safety and health outcomes will be uncertain. PMID:21109619

  5. The Acquisition and Management of Electronic Resources: Can Use Justify Cost?

    ERIC Educational Resources Information Center

    Koehn, Shona L.; Hawamdeh, Suliman

    2010-01-01

    As library collections increasingly become digital, libraries are faced with many challenges regarding the acquisition and management of electronic resources. Some of these challenges include copyright and fair use, the first-sale doctrine, licensing versus ownership, digital preservation, long-term archiving, and, most important, the issue of…

  6. 18 CFR 35.7 - Electronic filing requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Electronic filing requirements. 35.7 Section 35.7 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT FILING OF RATE SCHEDULES AND TARIFFS Application...

  7. 77 FR 52051 - Proposed Information Collection for Public Comment: Electronic Stakeholder Survey-Office for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-28

    ... and resources by working across public, private, and civil sectors to further HUD's mission. IPI works... alignment of cross-sector resources and ideas. Members of affected public: Individuals. Estimation of the... Collection for Public Comment: Electronic Stakeholder Survey--Office for International and Philanthropic...

  8. Somewhere over the Verde Rainbow

    ERIC Educational Resources Information Center

    Ekart, Donna F.

    2008-01-01

    When the electronic resource management system (ERM) at Kansas State University Libraries suffered a horrible data loss, the "contract db" presented a challenge for the librarians responsible for electronic resources. It was a decent data repository, but it had no ability to manage the tangled process of licensing, acquiring, activating,…

  9. ClimateSpark: An In-memory Distributed Computing Framework for Big Climate Data Analytics

    NASA Astrophysics Data System (ADS)

    Hu, F.; Yang, C. P.; Duffy, D.; Schnase, J. L.; Li, Z.

    2016-12-01

    Massive array-based climate data is being generated from global surveillance systems and model simulations. It is widely used to analyze environmental problems such as climate change, natural hazards, and public health. However, extracting the underlying information from these big climate datasets is challenging due to both data- and computing-intensive issues in data processing and analysis. To tackle these challenges, this paper proposes ClimateSpark, an in-memory distributed computing framework to support big climate data processing. In ClimateSpark, a spatiotemporal index is developed to enable Apache Spark to treat array-based climate data (e.g. netCDF4, HDF4) as native formats, stored in the Hadoop Distributed File System (HDFS) without any preprocessing. Based on the index, spatiotemporal query services retrieve datasets according to a defined geospatial and temporal bounding box. The data subsets are read out, and a data partition strategy equally splits the queried data across the computing nodes, storing it in memory as climateRDDs for processing. By leveraging Spark SQL and user-defined functions (UDFs), climate data analysis operations can be expressed in intuitive SQL. ClimateSpark is evaluated with two use cases on the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset: one conducts a spatiotemporal query and visualizes the subset results as an animation; the other compares different climate model outputs using a Taylor-diagram service. Experimental results show that ClimateSpark can significantly accelerate data query and processing, and enables complex analysis services to be served in SQL style.
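The UDF-in-SQL pattern the abstract describes (registering a custom function and calling it from a query) can be illustrated without a Spark cluster using the stdlib SQLite analogue, `sqlite3.Connection.create_function`. This is not Spark SQL, and the table, columns, and unit-conversion UDF below are invented for the sketch.

```python
# Register a Python function as a SQL UDF and call it from a query,
# analogous in spirit to Spark SQL's udf registration described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_cell (lat REAL, lon REAL, kelvin REAL)")
conn.executemany(
    "INSERT INTO temp_cell VALUES (?, ?, ?)",
    [(34.0, -118.0, 288.15), (64.0, -147.0, 253.15)],
)

# The UDF: Kelvin -> Celsius, usable by name inside SQL text.
conn.create_function("to_celsius", 1, lambda k: k - 273.15)

rows = conn.execute(
    "SELECT lat, ROUND(to_celsius(kelvin), 2) FROM temp_cell ORDER BY lat"
).fetchall()
print(rows)  # [(34.0, 15.0), (64.0, -20.0)]
```

In ClimateSpark the same idea lets domain logic (subsetting, regridding, statistics) run inside distributed SQL queries over climateRDDs instead of in client-side loops.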

  10. WebDB Component Builder - Lessons Learned

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macedo, C.

    2000-02-15

    Oracle WebDB is the easiest way to produce web-enabled, lightweight, enterprise-centric applications. This concept from Oracle has tantalized our taste for simple web development with a purely web-based tool that lives nowhere else but in the database. Its online wizards, templates, and query builders, which produce PL/SQL behind the curtains, can be used straight "out of the box" by both novice and seasoned developers. This presentation will introduce lessons learned from developing and deploying applications built with the WebDB Component Builder in conjunction with custom PL/SQL code to power a hybrid application. There are two kinds of WebDB components: those that display data to end users via reporting, and those that let end users update data in the database via entry forms. The presentation will also discuss various methods within the Component Builder to enhance applications pushed to the desktop. The demonstrated example is an application entitled HOME (Helping Others More Effectively) that was built to manage a yearly United Way Campaign. Our task was to build an end-to-end application that could manage approximately 900 non-profit agencies, an average of 4,100 individual contributions, and $1.2 million. Using WebDB, the shell of the application was put together in a matter of a few weeks. However, we did encounter some hurdles that WebDB, in its infancy (v2.0), could not solve for us directly. Together with custom PL/SQL, WebDB's Component Builder became a powerful tool that enabled us to produce a very flexible hybrid application.

  11. Dielectrophoretic trapping of multilayer DNA origami nanostructures and DNA origami-induced local destruction of silicon dioxide.

    PubMed

    Shen, Boxuan; Linko, Veikko; Dietz, Hendrik; Toppari, J Jussi

    2015-01-01

    DNA origami is a widely used method for fabricating custom-shaped nanostructures. However, to utilize such structures, one needs to position them controllably at the nanoscale. Here we demonstrate how different types of 3D scaffolded multilayer origami can be accurately anchored to lithographically fabricated nanoelectrodes on a silicon dioxide substrate by dielectrophoresis (DEP). Straight brick-like origami structures, constructed in both square-lattice (SQL) and honeycomb-lattice designs, as well as curved "C"-shaped and angular "L"-shaped origami, were trapped with nanoscale precision and single-structure accuracy. We show that the positioning and immobilization of all these structures can be realized with or without thiol-linkers. In general, structural deformation of the origami during DEP trapping is highly dependent on the shape and construction of the structure. The SQL brick turned out to be the most robust structure under the high DEP forces, and accordingly its single-structure trapping yield was also the highest. In addition, the electrical conductivity of single immobilized plain brick-like structures was characterized. The electrical measurements revealed that the conductivity is negligible (insulating behavior). However, we observed that the trapping process of the SQL brick equipped with thiol-linkers tended to induce an etched "nanocanyon" in the silicon dioxide substrate. The nanocanyon formed exactly between the electrodes, that is, at the location of the DEP-trapped origami. The results show that the demonstrated DEP-trapping technique can be readily exploited in assembling and arranging complex multilayered origami geometries. In addition, DNA origami could be utilized for DEP-assisted deformation of the substrates to which it is attached. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. a Webgis for the Knowledge and Conservation of the Historical Wall Structures of the 13TH-18TH Centuries

    NASA Astrophysics Data System (ADS)

    Vacca, G.; Pili, D.; Fiorino, D. R.; Pintus, V.

    2017-05-01

    This work is part of the research project "Tecniche murarie tradizionali: conoscenza per la conservazione ed il miglioramento prestazionale" (Traditional building techniques: from knowledge to conservation and performance improvement), which studies the building techniques of the 13th-18th centuries in the Sardinia Region (Italy) for their knowledge, conservation, and promotion, with the end purpose of improving the performance of the examined structures. The authors' task within the research project was to build a WebGIS to manage the data collected during the examination and study phases. The infrastructure was built entirely with open source software. The work consisted of designing a database in PostgreSQL with its spatial extension PostGIS, which stores and manages feature geometries and spatial data. Data input is performed via a form built in HTML and PHP. The HTML part is based on Bootstrap, an open-source toolkit for websites and web applications. The implementation of this template uses both PHP and JavaScript code; the PHP code manages reading and writing data to the database using embedded SQL queries. To date, we have surveyed and archived more than 300 buildings belonging to three main macro-categories: fortification architectures, religious architectures, and residential architectures. More than 150 masonry samples have been investigated with respect to construction techniques. The database is published on the Internet as a WebGIS built with the Leaflet open-source JavaScript library, which allows creating map sites with background maps and navigation, input and query tools. This, too, uses a combination of HTML, JavaScript, PHP and SQL code.
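The form-to-database flow described above (an input form writing records via embedded, parameterized SQL, and a map page querying them back) can be sketched as follows. The actual system uses PHP against PostgreSQL/PostGIS; stdlib SQLite stands in here, and the table, fields, and sample building are invented.

```python
# Parameterized write (data-entry form) and read (map query) against a
# building-survey table, mirroring the WebGIS's embedded-SQL pattern.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE building (name TEXT, category TEXT, century INTEGER, lat REAL, lon REAL)"
)

# Form submission -> parameterized INSERT (placeholders avoid SQL injection).
conn.execute(
    "INSERT INTO building VALUES (?, ?, ?, ?, ?)",
    ("Castello di San Michele", "fortification", 13, 39.24, 9.11),
)

# Map-layer query: all fortification architectures of a given century.
rows = conn.execute(
    "SELECT name FROM building WHERE category = ? AND century = ?",
    ("fortification", 13),
).fetchall()
print(rows)  # [('Castello di San Michele',)]
```

In the real system, PostGIS geometry columns and spatial predicates (e.g. bounding-box filters) would replace the plain lat/lon fields, and Leaflet would render the query results as map features.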

  13. Changes in Exercise Data Management

    NASA Technical Reports Server (NTRS)

    Buxton, R. E.; Kalogera, K. L.; Hanson, A. M.

    2018-01-01

    The suite of exercise hardware aboard the International Space Station (ISS) generates an immense amount of data. The data collected from the treadmill, cycle ergometer, and resistance strength training hardware are basic exercise parameters (time, heart rate, speed, load, etc.). The raw data are post-processed in the laboratory, and more detailed parameters are calculated from each exercise data file. Updates have recently been made to how these valuable data are stored, adding an additional level of data security, increasing data accessibility, and resulting in overall increased efficiency of medical report delivery. Questions regarding exercise performance, or how exercise may influence other variables of crew health, frequently arise within the crew health care community. Inquiries over the health of the exercise hardware often need quick analysis and response to ensure the exercise system is operable on a continuous basis. Consolidating all of the exercise system data in a single repository enables a quick response to both the medical and engineering communities. A SQL Server database is currently in use and provides a secure location for all of the exercise data from ISS Expedition 1 to the present day. The database has been structured to update derived metrics automatically, making analysis and reporting available within minutes of loading the in-flight data into the database. Commercial tools were evaluated to help aggregate and visualize data from the SQL database. The Tableau software provides a manageable interface, which has improved the laboratory's output time of crew reports by 67%. Expansion of the SQL database to include additional medical requirement metrics, the addition of 'app-like' tools for mobile visualization, and collaborative use of the data system (e.g. by operational support teams, research groups, and International Partners) are currently being explored.
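    The idea of a database that "updates derived metrics automatically" when raw data are loaded can be sketched with a trigger. This is an illustrative stand-in, not the actual NASA schema or SQL Server logic; it uses SQLite, and the table names, columns, and the derived metric (average speed) are assumptions.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE treadmill_raw (
            crew_id INTEGER, session_date TEXT,
            duration_min REAL, distance_km REAL
        );
        CREATE TABLE treadmill_metrics (
            crew_id INTEGER, session_date TEXT,
            avg_speed_kmh REAL   -- derived metric
        );
        -- Compute the derived metric as soon as a raw data row is inserted.
        CREATE TRIGGER compute_metrics AFTER INSERT ON treadmill_raw
        BEGIN
            INSERT INTO treadmill_metrics
            SELECT NEW.crew_id, NEW.session_date,
                   NEW.distance_km / (NEW.duration_min / 60.0);
        END;
    """)

    # Loading one in-flight data row immediately makes the metric queryable.
    conn.execute("INSERT INTO treadmill_raw VALUES (1, '2018-01-01', 30.0, 4.0)")
    speed = conn.execute("SELECT avg_speed_kmh FROM treadmill_metrics").fetchone()[0]
    print(speed)  # 8.0 km/h
    ```

    A reporting layer (Tableau, in the paper's case) then queries the metrics table directly rather than reprocessing raw files.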

  14. SAGES: A Suite of Freely-Available Software Tools for Electronic Disease Surveillance in Resource-Limited Settings

    PubMed Central

    Lewis, Sheri L.; Feighner, Brian H.; Loschen, Wayne A.; Wojcik, Richard A.; Skora, Joseph F.; Coberly, Jacqueline S.; Blazes, David L.

    2011-01-01

    Public health surveillance is undergoing a revolution driven by advances in the field of information technology. Many countries have experienced vast improvements in the collection, ingestion, analysis, visualization, and dissemination of public health data. Resource-limited countries have lagged behind due to challenges in information technology infrastructure, public health resources, and the costs of proprietary software. The Suite for Automated Global Electronic bioSurveillance (SAGES) is a collection of modular, flexible, freely-available software tools for electronic disease surveillance in resource-limited settings. One or more SAGES tools may be used in concert with existing surveillance applications or the SAGES tools may be used en masse for an end-to-end biosurveillance capability. This flexibility allows for the development of an inexpensive, customized, and sustainable disease surveillance system. The ability to rapidly assess anomalous disease activity may lead to more efficient use of limited resources and better compliance with World Health Organization International Health Regulations. PMID:21572957

  15. The Electron Microscopy Outreach Program: A Web-based resource for research and education.

    PubMed

    Sosinsky, G E; Baker, T S; Hand, G; Ellisman, M H

    1999-01-01

    We have developed a centralized World Wide Web (WWW)-based environment that serves as a resource of software tools and expertise for biological electron microscopy. A major focus is molecular electron microscopy, but the site also includes information and links on structural biology at all levels of resolution. This site serves to help integrate or link structural biology techniques in accordance with user needs. The WWW site, called the Electron Microscopy (EM) Outreach Program (URL: http://emoutreach.sdsc.edu), provides scientists with computational and educational tools for their research and edification. In particular, we have set up a centralized resource containing course notes, references, and links to image analysis and three-dimensional reconstruction software for investigators wanting to learn about EM techniques either within or outside of their fields of expertise. Copyright 1999 Academic Press.

  16. Comparison of response times of a mobile-web EHRs system using PHP and JSP languages.

    PubMed

    De la Torre-Díez, Isabel; Antón-Rodríguez, Míriam; Díaz-Pernas, Francisco Javier; Perozo-Rondón, Freddy José

    2012-12-01

    Performance evaluation is highly important in the implementation of Electronic Health Records (EHRs) systems, and measuring response times is one way to carry out that evaluation. In the e-health field, once EHRs are made available through different platforms such as the Web and/or mobile devices, a performance evaluation is necessary to ensure the system operates correctly. In this paper, a comparison of the response times for the MEHRmobile system is presented. The first version uses the PHP language with a MySQL database and the second employs JSP with an eXist database. Both versions have the same functionality. Beyond the technological aspects, a significant difference is the way the information is stored. The main goal of this paper is to choose the version that offers better response times. We have created a new benchmark to calculate the response times. Better results have been obtained for the PHP version. Nowadays, this version is being used by specialists from Fundación Intras, Spain.
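    A response-time benchmark of the kind described can be sketched as repeated timing of the same query. This is a minimal illustration only: the schema and query are hypothetical, the backend is SQLite, and the paper's actual benchmark measures end-to-end response times of PHP/MySQL versus JSP/eXist over the Web.

    ```python
    import sqlite3
    import statistics
    import time

    # Toy EHR table to query against.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE ehr (patient_id INTEGER, entry TEXT)")
    conn.executemany("INSERT INTO ehr VALUES (?, ?)",
                     [(i % 100, f"note {i}") for i in range(10_000)])

    def time_query(n_runs: int = 50) -> float:
        """Return the mean wall-clock time of one query, in milliseconds."""
        samples = []
        for _ in range(n_runs):
            t0 = time.perf_counter()
            conn.execute(
                "SELECT COUNT(*) FROM ehr WHERE patient_id = 42").fetchone()
            samples.append((time.perf_counter() - t0) * 1000.0)
        return statistics.mean(samples)

    print(f"mean response time: {time_query():.3f} ms")
    ```

    Running the same harness against each backend, with identical data and queries, is what makes the resulting means comparable.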

  17. Orthos, an alarm system for the ALICE DAQ operations

    NASA Astrophysics Data System (ADS)

    Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy

    2012-12-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detector electronics up to mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow up abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issue tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.
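    The "SQL interfaces to inject or query metrics" idea can be sketched as follows: agents insert metric values, and a query flags hosts whose values cross a threshold as alarm candidates. All names, hosts, and thresholds here are illustrative assumptions, not Orthos' actual schema or logic, and SQLite stands in for the real database repository.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE metric (
            host TEXT, name TEXT, value REAL,
            ts TEXT DEFAULT CURRENT_TIMESTAMP
        )
    """)

    def inject(host: str, name: str, value: float) -> None:
        """Inject one metric sample, as a monitoring agent would."""
        conn.execute(
            "INSERT INTO metric (host, name, value) VALUES (?, ?, ?)",
            (host, name, value))

    def active_alarms(name: str, threshold: float):
        """Return (host, value) pairs whose value exceeds the threshold."""
        return conn.execute(
            "SELECT host, value FROM metric WHERE name = ? AND value > ?",
            (name, threshold)).fetchall()

    inject("daq-pc-017", "disk_used_pct", 96.5)
    inject("daq-pc-018", "disk_used_pct", 40.0)
    print(active_alarms("disk_used_pct", 90.0))  # [('daq-pc-017', 96.5)]
    ```

    In a system like the one described, matching rows would then be handed to the logic engine, which opens or updates a ticket in the issue tracker rather than merely printing them.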

  18. Geoinformatics paves the way for a zoo information system

    NASA Astrophysics Data System (ADS)

    Michel, Ulrich

    2008-10-01

    The use of modern electronic media offers new ways of (environmental) knowledge transfer. All kinds of information can be made quickly available and queryable, and can be processed individually. The Institute for Geoinformatics and Remote Sensing (IGF), in collaboration with the Osnabrueck Zoo, is developing a zoo information system, especially for new media (e.g. mobile devices), which provides information about the animals living there, their natural habitat, and their endangerment status. In this way, multimedia information is offered to zoo visitors. The implementation of the 2D/3D components is realized with modern database and Mapserver technologies. Among other technologies, the VRML (Virtual Reality Modeling Language) standard is used for the 3D visualization, so that it can be viewed in every conventional web browser. A mobile information system for Pocket PCs, Smartphones, and Ultra Mobile PCs (UMPC) is also being developed. All contents, including the coordinates, are stored in a PostgreSQL database. The data input, the processing, and other administrative operations are executed by a content management system (CMS).
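    Storing coordinates alongside content in SQL and answering a map-viewport query can be sketched with a simple bounding-box filter. This is a hypothetical illustration (schema, animal names, and coordinates are invented), using SQLite as a stand-in for PostgreSQL without spatial extensions; a Mapserver-backed deployment would typically use proper spatial indexing instead.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE enclosure (animal TEXT, lon REAL, lat REAL)")
    conn.executemany("INSERT INTO enclosure VALUES (?, ?, ?)", [
        ("Asian elephant", 8.021, 52.245),
        ("Snow leopard", 8.025, 52.247),
        ("Humboldt penguin", 8.300, 52.500),  # outside the viewport below
    ])

    def in_viewport(min_lon, min_lat, max_lon, max_lat):
        """Return animals whose enclosure falls inside the map viewport."""
        return conn.execute(
            "SELECT animal FROM enclosure "
            "WHERE lon BETWEEN ? AND ? AND lat BETWEEN ? AND ? "
            "ORDER BY animal",
            (min_lon, max_lon, min_lat, max_lat)).fetchall()

    print(in_viewport(8.0, 52.2, 8.1, 52.3))
    ```

    A mobile client would issue such a query as the user pans the map and render the matching multimedia content for each result.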

  19. Relational Algebra and SQL: Better Together

    ERIC Educational Resources Information Center

    McMaster, Kirby; Sambasivam, Samuel; Hadfield, Steven; Wolthuis, Stuart

    2013-01-01

    In this paper, we describe how database instructors can teach Relational Algebra and Structured Query Language together through programming. Students write query programs consisting of sequences of Relational Algebra operations vs. Structured Query Language SELECT statements. The query programs can then be run interactively, allowing students to…
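    The pairing the paper describes (the same query expressed once as a sequence of Relational Algebra operations and once as a SQL SELECT) can be sketched as follows. This is an assumed illustration, not the authors' teaching environment: relations are modeled as Python sets of tuples, sigma and pi as small functions, and the SQL side runs on SQLite.

    ```python
    import sqlite3

    # A toy relation: emp(name, dept), modeled as a set of tuples.
    emp = {("alice", "sales"), ("bob", "hr"), ("carol", "sales")}

    def select(rel, pred):        # sigma: selection
        return {t for t in rel if pred(t)}

    def project(rel, *cols):      # pi: projection (sets give DISTINCT for free)
        return {tuple(t[c] for c in cols) for t in rel}

    # Relational Algebra: pi_name(sigma_{dept='sales'}(emp))
    ra_result = project(select(emp, lambda t: t[1] == "sales"), 0)

    # The equivalent single SQL SELECT statement.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE emp (name TEXT, dept TEXT)")
    conn.executemany("INSERT INTO emp VALUES (?, ?)", sorted(emp))
    sql_result = {row for row in conn.execute(
        "SELECT DISTINCT name FROM emp WHERE dept = 'sales'")}

    assert ra_result == sql_result
    print(sorted(ra_result))
    ```

    Seeing the two forms produce identical relations is exactly the kind of interactive check the query-program approach makes possible.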

  20. Ontological Approach to Military Knowledge Modeling and Management

    DTIC Science & Technology

    2004-03-01

    A federated search mechanism has to reformulate user queries (expressed using the ontology) in the query languages of the different sources (e.g. SQL...). Ontologies serve as a common terminology, and a unified query is used to perform federated search; query processing relies on ontology mappings to the sources to reformulate queries.
