BioMart Central Portal: an open database network for the biological community
Guberman, Jonathan M.; Ai, J.; Arnaiz, O.; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J.; Di Génova, A.; Forbes, Simon; Fujisawa, T.; Gadaleta, E.; Goodstein, D. M.; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S.; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R.; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J.; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S.; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B.; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J.; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D. T.; Wong-Erasmus, Marie; Yao, L.; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek
2011-01-01
BioMart Central Portal is a first-of-its-kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and an array of tools makes Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities. Database URL: http://central.biomart.org. PMID:21930507
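Since the abstract highlights robust programmatic access, a hedged sketch of the query route may help: BioMart services accept an XML query document over HTTP. The dataset, filter and attribute names below follow the common Ensembl-flavoured BioMart convention and are illustrative assumptions; the Central Portal service has since been retired, so treat the URL (taken from the abstract) as historical.

```python
# Minimal sketch (assumptions noted above): POST a BioMart XML query
# and read back tab-separated results.
from urllib.request import urlopen
from urllib.parse import urlencode

QUERY = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Query>
<Query virtualSchemaName="default" formatter="TSV" header="0">
  <Dataset name="hsapiens_gene_ensembl" interface="default">
    <Filter name="chromosome_name" value="21"/>
    <Attribute name="ensembl_gene_id"/>
  </Dataset>
</Query>"""

url = "http://central.biomart.org/martservice"  # historical URL from the abstract
with urlopen(url, data=urlencode({"query": QUERY}).encode()) as resp:
    print(resp.read(200).decode())
```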
An Introduction to Database Structure and Database Machines.
ERIC Educational Resources Information Center
Detweiler, Karen
1984-01-01
Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…
Hand-held computer operating system program for collection of resident experience data.
Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J
2000-11-01
To describe a system for recording resident experience involving hand-held computers with the Palm Operating System (3Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, portable, and can share data with other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC) reportable patient encounters. Each resident's data is transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to a central database are accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and program director alike than paper-based or central computer-based systems. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.
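As a rough illustration of the transfer step described above, the sketch below merges handheld-exported encounter records into one central relational table. It is not the authors' software: SQLite stands in for the Access-compatible database, and the table and field names are invented.

```python
# Illustrative only: SQLite stands in for the central Access-compatible
# database; table and field names are invented.
import sqlite3

central = sqlite3.connect("residency_central.db")
central.execute("""CREATE TABLE IF NOT EXISTS encounters (
    resident_id TEXT, encounter_date TEXT, procedure_code TEXT,
    PRIMARY KEY (resident_id, encounter_date, procedure_code))""")

def upload(resident_id, records):
    """Append one resident's handheld records; re-synced rows are ignored."""
    central.executemany(
        "INSERT OR IGNORE INTO encounters VALUES (?, ?, ?)",
        [(date, code) and (resident_id, date, code) for date, code in records])
    central.commit()

upload("R01", [("2000-10-02", "SVD"), ("2000-10-03", "C/S")])
print(central.execute("SELECT COUNT(*) FROM encounters").fetchone()[0])  # 2
```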
Centralized database for interconnection system design. [for spacecraft
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1989-01-01
A database application called DFACS (Database, Forms and Applications for Cabling and Systems) is described. The objective of DFACS is to improve the speed and accuracy of interconnection system information flow during the design and fabrication stages of a project, while simultaneously supporting both the horizontal (end-to-end wiring) and the vertical (wiring by connector) design stratagems used by the Jet Propulsion Laboratory (JPL) project engineering community. The DFACS architecture is centered around a centralized database and program methodology which emulates the manual design process hitherto used at JPL. DFACS has been tested and successfully applied to existing JPL hardware tasks with a resulting reduction in schedule time and costs.
Turning Access into a web-enabled secure information system for clinical trials.
Dongquan Chen; Chen, Wei-Bang; Soong, Mayhue; Soong, Seng-Jaw; Orthner, Helmuth F
2009-08-01
Organizations that have limited resources need to conduct clinical studies in a cost-effective, but secure way. Clinical data residing in various individual databases need to be easily accessed and secured. Although widely available, digital certification, encryption, and secure web servers have not been implemented as widely, partly due to a lack of understanding of needs and concerns over issues such as cost and difficulty in implementation. The objective of this study was to test the possibility of centralizing various databases and to demonstrate ways of offering an alternative to a large-scale, comprehensive, and costly commercial product, especially for simple phase I and II trials, with reasonable convenience and security. We report a working procedure to transform and develop a standalone Access database into a secure, Web-based information system. For data collection and reporting purposes, we centralized several individual databases and developed and tested a web-based secure server using self-issued digital certificates. The system lacks audit trails. The cost of development and maintenance may hinder its wide application. The clinical trial databases scattered in various departments of an institution could be centralized into a web-enabled secure information system. The limitations, such as the lack of a calendar and audit trail, can be partially addressed with additional programming. The centralized Web system may provide an alternative to a comprehensive clinical trial management system.
Haile, Michael; Anderson, Kim; Evans, Alex; Crawford, Angela
2012-01-01
In part 1 of this series, we outlined the rationale behind the development of a centralized electronic database used to maintain nonsterile compounding formulation records in the Mission Health System, which is a union of several independent hospitals and satellite and regional pharmacies that form the cornerstone of advanced medical care in several areas of western North Carolina. Hospital providers in many healthcare systems require compounded formulations to meet the needs of their patients (in particular, pediatric patients). Before a centralized electronic compounding database was implemented in the Mission Health System, each satellite or regional pharmacy affiliated with that system had a specific set of formulation records, but no standardized format for those records existed. In this article, we describe the quality control, database platform selection, description, implementation, and execution of our intranet database system, which is designed to maintain, manage, and disseminate nonsterile compounding formulation records in the hospitals and affiliated pharmacies of the Mission Health System. The objectives of that project were to standardize nonsterile compounding formulation records, create a centralized computerized database that would increase healthcare staff members' access to formulation records, establish beyond-use dates based on published stability studies, improve quality control, reduce the potential for medication errors related to compounding medications, and (ultimately) improve patient safety.
The relational clinical database: a possible solution to the star wars in registry systems.
Michels, D K; Zamieroski, M
1990-12-01
In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.
Adopting a corporate perspective on databases. Improving support for research and decision making.
Meistrell, M; Schlehuber, C
1996-03-01
The Veterans Health Administration (VHA) is at the forefront of designing and managing health care information systems that accommodate the needs of clinicians, researchers, and administrators at all levels. Rather than using one single-site, centralized corporate database, VHA has constructed several large databases with different configurations to meet the needs of users with different perspectives. The largest VHA database is the Decentralized Hospital Computer Program (DHCP), a multisite, distributed data system that uses decoupled hospital databases. The centralization of DHCP policy has promoted data coherence, whereas the decentralization of DHCP management has permitted system development to be done with maximum relevance to the users' local practices. A more recently developed VHA data system, the Event Driven Reporting system (EDR), uses multiple, highly coupled databases to provide workload data at facility, regional, and national levels. The EDR automatically posts a subset of DHCP data to local and national VHA management. The development of the EDR illustrates how adoption of a corporate perspective can offer significant database improvements at reasonable cost and with modest impact on the legacy system.
A DICOM based radiotherapy plan database for research collaboration and reporting
NASA Astrophysics Data System (ADS)
Westberg, J.; Krogh, S.; Brink, C.; Vogelius, I. R.
2014-03-01
Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user-defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user-defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.
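The ingest step such a system needs can be sketched briefly: read an incoming DICOM RT object and index its identifiers in an SQL table so plans can later be grouped into user-defined cohorts. This sketch assumes the third-party pydicom package; the table layout is an illustrative guess, not the authors' schema.

```python
import sqlite3
import pydicom  # third-party; pip install pydicom

db = sqlite3.connect("rtplans.db")
db.execute("""CREATE TABLE IF NOT EXISTS rt_objects (
    sop_uid TEXT PRIMARY KEY, patient_id TEXT, modality TEXT, path TEXT)""")

def ingest(path):
    """Index one received DICOM object; the header alone is sufficient."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    db.execute("INSERT OR REPLACE INTO rt_objects VALUES (?, ?, ?, ?)",
               (ds.SOPInstanceUID, ds.PatientID, ds.Modality, path))
    db.commit()
```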
75 FR 60415 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-30
... computer systems and networks. This information collection is required to obtain the necessary data... card reflecting those benefits and privileges, and to maintain a centralized database of the eligible...
The NCBI BioSystems database.
Geer, Lewis Y.; Marchler-Bauer, Aron; Geer, Renata C.; Han, Lianyi; He, Jane; He, Siqian; Liu, Chunlei; Shi, Wenyao; Bryant, Stephen H.
2010-01-01
The NCBI BioSystems database, found at http://www.ncbi.nlm.nih.gov/biosystems/, centralizes and cross-links existing biological systems databases, increasing their utility and target audience by integrating their pathways and systems into NCBI resources. This integration allows users of NCBI’s Entrez databases to quickly categorize proteins, genes and small molecules by metabolic pathway, disease state or other BioSystem type, without requiring time-consuming inference of biological relationships from the literature or multiple experimental datasets. PMID:19854944
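For readers curious about the programmatic route, a hedged sketch: Entrez databases, BioSystems included, were queryable through NCBI's public E-utilities. The call below uses the standard esearch interface; note that the BioSystems database has since been retired, so the database name is historical.

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_biosystems(term, retmax=5):
    """Return UIDs of BioSystems records matching a free-text term."""
    query = urlencode({"db": "biosystems", "term": term,
                       "retmode": "json", "retmax": retmax})
    with urlopen(f"{BASE}?{query}") as resp:
        return json.load(resp)["esearchresult"]["idlist"]

print(search_biosystems("glycolysis"))
```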
[Research on Zhejiang blood information network and management system].
Yan, Li-Xing; Xu, Yan; Meng, Zhong-Hua; Kong, Chang-Hong; Wang, Jian-Min; Jin, Zhen-Liang; Wu, Shi-Ding; Chen, Chang-Shui; Luo, Ling-Fei
2007-02-01
This research aimed to develop the first province-level centralized blood information database and real-time communication network in China. Multiple technologies were used, including separate operation of local area network databases, a real-time data concentration and distribution mechanism, off-site (allopatric) backup, and an optical fiber virtual private network (VPN). As a result, a centralized blood information database and management system covering all of Zhejiang province was successfully constructed, and real-time exchange of blood data was realised. In conclusion, its implementation promotes volunteer blood donation and ensures blood safety in Zhejiang, and in particular strengthens the rapid response to public health emergencies. This project lays the foundation for centralized testing and allotment among blood banks in Zhejiang, and can serve as a reference for contemporary blood bank information systems in China.
Centralized Data Management in a Multicountry, Multisite Population-based Study.
Rahman, Qazi Sadeq-ur; Islam, Mohammad Shahidul; Hossain, Belal; Hossain, Tanvir; Connor, Nicholas E; Jaman, Md Jahiduj; Rahman, Md Mahmudur; Ahmed, A S M Nawshad Uddin; Ahmed, Imran; Ali, Murtaza; Moin, Syed Mamun Ibne; Mullany, Luke; Saha, Samir K; El Arifeen, Shams
2016-05-01
A centralized data management system was developed for data collection and processing for the Aetiology of Neonatal Infection in South Asia (ANISA) study. ANISA is a longitudinal cohort study involving neonatal infection surveillance and etiology detection in multiple sites in South Asia. The primary goal of designing such a system was to collect and store data from different sites in a standardized way to pool the data for analysis. We designed the data management system centrally and implemented it to enable data entry at individual sites. This system uses validation rules and audits that reduce errors. The study sites employ a dual data entry method to minimize keystroke errors. They upload collected data weekly to a central server via the Internet to create a pooled central database. Any inconsistent data identified in the central database are flagged and corrected after discussion with the relevant site. The ANISA Data Coordination Centre in Dhaka provides technical support for operating, maintaining and updating the data management system centrally. Password-protected login identifications and audit trails are maintained for the management system to ensure the integrity and safety of stored data. Centralized management of the ANISA database makes it possible to use common data capture forms (DCFs), adapted to site-specific contextual requirements. DCFs and data entry interfaces allow on-site data entry. This reduces the workload as DCFs do not need to be shipped to a single location for entry. It also improves data quality, as all data collected for ANISA go through the same quality check and cleaning process.
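The dual data entry method mentioned above reduces keystroke errors by comparing two independent passes over the same paper form; a minimal sketch of that comparison, with hypothetical field names, follows.

```python
def dual_entry_diff(entry1: dict, entry2: dict) -> list:
    """Return (field, first_pass, second_pass) for each disagreement."""
    return [(k, entry1.get(k), entry2.get(k))
            for k in sorted(set(entry1) | set(entry2))
            if entry1.get(k) != entry2.get(k)]

# Hypothetical form fields; the mismatch is flagged for review before
# the weekly upload to the central server.
print(dual_entry_diff(
    {"infant_id": "BD-0042", "temp_c": "37.2", "resp_rate": "48"},
    {"infant_id": "BD-0042", "temp_c": "37.2", "resp_rate": "84"}))
# -> [('resp_rate', '48', '84')]: transposed digits caught
```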
Database System Design and Implementation for Marine Air-Traffic-Controller Training
2017-06-01
NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Thesis; approved for public release, distribution is unlimited. Title: Database System Design and Implementation for Marine Air-Traffic-Controller Training. Abstract (truncated): This project focused on the design, development, and implementation of a centralized...
XML: James Webb Space Telescope Database Issues, Lessons, and Status
NASA Technical Reports Server (NTRS)
Detter, Ryan; Mooney, Michael; Fatig, Curtis
2003-01-01
This paper will present the current concept using extensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database, independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct laboratories, each geographically dispersed, will have local database tools with an XML database. Each of these laboratories' database tools will be used for exporting and importing data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI), in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow for the upgrade, addition, or changing of individual items without affecting the entire ground system. Also, using XML should allow for altering the import and export formats needed by the various elements, tracking the verification/validation of each database item, allowing many organizations to provide database inputs, and merging the many existing database processes into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS) software, which is often limiting. In our review of the database requirements and the COTS software available, only very expensive COTS software would meet 90% of the requirements. Even with the high projected initial cost of COTS, the development and support costs for custom code over the 19-year mission period were forecasted to be higher than the total licensing costs. A group did look at reusing existing database tools and formats. If the JWST database were already in a mature state, reuse would make sense, but with the database still needing to handle the addition of different types of command and telemetry structures, the definition of new spacecraft systems, and input from and export to systems that have not yet been defined, XML provided the flexibility desired. It remains to be determined whether the XML database will reduce the overall cost of the JWST mission.
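The flexibility argument is easy to demonstrate: XML-encoded definitions can gain new attributes without breaking existing readers, which simply ignore fields they do not know. The element and attribute names below are invented for illustration and are not the actual JWST schema.

```python
import xml.etree.ElementTree as ET

XML_DOC = """<database version="0.1">
  <telemetry mnemonic="TMP_A1" units="K" subsystem="ISIM"/>
  <command mnemonic="HTR_ON" opcode="0x1A2B"/>
</database>"""

root = ET.fromstring(XML_DOC)
for item in root:
    # Each tool reads only the attributes it understands; new ones added
    # later (e.g. a verification status) would be silently skipped here.
    print(item.tag, item.get("mnemonic"), item.get("units", "-"))
```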
Small Business Innovations (Integrated Database)
NASA Technical Reports Server (NTRS)
1992-01-01
Because of the diversity of NASA's information systems, it was necessary to develop DAVID as a central database management system. Under a Small Business Innovation Research (SBIR) grant, Ken Wanderman and Associates, Inc. designed software tools enabling scientists to interface with DAVID and commercial database management systems, as well as artificial intelligence programs. The software has been installed at a number of data centers and is commercially available.
Mission and Assets Database
NASA Technical Reports Server (NTRS)
Baldwin, John; Zendejas, Silvino; Gutheinz, Sandy; Borden, Chester; Wang, Yeou-Fang
2009-01-01
Mission and Assets Database (MADB) Version 1.0 is an SQL database system with a Web user interface to centralize information. The database stores flight project support resource requirements, view periods, antenna information, schedule, and forecast results for use in mid-range and long-term planning of Deep Space Network (DSN) assets.
BIO-Plex Information System Concept
NASA Technical Reports Server (NTRS)
Jones, Harry; Boulanger, Richard; Arnold, James O. (Technical Monitor)
1999-01-01
This paper describes a suggested design for an integrated information system for the proposed BIO-Plex (Bioregenerative Planetary Life Support Systems Test Complex) at Johnson Space Center (JSC), including distributed control systems, central control, networks, database servers, personal computers and workstations, applications software, and external communications. The system will have an open commercial computing and networking architecture. The network will provide automatic real-time transfer of information to database server computers which perform data collection and validation. This information system will support integrated, data-sharing applications for everything from system alarms to management summaries. Most existing complex process control systems have information gaps between the different real-time subsystems, between these subsystems and the central controller, between the central controller and system-level planning and analysis application software, and between the system-level applications and management overview reporting. An integrated information system is vitally necessary as the basis for the integration of planning, scheduling, modeling, monitoring, and control, which will allow improved monitoring and control based on timely, accurate and complete data. Data describing the system configuration and the real-time processes can be collected, checked and reconciled, analyzed, and stored in database servers that can be accessed by all applications. The required technology is available. The only opportunity to design a distributed, nonredundant, integrated system is before it is built. Retrofit is extremely difficult and costly.
7 CFR 274.3 - Retailer management.
Code of Federal Regulations, 2012 CFR
2012-01-01
... retailer, and it must include acceptable privacy and security features. Such systems shall only be... terminals that are capable of relaying electronic transactions to a central database computer for... specifications prior to implementation of the EBT system to enable third party processors to access the database...
Code of Federal Regulations, 2012 CFR
2012-10-01
... information as recorded in the Central Contractor Registration (CCR) database at the time of award. (2) When... record is active in the CCR database; and (ii) The contractor's Data Universal Numbering System (DUNS... database, the contracting officer shall process a novation or change-of-name agreement, or an address...
Code of Federal Regulations, 2010 CFR
2010-10-01
... information as recorded in the Central Contractor Registration (CCR) database at the time of award. (2) When... record is active in the CCR database; and (ii) The contractor's Data Universal Numbering System (DUNS... database, the contracting officer shall process a novation or change-of-name agreement, or an address...
Code of Federal Regulations, 2011 CFR
2011-10-01
... information as recorded in the Central Contractor Registration (CCR) database at the time of award. (2) When... record is active in the CCR database; and (ii) The contractor's Data Universal Numbering System (DUNS... database, the contracting officer shall process a novation or change-of-name agreement, or an address...
Methods for structuring scientific knowledge from many areas related to aging research.
Zhavoronkov, Alex; Cantor, Charles R
2011-01-01
Aging and age-related disease represents a substantial quantity of current natural, social and behavioral science research efforts. Presently, no centralized system exists for tracking aging research projects across numerous research disciplines. The multidisciplinary nature of this research complicates the understanding of underlying project categories, the establishment of project relations, and the development of a unified project classification scheme. We have developed a highly visual database, the International Aging Research Portfolio (IARP), available at AgingPortfolio.org, to address this issue. The database integrates information on research grants, peer-reviewed publications, and issued patent applications from multiple sources. Additionally, the database uses flexible project classification mechanisms and tools for analyzing project associations and trends. This system enables scientists to search the centralized project database, to classify and categorize aging projects, and to analyze the funding aspects across multiple research disciplines. The IARP is designed to provide improved allocation and prioritization of scarce research funding, to reduce project overlap and improve scientific collaboration, thereby accelerating scientific and medical progress in a rapidly growing area of research. Grant applications often precede publications, and some grants do not result in publications; thus, this system provides an earlier and broader view of research activity in many research disciplines. This project is a first attempt to provide a centralized database system for research grants and to categorize aging research projects into multiple subcategories utilizing both advanced machine algorithms and a hierarchical environment for scientific collaboration.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-29
... Federal Acquisition Regulation; Updates to Contract Reporting and Central Contractor Registration AGENCIES... Procurement Data System (FPDS). Additionally, changes are proposed for the clauses requiring contractor registration in the Central Contractor Registration (CCR) database and DUNS number reporting. DATES: Interested...
CNS sites cooperate to detect duplicate subjects with a clinical trial subject registry.
Shiovitz, Thomas M; Wilcox, Charles S; Gevorgyan, Lilit; Shawkat, Adnan
2013-02-01
To report the results of the first 1,132 subjects in a pilot project where local central nervous system trial sites collaborated in the use of a subject database to identify potential duplicate subjects. Central nervous system sites in Los Angeles and Orange County, California, were contacted by the lead author to seek participation in the project. CTSdatabase, a central nervous system-focused trial subject registry, was utilized to track potential subjects at pre-screen. Subjects signed an institutional review board-approved authorization prior to participation, and site staff entered their identifiers by accessing a website. Sites were prompted to communicate with each other or with the database administrator when a match occurred between a newly entered subject and a subject already in the database. Between October 30, 2011, and August 31, 2012, 1,132 subjects were entered at nine central nervous system sites. Subjects continue to be entered, and more sites are anticipated to begin participation by the time of publication. Initially, there were concerns at a few sites over patient acceptance, financial implications, and/or legal and privacy issues, but these were eventually overcome. Patient acceptance was estimated to be above 95 percent. Duplicate Subjects (those that matched several key identifiers with subjects at different sites) made up 7.78 percent of the sample and Certain Duplicates (matching identifiers with a greater than 1 in 10 million likelihood of occurring by chance in the general population) accounted for 3.45 percent of pre-screens entered into the database. Many of these certain duplicates were not consented for studies because of the information provided by the registry. The use of a clinical trial subject registry and cooperation between central nervous system trial sites can reduce the number of duplicate and professional subjects entering clinical trials. To be fully effective, a trial subject database could be integrated into protocols across pharmaceutical companies, thereby mandating site participation and increasing the likelihood that duplicate subjects will be removed before they enter (and negatively affect) clinical trials.
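One plausible way to implement such matching without circulating raw identifiers is to compare salted hashes and declare a duplicate when enough of them coincide. The fields, salt handling and threshold below are illustrative assumptions, not CTSdatabase's actual method.

```python
import hashlib

SALT = b"registry-wide-shared-salt"  # hypothetical shared secret

def fingerprint(subject: dict) -> set:
    """Salted hash of each identifier, so raw values never leave the site."""
    return {hashlib.sha256(SALT + f"{k}={v}".lower().encode()).hexdigest()
            for k, v in subject.items()}

def is_probable_duplicate(a: dict, b: dict, threshold: int = 3) -> bool:
    return len(fingerprint(a) & fingerprint(b)) >= threshold

s1 = {"dob": "1975-04-02", "phone": "5551234", "initials": "tm", "zip": "90210"}
s2 = {"dob": "1975-04-02", "phone": "5551234", "initials": "tm", "zip": "90049"}
print(is_probable_duplicate(s1, s2))  # True: 3 of 4 identifiers match
```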
Interactive access to forest inventory data for the South Central United States
William H. McWilliams
1990-01-01
On-line access to USDA, Forest Service successive forest inventory data for the South Central United States is provided by two computer systems. The Easy Access to Forest Inventory and Analysis Tables program (EZTAB) produces a set of tables for specific geographic areas. The Interactive Graphics and Retrieval System (INGRES) is a database management system that...
Plant Genome Resources at the National Center for Biotechnology Information
Wheeler, David L.; Smith-White, Brian; Chetvernin, Vyacheslav; Resenchuk, Sergei; Dombrowski, Susan M.; Pechous, Steven W.; Tatusova, Tatiana; Ostell, James
2005-01-01
The National Center for Biotechnology Information (NCBI) integrates data from more than 20 biological databases through a flexible search and retrieval system called Entrez. A core Entrez database, Entrez Nucleotide, includes GenBank and is tightly linked to the NCBI Taxonomy database, the Entrez Protein database, and the scientific literature in PubMed. A suite of more specialized databases for genomes, genes, gene families, gene expression, gene variation, and protein domains dovetails with the core databases to make Entrez a powerful system for genomic research. Linked to the full range of Entrez databases is the NCBI Map Viewer, which displays aligned genetic, physical, and sequence maps for eukaryotic genomes including those of many plants. A specialized plant query page allows maps from all plant genomes covered by the Map Viewer to be searched in tandem to produce a display of aligned maps from several species. PlantBLAST searches against the sequences shown in the Map Viewer allow BLAST alignments to be viewed within a genomic context. In addition, precomputed sequence similarities, such as those for proteins offered by BLAST Link, enable fluid navigation from unannotated to annotated sequences, quickening the pace of discovery. NCBI Web pages for plants, such as Plant Genome Central, complete the system by providing centralized access to NCBI's genomic resources as well as links to organism-specific Web pages beyond NCBI. PMID:16010002
Space Station Freedom environmental database system (FEDS) for MSFC testing
NASA Technical Reports Server (NTRS)
Story, Gail S.; Williams, Wendy; Chiu, Charles
1991-01-01
The Water Recovery Test (WRT) at Marshall Space Flight Center (MSFC) is the first demonstration of integrated water recovery systems for potable and hygiene water reuse as envisioned for Space Station Freedom (SSF). In order to satisfy the safety and health requirements placed on the SSF program and facilitate test data assessment, an extensive laboratory analysis database was established to provide a central archive and data retrieval function. The database is required to store analysis results for physical, chemical, and microbial parameters measured from water, air and surface samples collected at various locations throughout the test facility. The Oracle Relational Database Management System (RDBMS) was utilized to implement a secured on-line information system with the ECLSS WRT program as the foundation for this system. The database is supported on a VAX/VMS 8810 series mainframe and is accessible from the Marshall Information Network System (MINS). This paper summarizes the database requirements, system design, interfaces, and future enhancements.
Legal Medicine Information System using CDISC ODM.
Kiuchi, Takahiro; Yoshida, Ken-ichi; Kotani, Hirokazu; Tamaki, Keiji; Nagai, Hisashi; Harada, Kazuki; Ishikawa, Hirono
2013-11-01
We have developed a new database system for forensic autopsies, called the Legal Medicine Information System, using the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM). This system comprises two subsystems, namely the Institutional Database System (IDS) located in each institute and containing personal information, and the Central Anonymous Database System (CADS) located in the University Hospital Medical Information Network Center containing only anonymous information. CDISC ODM is used as the data transfer protocol between the two subsystems. Using the IDS, forensic pathologists and other staff can register and search for institutional autopsy information, print death certificates, and extract data for statistical analysis. They can also submit anonymous autopsy information to the CADS semi-automatically. This reduces the burden of double data entry, the time-lag of central data collection, and anxiety regarding legal and ethical issues. Using the CADS, various studies on the causes of death can be conducted quickly and easily, and the results can be used to prevent similar accidents, diseases, and abuse.
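Because CDISC ODM is an XML standard, the IDS-to-CADS transfer can be pictured as serializing an anonymized record roughly as below. The OIDs and item names are invented, and a real ODM document also carries its namespace, version and creation-date attributes, omitted here for brevity.

```python
import xml.etree.ElementTree as ET

odm = ET.Element("ODM", FileType="Snapshot", FileOID="LMIS-0001")
data = ET.SubElement(odm, "ClinicalData", StudyOID="LMIS",
                     MetaDataVersionOID="1")
subject = ET.SubElement(data, "SubjectData", SubjectKey="ANON-4711")
form = ET.SubElement(subject, "FormData", FormOID="AUTOPSY")
group = ET.SubElement(form, "ItemGroupData", ItemGroupOID="FINDINGS")
ET.SubElement(group, "ItemData", ItemOID="CAUSE_OF_DEATH", Value="drowning")

print(ET.tostring(odm, encoding="unicode"))
```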
The Exchange Data Communication System based on Centralized Database for the Meat Industry
NASA Astrophysics Data System (ADS)
Kobayashi, Yuichi; Taniguchi, Yoji; Terada, Shuji; Komoda, Norihisa
We propose applying an EDI system that is based on a centralized database and supports conversion of code data to the meat industry. This system makes it possible to share exchange data on beef between enterprises, from producers to retailers, by using Web EDI technology. To convert codes efficiently, a sender's code is converted directly to a receiver's code using a code map. A system implementing this function went into operation in September 2004, and twelve enterprises, including retailers, processing traders, and wholesalers, were using it as of June 2005. In this system, the number of code maps, which determines the introduction cost of the code conversion function, was lower than the theoretical value and close to the case in which a standard code mediates the exchange.
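The code-map mechanism reduces to a per-pair lookup table: a sender's item code is translated directly to the receiver's code, rather than routed through a shared standard code. The enterprise and item codes below are invented.

```python
CODE_MAPS = {  # (sender, receiver) -> {sender code: receiver code}
    ("producerA", "retailerB"): {"BEEF-LOIN-01": "RB-1001",
                                 "BEEF-RIB-02": "RB-1002"},
}

def convert(sender: str, receiver: str, code: str) -> str:
    try:
        return CODE_MAPS[(sender, receiver)][code]
    except KeyError:
        raise ValueError(f"no map entry for {code!r} "
                         f"between {sender} and {receiver}")

print(convert("producerA", "retailerB", "BEEF-LOIN-01"))  # RB-1001
```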
Pape-Haugaard, Louise; Frank, Lars
2011-01-01
A major obstacle to ensuring ubiquitous information is the use of heterogeneous systems in eHealth. The objective of this paper is to illustrate how an architecture for distributed eHealth databases can be designed without losing the characteristic features of traditional sustainable databases. The approach is first to explain the traditional architecture of central and homogeneous distributed database computing, then to present a possible architectural framework for obtaining sustainability across disparate systems, i.e. heterogeneous databases, concluding with a discussion. It is shown that, through a method of using relaxed ACID properties on a service-oriented architecture, it is possible to achieve the data consistency that is essential for sustainable interoperability.
Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)
NASA Technical Reports Server (NTRS)
Doyle, Monica; ONeil, Daniel A.; Christensen, Carissa B.
2005-01-01
The Advanced Technology Lifecycle Analysis System (ATLAS) is a decision support tool designed to aid program managers and strategic planners in determining how to invest technology research and development dollars. It is an Excel-based modeling package that allows a user to build complex space architectures and evaluate the impact of various technology choices. ATLAS contains system models, cost and operations models, a campaign timeline and a centralized technology database. Technology data for all system models is drawn from a common database, the ATLAS Technology Tool Box (TTB). The TTB provides a comprehensive, architecture-independent technology database that is keyed to current and future timeframes.
Active In-Database Processing to Support Ambient Assisted Living Systems
de Morais, Wagner O.; Lundström, Jens; Wickström, Nicholas
2014-01-01
As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare. PMID:25120164
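The active-database pattern the paper builds on can be shown in a few lines of portable SQL, here via SQLite: a trigger reacts to each inserted sensor event and materializes a derived event inside the database, so raw data never leaves it. The schema and rule are illustrative, not the authors' implementation.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE sensor_events (ts TEXT, sensor TEXT, value INTEGER);
CREATE TABLE alerts (ts TEXT, kind TEXT);
-- React inside the DBMS: a zeroed bed-pressure reading becomes an alert.
CREATE TRIGGER bed_exit AFTER INSERT ON sensor_events
WHEN NEW.sensor = 'bed_pressure' AND NEW.value = 0
BEGIN
    INSERT INTO alerts VALUES (NEW.ts, 'bed-exit');
END;""")

db.execute("INSERT INTO sensor_events VALUES ('02:13', 'bed_pressure', 0)")
print(db.execute("SELECT * FROM alerts").fetchall())  # [('02:13', 'bed-exit')]
```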
Assessing animal welfare in sow herds using data on meat inspection, medication and mortality.
Knage-Rasmussen, K M; Rousing, T; Sørensen, J T; Houe, H
2015-03-01
This paper aims to contribute to the development of a cost-effective alternative to expensive on-farm animal-based welfare assessment systems. The objective of the study was to design an animal welfare index based on central database information (DBWI) and to validate it against an animal welfare index based on on-farm animal-based measurements (AWI). Data on 63 Danish sow herds with herd sizes of 80 to 2500 sows and an average herd size of 501 were collected from three central databases containing: meat inspection data collected at animal level in the abattoir, mortality data at herd level from the rendering plants of DAKA, and medicine records at both herd and animal-group level (sow with piglets, weaners or finishers) from the central database Vetstat. Selected measurements taken from these central databases were used to construct the DBWI. The relative welfare impacts of both individual database measurements and the databases overall were assigned in consultation with a panel consisting of 12 experts. The experts were drawn from production advisory activities, animal science and, in one case, an animal welfare organization. The expert panel weighted each measurement on a scale from 1 (not important) to 5 (very important). The experts also gave opinions on the relative weightings of the measurements for each of the three databases by stating a relative weight of each database in the DBWI. On the basis of this, the aggregated DBWI was normalized. The aggregation of AWI was based on a weighted summary of herd prevalences of 20 clinical and behavioural measurements originating from a one-day data collection. AWI did not show a linear dependency on DBWI. This suggests that DBWI is not suited to replace an animal welfare index using on-farm animal-based measurements.
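The aggregation reduces to weighted arithmetic: each measurement carries an expert weight from 1 to 5, each database carries a relative weight, and the DBWI is the weight-normalized sum. All numbers below are invented for illustration, not the study's actual weights.

```python
def dbwi(databases: dict) -> float:
    """databases: {name: (db_weight, [(measurement_value, expert_weight)])}"""
    total_db_weight = sum(w for w, _ in databases.values())
    index = 0.0
    for db_weight, measures in databases.values():
        weighted = sum(v * w for v, w in measures)
        norm = sum(w for _, w in measures)
        index += (db_weight / total_db_weight) * (weighted / norm)
    return index

print(dbwi({"meat_inspection": (3, [(0.12, 5), (0.30, 2)]),
            "mortality":       (4, [(0.05, 4)]),
            "medication":      (3, [(0.22, 3)])}))
```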
Astaras, Alexander; Arvanitidou, Marina; Chouvarda, Ioanna; Kilintzis, Vassilis; Koutkias, Vassilis; Sanchez, Eduardo Monton; Stalidis, George; Triantafyllidis, Andreas; Maglaveras, Nicos
2008-01-01
A flexible, scalable and cost-effective medical telemetry system is described for monitoring sleep-related disorders in the home environment. The system was designed and built for real-time data acquisition and processing, allowing for additional use in intensive care unit scenarios where rapid medical response is required in case of emergency. It comprises a body area network of Zigbee-compatible wireless sensors worn by the subject, a central database repository residing in the medical centre and thin-client workstations located at the subject's home and in the clinician's office. The system supports heterogeneous setup configurations, involving a variety of data acquisition sensors to suit several medical applications. All telemetry data are securely transferred and stored in the central database under the clinicians' ownership and control.
Integrative medicine for managing the symptoms of lupus nephritis
Choi, Tae-Young; Jun, Ji Hee; Lee, Myeong Soo
2018-01-01
Background: Integrative medicine is claimed to improve symptoms of lupus nephritis. No systematic reviews have been performed on the application of integrative medicine for lupus nephritis in patients with systemic lupus erythematosus (SLE). Thus, this review will aim to evaluate the current evidence on the efficacy of integrative medicine for the management of lupus nephritis in patients with SLE. Methods and analyses: The following electronic databases will be searched for studies published from their dates of inception to February 2018: Medline, EMBASE and the Cochrane Central Register of Controlled Trials (CENTRAL), as well as 6 Korean medical databases (KoreaMed, the Oriental Medicine Advanced Search Integrated System [OASIS], DBpia, the Korean Medical Database [KM base], the Research Information Service System [RISS], and the Korean Studies Information Services System [KISS]), and 1 Chinese medical database (the China National Knowledge Infrastructure [CNKI]). Study selection, data extraction, and assessment will be performed independently by 2 researchers. The risk of bias (ROB) will be assessed using the Cochrane ROB tool. Dissemination: This systematic review will be published in a peer-reviewed journal and disseminated both electronically and in print. The review will be updated to inform and guide healthcare practice and policy. Trial registration number: PROSPERO 2018 CRD42018085205. PMID:29595669
A Data Analysis Expert System For Large Established Distributed Databases
NASA Astrophysics Data System (ADS)
Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick
1987-05-01
The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large isolated, incompatible NASA databases for the purpose of assisting the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups. In addition, current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments which allow cross-vendor compatibility. The goal of the DANMDS system is commensurate with the central dilemma confronting most large companies and institutions in America, the retrieval of information from large, established, incompatible database systems. The DANMDS system implementation would represent a significant first step toward this problem's resolution.
The UNIX/XENIX Advantage: Applications in Libraries.
ERIC Educational Resources Information Center
Gordon, Kelly L.
1988-01-01
Discusses the application of the UNIX/XENIX operating system to support administrative office automation functions--word processing, spreadsheets, database management systems, electronic mail, and communications--at the Central Michigan University Libraries. Advantages and disadvantages of the XENIX operating system and system configuration are…
Creation of the NaSCoRD Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denman, Matthew R.; Jankovsky, Zachary Kyle; Stuart, William
This report was written as part of a United States Department of Energy (DOE), Office of Nuclear Energy, Advanced Reactor Technologies program funded project to re-create the capabilities of the legacy Centralized Reliability Database Organization (CREDO) database. The CREDO database provided a record of component design and performance documentation across various systems that used sodium as a working fluid. Regaining this capability will allow the DOE complex and the domestic sodium reactor industry to better understand how previous systems were designed and built, for use in improving the design and operations of future loops. The contents of this report include: an overview of the current state of domestic sodium reliability databases; a summary of the ongoing effort to improve, understand, and process the CREDO information; a summary of the initial efforts to develop a unified sodium reliability database called the Sodium System Component Reliability Database (NaSCoRD); and an explanation of how potential users can access the domestic sodium reliability databases and of the type of information that can be retrieved from them.
NASA Astrophysics Data System (ADS)
Waki, Masaki; Uruno, Shigenori; Ohashi, Hiroyuki; Manabe, Tetsuya; Azuma, Yuji
We propose an optical fiber connection navigation system that uses visible light communication for an integrated distribution module in a central office. The system maintains an accurate database, requires less-skilled work to operate and eliminates human error. For the connection and removal of optical fiber cords, the system can reduce working time by up to 88.0% compared with conventional work, without human error, and it is economical as regards installation and operation.
Kim, Chang-Gon; Mun, Su-Jeong; Kim, Ka-Na; Shin, Byung-Cheul; Kim, Nam-Kwen; Lee, Dong-Hyo; Lee, Jung-Han
2016-05-13
Manual therapy is the non-surgical conservative management of musculoskeletal disorders using the practitioner's hands on the patient's body for diagnosing and treating disease. The aim of this study is to systematically review trial-based economic evaluations of manual therapy relative to other interventions used for the management of musculoskeletal diseases. Randomised clinical trials (RCTs) on the economic evaluation of manual therapy for musculoskeletal diseases will be included in the review. The following databases will be searched from their inception: Medline, Embase, Cochrane Central Register of Controlled Trials (CENTRAL), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Econlit, Mantis, Index to Chiropractic Literature, Science Citation Index, Social Science Citation Index, Allied and Complementary Medicine Database (AMED), Cochrane Database of Systematic Reviews (CDSR), National Health Service Database of Abstracts of Reviews of Effects (NHS DARE), National Health Service Health Technology Assessment Database (NHS HTA), National Health Service Economic Evaluation Database (NHS EED), five Korean medical databases (Oriental Medicine Advanced Searching Integrated System (OASIS), Research Information Service System (RISS), DBPIA, Korean Traditional Knowledge Portal (KTKP) and KoreaMed) and three Chinese databases (China National Knowledge Infrastructure (CNKI), VIP and Wanfang). The evidence for the cost-effectiveness, cost-utility and cost-benefit of manual therapy for musculoskeletal diseases will be assessed as the primary outcome. Health-related quality of life and adverse effects will be assessed as secondary outcomes. We will critically appraise the included studies using the Cochrane risk of bias tool and the Drummond checklist. Results will be summarised using Slavin's qualitative best-evidence synthesis approach. The results of the study will be disseminated via a peer-reviewed journal and/or conference presentations. PROSPERO CRD42015026757.
[Plug-in Based Centralized Control System in Operating Rooms].
Wang, Yunlong
2017-05-30
Centralized equipment control in an operating room (OR) is crucial to an efficient workflow in the OR. To achieve centralized control, an integrated OR needs a control panel design that can appropriately incorporate equipment from different manufacturers with various connecting ports and controls. Here we propose to achieve equipment integration using plug-in modules. Each OR will be equipped with a dynamic plug-in control panel containing physically removable connecting ports. Matching outlets will be installed onto the control panels of each piece of equipment used at any given time. This dynamic control panel will be backed by a database of plug-in modules that can connect any two types of connecting ports common among medical equipment manufacturers. The correct connecting modules will be resolved and invoked dynamically using reflection. This database will be updated regularly to include new connecting ports on the market, making the system easy to maintain, update, and expand, and keeping it relevant as new equipment is developed. Together, the physical panel and the database will achieve centralized equipment control that can be easily adapted to any equipment in the OR.
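The reflection step can be sketched generically: the panel resolves a connector type to a module and class name at runtime, so a new plug-in driver is just a new registry row. The registry entries are hypothetical; the demo row points at a standard-library class purely so the lookup runs.

```python
import importlib

PLUGIN_REGISTRY = {
    # connector type -> (module, class); new equipment adds a row here
    "demo": ("json", "JSONEncoder"),
}

def load_driver(connector_type: str):
    module_name, class_name = PLUGIN_REGISTRY[connector_type]
    module = importlib.import_module(module_name)  # reflection: load by name
    return getattr(module, class_name)             # reflection: resolve class

print(load_driver("demo"))  # <class 'json.encoder.JSONEncoder'>
```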
Worl, R.G.; Johnson, K.M.
1995-01-01
The paper version of Map Showing Geologic Terranes of the Hailey 1°x2° Quadrangle and the western part of the Idaho Falls 1°x2° Quadrangle, south-central Idaho was compiled by Ron Worl and Kate Johnson in 1995. The plate was compiled on a 1:250,000-scale topographic base map. TechniGraphic System, Inc. of Fort Collins, Colorado digitized this map under contract for N. Shock. G. Green edited and prepared the digital version for publication as a geographic information system database. The digital geologic map database can be queried in many ways to produce a variety of geologic maps.
Library Automation in the Netherlands and Pica.
ERIC Educational Resources Information Center
Bossers, Anton; Van Muyen, Martin
1984-01-01
Describes the Pica Library Automation Network (originally the Project for Integrated Catalogue Automation), which is based on a centralized bibliographic database. Highlights include the Pica conception of library automation, online shared cataloging system, circulation control system, acquisition system, and online Dutch union catalog with…
NASA Technical Reports Server (NTRS)
Ramirez, Eric; Gutheinz, Sandy; Brison, James; Ho, Anita; Allen, James; Ceritelli, Olga; Tobar, Claudia; Nguyen, Thuykien; Crenshaw, Harrel; Santos, Roxann
2008-01-01
Supplier Management System (SMS) allows for a consistent, agency-wide performance rating system for suppliers used by NASA. This version (2.0) combines separate databases into one central database that allows for the sharing of supplier data. Information extracted from the NBS/Oracle database can be used to generate ratings. Also, supplier ratings can now be generated in the areas of cost, product quality, delivery, and audit data. Supplier data can be charted based on real-time user input. Based on these individual ratings, an overall rating can be generated. Data that normally would be stored in multiple databases, each requiring its own log-in, are now readily available and easily accessible with only one log-in required. Additionally, the database can accommodate the storage and display of quality-related data that can be analyzed and used in the supplier procurement decision-making process. Moreover, the software allows for a Closed-Loop System (supplier feedback), as well as the capability to communicate with other federal agencies.
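The abstract does not specify how the overall rating is derived from the four area ratings; a minimal sketch, assuming a simple weighted average with illustrative weights:

    # Hypothetical per-area ratings on a 1-5 scale, matching the four
    # areas named above; the weights are an assumption for illustration.
    ratings = {"cost": 4.2, "product_quality": 3.8, "delivery": 4.5, "audit": 4.0}
    weights = {"cost": 0.25, "product_quality": 0.35, "delivery": 0.25, "audit": 0.15}

    overall = sum(ratings[area] * weights[area] for area in ratings)
    print(f"overall supplier rating: {overall:.2f}")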
Database on Demand: insight how to build your own DBaaS
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio
2015-12-01
At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, engines from three major RDBMS (relational database management system) vendors are offered. In this article we present the current status of the service after almost three years of operations, some insight into our software re-engineering, and its near-future evolution.
Besstrashnova, Yanina K; Shoshmin, Alexander V; Nosov, Valeriy A
2012-01-01
In December 2011, the first phase of the project aimed at developing an information system for the implementation of individual rehabilitation programs for persons with disabilities was finished in Nizhny Novgorod region of Russia. It included the installation of 40 workstations in the Ministry for Social Policy and 8 institutions of Nizhny Novgorod region. Accumulated data were moved to a new information system based on a distributed database. In 2012, the rest of the regional rehabilitation institutions are to join this information system. A transition to a centralized database is planned.
The comparative effectiveness of conventional and digital image libraries.
McColl, R I; Johnson, A
2001-03-01
Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.
MST radar data-base management
NASA Technical Reports Server (NTRS)
Wickwar, V. B.
1983-01-01
Data management for Mesospheric-Stratospheric-Tropospheric (MST) radars is addressed. An incoherent-scatter radar data base is discussed in terms of purpose, centralization, scope, and nature of the data base management system.
The representation of manipulable solid objects in a relational database
NASA Technical Reports Server (NTRS)
Bahler, D.
1984-01-01
This project is concerned with the interface between database management and solid geometric modeling. The desirability of integrating computer-aided design, manufacture, testing, and management into a coherent system is by now well recognized. One proposed configuration for such a system uses a relational database management system as the central focus; the various other functions are linked through their use of a common data representation in the data manager, rather than communicating pairwise. The goal was to integrate a geometric modeling capability with a generic relational data management system in such a way that well-formed questions can be posed and answered about the performance of the system as a whole. One necessary feature of any such system is simplification for purposes of analysis; this, together with system performance considerations, made unity and simplicity of the data structures used a paramount goal.
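To make the idea concrete, here is a minimal sketch of storing a solid relationally and posing one "well-formed question" against it, using an in-memory SQLite database; the boundary-representation-style schema and all table names are assumptions for illustration, not the report's actual design:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE solid  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE vertex (id INTEGER PRIMARY KEY,
                             solid_id INTEGER REFERENCES solid(id),
                             x REAL, y REAL, z REAL);
    """)
    con.execute("INSERT INTO solid VALUES (1, 'unit_cube')")
    con.executemany(
        "INSERT INTO vertex (solid_id, x, y, z) VALUES (1, ?, ?, ?)",
        [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)],
    )

    # A geometric question answered purely relationally: the axis-aligned
    # bounding box of the solid.
    print(con.execute("""
        SELECT MIN(x), MAX(x), MIN(y), MAX(y), MIN(z), MAX(z)
        FROM vertex WHERE solid_id = 1
    """).fetchone())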
Security in the CernVM File System and the Frontier Distributed Database Caching System
NASA Astrophysics Data System (ADS)
Dykstra, D.; Blomer, J.
2014-06-01
Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
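A minimal sketch of the integrity half of such a scheme, assuming the client holds a catalogue of expected SHA-256 digests for the distributed files; signature verification of the catalogue itself, which both systems perform, is omitted here:

    import hashlib

    # The publisher ships a (signed) catalogue of expected digests; the
    # path and payload below are invented for illustration.
    catalogue = {"conditions/run1.db": hashlib.sha256(b"payload bytes").hexdigest()}

    def verify(path: str, data: bytes) -> bytes:
        """Refuse any data whose digest does not match the catalogue."""
        if hashlib.sha256(data).hexdigest() != catalogue[path]:
            raise ValueError(f"integrity check failed for {path}")
        return data

    verify("conditions/run1.db", b"payload bytes")  # passes; tampered bytes would raise

Because only the digests and their signatures need to be trusted, the data itself can safely travel through untrusted http proxy caches.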
SSCR Automated Manager (SAM) release 1.1 reference manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1988-10-01
This manual provides instructions for using the SSCR Automated Manager (SAM) to manage System Software Change Records (SSCRs) online. SSCRs are forms required to document all system software changes for the Martin Marietta Energy Systems, Inc., Central computer systems. SAM, a program developed at Energy Systems, is accessed through IDMS/R (Integrated Database Management System) on an IBM system.
Development of a forestry government agency enterprise GIS system: a disconnected editing approach
NASA Astrophysics Data System (ADS)
Zhu, Jin; Barber, Brad L.
2008-10-01
The Texas Forest Service (TFS) has developed a geographic information system (GIS) for use by agency personnel in central Texas for managing oak wilt suppression and other landowner assistance programs. This Enterprise GIS system was designed to support multiple concurrent users accessing shared information resources. The disconnected editing approach was adopted in this system to avoid the overhead of maintaining an active connection between TFS central Texas field offices and headquarters, since most field offices operate with commercially provided Internet service. The GIS system entails maintaining a personal geodatabase on each local field office computer. Spatial data from the field is periodically uploaded into a central master geodatabase stored in a Microsoft SQL Server at the TFS headquarters in College Station through the ESRI Spatial Database Engine (SDE). This GIS allows users to work off-line when editing data and requires connecting to the central geodatabase only when needed.
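A minimal sketch of the disconnected-editing pattern, using in-memory SQLite stand-ins for the personal geodatabase and the central master; table and column names are hypothetical:

    import sqlite3

    local = sqlite3.connect(":memory:")     # personal geodatabase (stand-in)
    central = sqlite3.connect(":memory:")   # central master geodatabase (stand-in)
    local.execute("""CREATE TABLE edits (id INTEGER PRIMARY KEY, feature TEXT,
                                         geom TEXT, synced INTEGER DEFAULT 0)""")
    central.execute("CREATE TABLE edits_master (feature TEXT, geom TEXT)")

    # Field staff edit offline; rows accumulate locally.
    local.execute("INSERT INTO edits (feature, geom) "
                  "VALUES ('oak_wilt_site', 'POINT(-98.5 30.3)')")

    def push_pending():
        """Run only when a connection to headquarters is available."""
        rows = local.execute(
            "SELECT id, feature, geom FROM edits WHERE synced = 0").fetchall()
        central.executemany(
            "INSERT INTO edits_master (feature, geom) VALUES (?, ?)",
            [(f, g) for _, f, g in rows])
        local.executemany("UPDATE edits SET synced = 1 WHERE id = ?",
                          [(i,) for i, _, _ in rows])

    push_pending()
    print(central.execute("SELECT COUNT(*) FROM edits_master").fetchone()[0])  # 1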
The Facility Registry System (FRS) is a centrally managed database that identifies facilities, sites or places subject to environmental regulations or of environmental interest. FRS creates high-quality, accurate, and authoritative facility identification records through rigorous...
A perioperative echocardiographic reporting and recording system.
Pybus, David A
2004-11-01
Advances in video capture, compression, and streaming technology, coupled with improvements in central processing unit design and the inclusion of a database engine in the Windows operating system, have simplified the task of implementing a digital echocardiographic recording system. I describe an application that uses these technologies and runs on a notebook computer.
How to maintain blood supply during computer network breakdown: a manual backup system.
Zeiler, T; Slonka, J; Bürgi, H R; Kretschmer, V
2000-12-01
Electronic data management systems using computer network systems and client/server architecture are increasingly used in laboratories and transfusion services. Severe problems arise if there is no network access to the database server and critical functions are not available. We describe a manual backup system (MBS) developed to maintain the delivery of blood products to patients in a hospital transfusion service in case of a computer network breakdown. All data are kept on a central SQL database connected to peripheral workstations in a local area network (LAN). Request entry from wards is performed via machine-readable request forms containing self-adhesive specimen labels with barcodes for test tubes. Data entry occurs on-line by bidirectional automated systems or off-line manually. One of the workstations in the laboratory contains a second SQL database which is frequently and incrementally updated. This workstation is run as a stand-alone, read-only database if the central SQL database is not available. In case of a network breakdown, the time-graded MBS is launched. Patient data, requesting ward and ordered tests/requests are photocopied through a template from the request forms onto special MBS worksheets serving as laboratory journal for manual processing and result reporting (a copy is left in the laboratory). As soon as the network is running again, the data from the off-line period are entered into the primary SQL server. The MBS was successfully used on several occasions. The documentation of a 90-min breakdown period is presented in detail. Additional work resulted from the copying and the belated manual data entry after restoration of the system. There was no delay in the issue of blood products or result reporting. The backup system described has proven to be simple, quick and safe for maintaining urgent blood supply and distribution of laboratory results in case of unexpected network breakdown.
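The standby arrangement described, a frequently updated second database that becomes a stand-alone, read-only source during outages, amounts to a simple failover rule. A minimal sketch with SQLite standing in for the SQL servers; the file names and liveness probe are illustrative assumptions:

    import sqlite3

    sqlite3.connect("standby.db").close()   # ensure a standby file exists for this demo

    def open_lab_db(primary="file:primary.db?mode=rw",
                    standby="file:standby.db?mode=ro"):
        try:
            con = sqlite3.connect(primary, uri=True, timeout=2)
            con.execute("SELECT 1")          # cheap liveness probe
            return con, "primary (read/write)"
        except sqlite3.Error:
            # Server or network down: fall back to the frequently updated
            # local copy, opened read-only so no divergent writes can occur.
            return sqlite3.connect(standby, uri=True), "standby (read-only)"

    con, role = open_lab_db()
    print("connected to", role)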
Using database reports to reduce workplace violence: Perceptions of hospital stakeholders
Arnetz, Judith E.; Hamblin, Lydia; Ager, Joel; Aranyos, Deanna; Essenmacher, Lynnette; Upfal, Mark J.; Luborsky, Mark
2016-01-01
BACKGROUND Documented incidents of violence provide the foundation for any workplace violence prevention program. However, no published research to date has examined stakeholders’ preferences for workplace violence data reports in healthcare settings. If relevant data are not readily available and effectively summarized and presented, the likelihood is low that they will be utilized by stakeholders in targeted efforts to reduce violence. OBJECTIVE To discover and describe hospital system stakeholders’ perceptions of database-generated workplace violence data reports. PARTICIPANTS Eight hospital system stakeholders representing Human Resources, Security, Occupational Health Services, Quality and Safety, and Labor in a large, metropolitan hospital system. METHODS The hospital system utilizes a central database for reporting adverse workplace events, including incidents of violence. A focus group was conducted to identify stakeholders’ preferences and specifications for standardized, computerized reports of workplace violence data to be generated by the central database. The discussion was audio-taped, transcribed verbatim, processed as text, and analyzed using stepwise content analysis. RESULTS Five distinct themes emerged from participant responses: Concerns, Etiology, Customization, Use, and Outcomes. In general, stakeholders wanted data reports to provide “the big picture,” i.e., rates of occurrence; reasons for and details regarding incident occurrence; consequences for the individual employee and/or the workplace; and organizational efforts that were employed to deal with the incident. CONCLUSIONS Exploring stakeholder views regarding workplace violence summary reports provided concrete information on the preferred content, format, and use of workplace violence data. Participants desired both epidemiological and incident-specific data in order to better understand and work to prevent the workplace violence occurring in their hospital system. PMID:25059315
Using database reports to reduce workplace violence: Perceptions of hospital stakeholders.
Arnetz, Judith E; Hamblin, Lydia; Ager, Joel; Aranyos, Deanna; Essenmacher, Lynnette; Upfal, Mark J; Luborsky, Mark
2015-01-01
Documented incidents of violence provide the foundation for any workplace violence prevention program. However, no published research to date has examined stakeholders' preferences for workplace violence data reports in healthcare settings. If relevant data are not readily available and effectively summarized and presented, the likelihood is low that they will be utilized by stakeholders in targeted efforts to reduce violence. To discover and describe hospital system stakeholders' perceptions of database-generated workplace violence data reports. Eight hospital system stakeholders representing Human Resources, Security, Occupational Health Services, Quality and Safety, and Labor in a large, metropolitan hospital system. The hospital system utilizes a central database for reporting adverse workplace events, including incidents of violence. A focus group was conducted to identify stakeholders' preferences and specifications for standardized, computerized reports of workplace violence data to be generated by the central database. The discussion was audio-taped, transcribed verbatim, processed as text, and analyzed using stepwise content analysis. Five distinct themes emerged from participant responses: Concerns, Etiology, Customization, Use, and Outcomes. In general, stakeholders wanted data reports to provide "the big picture," i.e., rates of occurrence; reasons for and details regarding incident occurrence; consequences for the individual employee and/or the workplace; and organizational efforts that were employed to deal with the incident. Exploring stakeholder views regarding workplace violence summary reports provided concrete information on the preferred content, format, and use of workplace violence data. Participants desired both epidemiological and incident-specific data in order to better understand and work to prevent the workplace violence occurring in their hospital system.
Operational Experience with the Frontier System in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter
2012-06-20
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
Operational Experience with the Frontier System in CMS
NASA Astrophysics Data System (ADS)
Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter; Du, Ran; Wang, Weizhen
2012-12-01
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
Moseley, Anne M; Sherrington, Catherine; Elkins, Mark R; Herbert, Robert D; Maher, Christopher G
2009-09-01
To compare the comprehensiveness of indexing the reports of randomised controlled trials of physiotherapy interventions by eight bibliographic databases (AMED, CENTRAL, CINAHL, EMBASE, Hooked on Evidence, PEDro, PsycINFO and PubMed). Audit of bibliographic databases. Two hundred and eighty-one reports of randomised controlled trials of physiotherapy interventions were identified by screening the reference lists of 30 relevant systematic reviews published in four consecutive issues of the Cochrane Database of Systematic Reviews (Issue 3, 2007 to Issue 2, 2008). AMED, CENTRAL, CINAHL, EMBASE, Hooked on Evidence, PEDro, PsycINFO and PubMed were used to search for the trial reports. The number of trial reports indexed in each database was calculated. PEDro indexed 99% of the trial reports, CENTRAL indexed 98%, PubMed indexed 91%, EMBASE indexed 82%, CINAHL indexed 61%, Hooked on Evidence indexed 40%, AMED indexed 36% and PsycINFO indexed 17%. Most trial reports (92%) were indexed on four or more of the databases. One trial report was indexed on a single database (PEDro). Of the eight bibliographic databases examined, PEDro and CENTRAL provide the most comprehensive indexing of reports of randomised trials of physiotherapy interventions.
Assessment & Commitment Tracking System (ACTS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryant, Robert A.; Childs, Teresa A.; Miller, Michael A.
2004-12-20
The ACTS computer code provides a centralized tool for planning and scheduling assessments, tracking and managing actions associated with assessments or that result from an event or condition, and "mining" data for reporting and analyzing information for improving performance. The ACTS application is designed to work with the MS SQL database management system. All database interfaces are written in SQL. The following software is used to develop and support the ACTS application: Cold Fusion, HTML, JavaScript, Quest TOAD, Microsoft Visual Source Safe (VSS), HTML Mailer for sending email, Microsoft SQL, and Microsoft Internet Information Server.
Granitto, Matthew; DeWitt, Ed H.; Klein, Terry L.
2010-01-01
This database was initiated, designed, and populated to collect and integrate geochemical data from central Colorado in order to facilitate geologic mapping, petrologic studies, mineral resource assessment, definition of geochemical baseline values and statistics, environmental impact assessment, and medical geology. The Microsoft Access database serves as a geochemical data warehouse in support of the Central Colorado Assessment Project (CCAP) and contains data tables describing historical and new quantitative and qualitative geochemical analyses determined by 70 analytical laboratory and field methods for 47,478 rock, sediment, soil, and heavy-mineral concentrate samples. Most samples were collected by U.S. Geological Survey (USGS) personnel and analyzed either in the analytical laboratories of the USGS or by contract with commercial analytical laboratories. These data represent analyses of samples collected as part of various USGS programs and projects. In addition, geochemical data from 7,470 sediment and soil samples collected and analyzed under the Atomic Energy Commission National Uranium Resource Evaluation (NURE) Hydrogeochemical and Stream Sediment Reconnaissance (HSSR) program (henceforth called NURE) have been included in this database. In addition to data from 2,377 samples collected and analyzed under CCAP, this dataset includes archived geochemical data originally entered into the in-house Rock Analysis Storage System (RASS) database (used by the USGS from the mid-1960s through the late 1980s) and the in-house PLUTO database (used by the USGS from the mid-1970s through the mid-1990s). All of these data are maintained in the Oracle-based National Geochemical Database (NGDB). Retrievals from the NGDB and from the NURE database were used to generate most of this dataset. In addition, USGS data that have been excluded previously from the NGDB because the data predate earliest USGS geochemical databases, or were once excluded for programmatic reasons, have been included in the CCAP Geochemical Database and are planned to be added to the NGDB.
Central diabetes insipidus: a previously unreported side effect of temozolomide.
Faje, Alexander T; Nachtigall, Lisa; Wexler, Deborah; Miller, Karen K; Klibanski, Anne; Makimura, Hideo
2013-10-01
Temozolomide (TMZ) is an alkylating agent primarily used to treat tumors of the central nervous system. We describe 2 patients with apparent TMZ-induced central diabetes insipidus. Using our institution's Research Patient Database Registry, we identified 3 additional potential cases of TMZ-induced diabetes insipidus among a group of 1545 patients treated with TMZ. A 53-year-old male with an oligoastrocytoma and a 38-year-old male with an oligodendroglioma each developed symptoms of polydipsia and polyuria approximately 2 months after the initiation of TMZ. Laboratory analyses demonstrated hypernatremia and urinary concentrating defects, consistent with the presence of diabetes insipidus, and the patients were successfully treated with desmopressin acetate. Desmopressin acetate was withdrawn after the discontinuation of TMZ, and diabetes insipidus did not recur. Magnetic resonance imaging of the pituitary and hypothalamus was unremarkable apart from the absence of a posterior pituitary bright spot in both of the cases. Anterior pituitary function tests were normal in both cases. Using the Research Patient Database Registry database, we identified the 2 index cases and 3 additional potential cases of diabetes insipidus for an estimated prevalence of 0.3% (5 cases of diabetes insipidus per 1545 patients prescribed TMZ). Central diabetes insipidus is a rare but reversible side effect of treatment with TMZ.
Central Diabetes Insipidus: A Previously Unreported Side Effect of Temozolomide
Nachtigall, Lisa; Wexler, Deborah; Miller, Karen K.; Klibanski, Anne; Makimura, Hideo
2013-01-01
Context: Temozolomide (TMZ) is an alkylating agent primarily used to treat tumors of the central nervous system. We describe 2 patients with apparent TMZ-induced central diabetes insipidus. Using our institution's Research Patient Database Registry, we identified 3 additional potential cases of TMZ-induced diabetes insipidus among a group of 1545 patients treated with TMZ. Case Presentations: A 53-year-old male with an oligoastrocytoma and a 38-year-old male with an oligodendroglioma each developed symptoms of polydipsia and polyuria approximately 2 months after the initiation of TMZ. Laboratory analyses demonstrated hypernatremia and urinary concentrating defects, consistent with the presence of diabetes insipidus, and the patients were successfully treated with desmopressin acetate. Desmopressin acetate was withdrawn after the discontinuation of TMZ, and diabetes insipidus did not recur. Magnetic resonance imaging of the pituitary and hypothalamus was unremarkable apart from the absence of a posterior pituitary bright spot in both of the cases. Anterior pituitary function tests were normal in both cases. Using the Research Patient Database Registry database, we identified the 2 index cases and 3 additional potential cases of diabetes insipidus for an estimated prevalence of 0.3% (5 cases of diabetes insipidus per 1545 patients prescribed TMZ). Conclusions: Central diabetes insipidus is a rare but reversible side effect of treatment with TMZ. PMID:23928668
The Cronus Distributed DBMS (Database Management System) Project
1989-10-01
…projects, e.g., HiPAC [Dayal 88] and Postgres [Stonebraker 86]. Although we expect to use these techniques, they have been developed for centralized…
Michaleff, Zoe A; Costa, Leonardo O P; Moseley, Anne M; Maher, Christopher G; Elkins, Mark R; Herbert, Robert D; Sherrington, Catherine
2011-02-01
Many bibliographic databases index research studies evaluating the effects of health care interventions. One study has concluded that the Physiotherapy Evidence Database (PEDro) has the most complete indexing of reports of randomized controlled trials of physical therapy interventions, but the design of that study may have exaggerated estimates of the completeness of indexing by PEDro. The purpose of this study was to compare the completeness of indexing of reports of randomized controlled trials of physical therapy interventions by 8 bibliographic databases. This study was an audit of bibliographic databases. Prespecified criteria were used to identify 400 reports of randomized controlled trials from the reference lists of systematic reviews published in 2008 that evaluated physical therapy interventions. Eight databases (AMED, CENTRAL, CINAHL, EMBASE, Hooked on Evidence, PEDro, PsycINFO, and PubMed) were searched for each trial report. The proportion of the 400 trial reports indexed by each database was calculated. The proportions of the 400 trial reports indexed by the databases were as follows: CENTRAL, 95%; PEDro, 92%; PubMed, 89%; EMBASE, 88%; CINAHL, 53%; AMED, 50%; Hooked on Evidence, 45%; and PsycINFO, 6%. Almost all of the trial reports (99%) were found in at least 1 database, and 88% were indexed by 4 or more databases. Four trial reports were uniquely indexed by a single database only (2 in CENTRAL and 1 each in PEDro and PubMed). The results are only applicable to searching for English-language published reports of randomized controlled trials evaluating physical therapy interventions. The 4 most comprehensive databases of trial reports evaluating physical therapy interventions were CENTRAL, PEDro, PubMed, and EMBASE. Clinicians seeking quick answers to clinical questions could search any of these databases knowing that all are reasonably comprehensive. PEDro, unlike the other 3 most complete databases, is specific to physical therapy, so studies not relevant to physical therapy are less likely to be retrieved. Researchers could use CENTRAL, PEDro, PubMed, and EMBASE in combination to conduct exhaustive searches for randomized trials in physical therapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaponov, Yu.A.; Igarashi, N.; Hiraki, M.
2004-05-12
An integrated controlling system and a unified database for high throughput protein crystallography experiments have been developed. Main features of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored (except raw X-ray data, which are stored in a central data server) in a MySQL relational database. The database contains four mutually linked hierarchical trees describing protein crystals, data collection of protein crystals and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object-oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with secure connection) and an indirect method (using a secure SSL connection with secure X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beam line, NW12, at the Photon Factory Advanced Ring for general user experiments.
Schiotis, Ruxandra; Font, Pilar; Zarco, Pedro; Almodovar, Raquel; Gratacós, Jordi; Mulero, Juan; Juanola, Xavier; Montilla, Carlos; Moreno, Estefanía; Ariza Ariza, Rafael; Collantes-Estevez, Eduardo
2011-01-01
Objective. To present the usefulness of a centralized system of data collection for the development of an international multicentre registry of SpA. Method. The originality of this registry lies in the creation of a virtual network of researchers around a computerized Internet database. From its conception, the registry was meant to be a dynamic data-acquisition system. Results. REGISPONSER has two development phases (Conception and Universalization) and gathers several evolving secondary projects (REGISPONSER-EARLY, REGISPONSER-AS, ESPERANZA and RESPONDIA). Each sub-project addressed the need for more specific and complete patient data, even from the onset of the disease, ultimately yielding a well-defined picture of the SpA spectrum in the Spanish population. Conclusion. REGISPONSER is the first dynamic SpA database composed of cohorts with a significant number of patients distributed by specific diagnosis, which provides basic specific information on the sub-cohorts useful for patient evaluation in ambulatory rheumatology consultations. PMID:20823095
Vasculitis Syndromes of the Central and Peripheral Nervous Systems
A distributed database view of network tracking systems
NASA Astrophysics Data System (ADS)
Yosinski, Jason; Paffenroth, Randy
2008-04-01
In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a global track picture. Generating this global track picture at a central location is fairly straightforward, but the single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the development of decentralized methods. In many decentralized tracking systems, trackers communicate with their peers via a lossy, bandwidth-limited network in which dropped, delayed, and out of order packets are typical. Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist; we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network conditions. We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a well known technique for resolving transactions across a lossy network, we describe several ways in which one may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation vs. maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for only track initiation transactions. Finally, we present simulation results contrasting the performance of such a system with that of more traditional decentralized tracking implementations.
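As a concrete reference for the track-initiation case, here is a minimal sketch of two-phase commit among trackers; the classes are toy stand-ins for networked peers, not the authors' implementation:

    class Tracker:
        def __init__(self, name):
            self.name, self.staged = name, None

        def prepare(self, track):      # phase 1: stage the track and vote
            self.staged = track
            return True                # a real peer could vote False (e.g. a duplicate)

        def commit(self):              # phase 2a: make the track visible
            print(self.name, "committed track", self.staged["id"])

        def abort(self):               # phase 2b: discard the staged track
            self.staged = None

    def initiate_track(peers, track):
        votes = [p.prepare(track) for p in peers]
        if all(votes):
            for p in peers:
                p.commit()
            return True
        for p in peers:                # any dissent: nobody creates the track
            p.abort()
        return False

    initiate_track([Tracker("tracker_A"), Tracker("tracker_B")],
                   {"id": 42, "pos": (1.0, 2.0)})

Restricting this relatively expensive protocol to initiation, while cheaper mechanisms handle track maintenance, is the balance of speed and robustness the authors suggest.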
ECLSS evolution: Advanced instrumentation interface requirements. Volume 3: Appendix C
NASA Technical Reports Server (NTRS)
1991-01-01
An Advanced ECLSS (Environmental Control and Life Support System) Technology Interfaces Database was developed primarily to provide ECLSS analysts with a centralized and portable source of ECLSS technology interface requirements data. The database contains 20 technologies which were previously identified in the MDSSC ECLSS Technologies database. The primary interfaces of interest in this database are fluid, electrical, and data/control interfaces, and resupply requirements. Each record contains fields describing the function and operation of the technology. Fields include an interface diagram, a description of applicable design points and operating ranges, and an explanation of data, as required. A complete set of data was entered for six of the twenty components, including Solid Amine Water Desorbed (SAWD), Thermoelectric Integrated Membrane Evaporation System (TIMES), Electrochemical Carbon Dioxide Concentrator (EDC), Solid Polymer Electrolysis (SPE), Static Feed Electrolysis (SFE), and BOSCH. Additional data were collected for Reverse Osmosis Water Reclamation-Potable (ROWRP), Reverse Osmosis Water Reclamation-Hygiene (ROWRH), Static Feed Solid Polymer Electrolyte (SFSPE), Trace Contaminant Control System (TCCS), and Multifiltration Water Reclamation-Hygiene (MFWRH). A summary of the database contents is presented in this report.
Critical care procedure logging using handheld computers
Carlos Martinez-Motta, J; Walker, Robin; Stewart, Thomas E; Granton, John; Abrahamson, Simon; Lapinsky, Stephen E
2004-01-01
Introduction We conducted this study to evaluate the feasibility of implementing an internet-linked handheld computer procedure logging system in a critical care training program. Methods Subspecialty trainees in the Interdepartmental Division of Critical Care at the University of Toronto received and were trained in the use of Palm handheld computers loaded with a customized program for logging critical care procedures. The procedures were entered into the handheld device using checkboxes and drop-down lists, and data were uploaded to a central database via the internet. To evaluate the feasibility of this system, we tracked the utilization of this data collection system. Benefits and disadvantages were assessed through surveys. Results All 11 trainees successfully uploaded data to the central database, but only six (55%) continued to upload data on a regular basis. The most common reason cited for not using the system pertained to initial technical problems with data uploading. From 1 July 2002 to 30 June 2003, a total of 914 procedures were logged. Significant variability was noted in the number of procedures logged by individual trainees (range 13–242). The database generated by regular users provided potentially useful information to the training program director regarding the scope and location of procedural training among the different rotations and hospitals. Conclusion A handheld computer procedure logging system can be effectively used in a critical care training program. However, user acceptance was not uniform, and continued training and support are required to increase user acceptance. Such a procedure database may provide valuable information that may be used to optimize trainees' educational experience and to document clinical training experience for licensing and accreditation. PMID:15469577
78 FR 55689 - Applications for Fiscal Year 2014 Awards; Impact Aid Section 8002 Grant Program
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-11
... System (DUNS) number and a Taxpayer Identification Number (TIN); b. Register both your DUNS number and TIN with the System for Award Management (SAM) (formerly the Central Contractor Registry (CCR)), the Government's primary registrant database; c. Provide your DUNS number and TIN on your application; and d...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-20
... System (DUNS) number and a Taxpayer Identification Number (TIN); b. Register both your DUNS number and TIN with the Central Contractor Registry (CCR)--and, after July 24, 2012, with the System for Award Management (SAM), the Government's primary registrant database; c. Provide your DUNS number and TIN on your...
Information-Sharing Application Standards for Integrated Government Systems
2010-12-01
One of the original purposes of HSIN was to facilitate information sharing… A recent search paradigm, Federated Search, allows separate systems to feed external data requests without the need for a huge centralized database.
A web-based system architecture for ontology-based data integration in the domain of IT benchmarking
NASA Astrophysics Data System (ADS)
Pfaff, Matthias; Krcmar, Helmut
2018-03-01
In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system also provides a natural language interface to easily query linked databases. The expected result of this ontology-based approach of knowledge representation and data access is an increase in knowledge and data sharing in this domain, which will enhance existing business analysis methods.
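A minimal sketch of the ontology-as-central-access-concept idea, using the rdflib library; the tiny itbm vocabulary and the table and column names are invented for illustration, not the actual ITBM ontology:

    from rdflib import Graph

    # Ontology concepts point at the concrete tables/columns holding them,
    # so a query against the ontology can be translated into database access.
    g = Graph()
    g.parse(data="""
        @prefix itbm: <http://example.org/itbm#> .
        itbm:ServerCost itbm:mappedToTable  "benchmark_2018" ;
                        itbm:mappedToColumn "server_cost_eur" .
    """, format="turtle")

    for table, column in g.query("""
        PREFIX itbm: <http://example.org/itbm#>
        SELECT ?table ?column WHERE {
            itbm:ServerCost itbm:mappedToTable ?table ;
                            itbm:mappedToColumn ?column .
        }"""):
        print(f"SELECT {column} FROM {table}")  # hand this to the linked database

Because the mapping lives in data rather than code, integrating an additional database is a matter of adding new mapping triples, which is what the semi-automatic mapping recommender assists with.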
Databases in the Central Government : State-of-the-art and the Future
NASA Astrophysics Data System (ADS)
Ohashi, Tomohiro
Management and Coordination Agency, Prime Minister's Office, conducted a questionnaire survey of all Japanese Ministries and Agencies in November 1985 on the present status of databases produced, or planned to be produced, by the central government. According to the results, 132 databases had been produced across 19 Ministries and Agencies. Many of these databases are held by the Defence Agency, the Ministry of Construction, the Ministry of Agriculture, Forestry & Fisheries, and the Ministry of International Trade & Industry, and fall in the fields of architecture & civil engineering, science & technology, R & D, agriculture, forestry and fishery. However, only 39 percent of the produced databases are available to other Ministries and Agencies; 60 percent are unavailable to them, being in-house databases and so forth. The outline of these survey results is reported, and the databases produced by the central government are introduced under the items of (1) databases commonly used by all Ministries and Agencies, (2) integrated databases, (3) statistical databases and (4) bibliographic databases. Future problems are also described from the viewpoints of technology development and mutual use of databases.
2005-09-01
…e.g. the transformation of a fragment to an instructional fragment. IMAT Database: A Jasmine® database is used as the central database in IMAT for the storage of fragments. This is an object-oriented relational database. Jasmine® was, amongst other factors, chosen for its ability to handle multimedia… Ontologies: In IMAT, the proposed solution to problems with information…
NASA Astrophysics Data System (ADS)
Verdoodt, Ann; Baert, Geert; Van Ranst, Eric
2014-05-01
Central African soil resources are characterised by a large variability, ranging from stony, shallow or sandy soils with poor life-sustaining capabilities to highly weathered soils that recycle and support large amounts of biomass. Socio-economic drivers within this largely rural region foster inappropriate land use and management, threaten soil quality and finally culminate in declining soil productivity and increasing food insecurity. For the development of sustainable land use strategies targeting development planning and natural hazard mitigation, decision makers often rely on legacy soil maps and soil profile databases. Recent development-cooperation-financed projects led to the design of soil information systems for Rwanda, D.R. Congo, and (ongoing) Burundi. A major challenge is to exploit these existing soil databases and convert them into soil inference systems through an optimal combination of digital soil mapping techniques, land evaluation tools, and biogeochemical models. This presentation aims at (1) highlighting some key characteristics of typical Central African soils, (2) assessing the positional, geographic and semantic quality of the soil information systems, and (3) revealing the potential impact of this quality on the use of these datasets for thematic mapping of soil ecosystem services (e.g. organic carbon storage, pH buffering capacity). Soil map quality is assessed considering positional and semantic quality, as well as geographic completeness. Descriptive statistics, decision tree classification and linear regression techniques are used to mine the soil profile databases. Geo-matching as well as class-matching approaches are considered when developing thematic maps. Variability in inherent as well as dynamic soil properties within the soil taxonomic units is highlighted. It is hypothesized that within-unit variation in soil properties strongly affects the use and interpretation of thematic maps for ecosystem services mapping. Results will mainly be based on analyses done in Rwanda, complemented with ongoing research results or prospects for Burundi.
MouseNet database: digital management of a large-scale mutagenesis project.
Pargent, W; Heffner, S; Schäble, K F; Soewarto, D; Fuchs, H; Hrabé de Angelis, M
2000-07-01
The Munich ENU Mouse Mutagenesis Screen is a large-scale mutant production, phenotyping, and mapping project. It encompasses two animal breeding facilities and a number of screening groups located in the general area of Munich. A central database is required to manage and process the immense amount of data generated by the mutagenesis project. This database, which we named MouseNet©, runs on a Sybase platform and will ultimately store and process all data from the entire project. In addition, the system comprises a portfolio of functions needed to support the workflow management of the core facility and the screening groups. MouseNet© will make all of the data available to the participating screening groups, and later to the international scientific community. MouseNet© will consist of three major software components: the Animal Management System (AMS), the Sample Tracking System (STS), and the Result Documentation System (RDS). MouseNet© provides the following major advantages: it is accessible from different client platforms via the Internet; it is a full-featured multi-user system (including access restriction and data-locking mechanisms); it relies on a professional RDBMS (relational database management system) running on a UNIX server platform; and it supplies workflow functions and a variety of plausibility checks.
Solar Sail Propulsion Technology Readiness Level Database
NASA Technical Reports Server (NTRS)
Adams, Charles L.
2004-01-01
The NASA In-Space Propulsion Technology (ISPT) Projects Office has been sponsoring two solar sail system design and development hardware demonstration activities over the past 20 months. Able Engineering Company (AEC) of Goleta, CA is leading one team and L'Garde, Inc. of Tustin, CA is leading the other team. Component, subsystem and system fabrication and testing have been completed successfully. The goal of these activities is to advance the technology readiness level (TRL) of solar sail propulsion from 3 towards 6 by 2006. These activities will culminate in the deployment and testing of 20-meter solar sail system ground demonstration hardware in the 30-meter-diameter thermal-vacuum chamber at NASA Glenn's Plum Brook Station in 2005. This paper will describe the features of a computer database system that documents the results of the solar sail development activities to date. Illustrations of the hardware components and systems, test results, analytical models, relevant space environment definition and current TRL assessment, as stored and manipulated within the database, are presented. This database could serve as a central repository for all data related to the advancement of solar sail technology sponsored by the ISPT, providing an up-to-date assessment of the TRL of this technology. Current plans are to eventually make the database available to the solar sail community through the Space Transportation Information Network (STIN).
Morrison, James; Kaufman, John
2016-12-01
Vascular access is invaluable in the treatment of hospitalized patients. Central venous catheters provide a durable and long-term solution while saving patients from repeated needle sticks for peripheral IVs and blood draws. The initial catheter placement procedure and long-term catheter usage place patients at risk for infection. The goal of this project was to develop a system to track and evaluate central line-associated blood stream infections related to interventional radiology placement of central venous catheters. A customized web-based clinical database was developed via open-source tools to provide a dashboard for data mining and analysis of the catheter placement and infection information. Preliminary results were gathered over a 4-month period confirming the utility of the system. The tools and methodology employed to develop the vascular access tracking system could be easily tailored to other clinical scenarios to assist in quality control and improvement programs.
Implementation of an Online Database for Chemical Propulsion Systems
NASA Technical Reports Server (NTRS)
David B. Owen, II; McRight, Patrick S.; Cardiff, Eric H.
2009-01-01
The Johns Hopkins University, Chemical Propulsion Information Analysis Center (CPIAC) has been working closely with NASA Goddard Space Flight Center (GSFC); NASA Marshall Space Flight Center (MSFC); the University of Alabama at Huntsville (UAH); The Johns Hopkins University, Applied Physics Laboratory (APL); and NASA Jet Propulsion Laboratory (JPL) to capture satellite and spacecraft propulsion system information for an online database tool. The Spacecraft Chemical Propulsion Database (SCPD) is a new online central repository containing general and detailed system and component information on a variety of spacecraft propulsion systems. This paper only uses data that have been approved for public release with unlimited distribution. The data, supporting documentation, and ability to produce reports on demand, enable a researcher using SCPD to compare spacecraft easily, generate information for trade studies and mass estimates, and learn from the experiences of others through what has already been done. This paper outlines the layout and advantages of SCPD, including a simple example application with a few chemical propulsion systems from various NASA spacecraft.
48 CFR 52.232-33 - Payment by Electronic Funds Transfer-Central Contractor Registration.
Code of Federal Regulations, 2010 CFR
2010-10-01
... contained in the Central Contractor Registration (CCR) database. In the event that the EFT information changes, the Contractor shall be responsible for providing the updated information to the CCR database. (c... 210. (d) Suspension of payment. If the Contractor's EFT information in the CCR database is incorrect...
An image database management system for conducting CAD research
NASA Astrophysics Data System (ADS)
Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.
2007-03-01
The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods by which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.
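A minimal sketch of the subset-generation step, assuming metadata lives in a single images table and search criteria arrive as column/value pairs; all names are hypothetical:

    # Turn research criteria into a parameterized query against the
    # metadata store; parameter binding keeps user input out of the SQL.
    def build_subset_query(criteria):
        clauses, params = [], []
        for column, value in criteria.items():
            clauses.append(f"{column} = ?")
            params.append(value)
        sql = "SELECT case_id FROM images WHERE " + " AND ".join(clauses)
        return sql, params

    sql, params = build_subset_query({"modality": "ultrasound", "biopsy_proven": 1})
    print(sql, params)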
Computerizing Maintenance Management Improves School Processes.
ERIC Educational Resources Information Center
Conroy, Pat
2002-01-01
Describes how a Computerized Maintenance Management System (CMMS), a centralized maintenance operations database that facilitates work order procedures and staff directives, can help individual school campuses and school districts to manage maintenance. Presents the benefits of CMMS and things to consider in CMMS selection. (EV)
Electronic Approval: Another Step toward a Paperless Office.
ERIC Educational Resources Information Center
Blythe, Kenneth C.; Morrison, Dennis L.
1992-01-01
Pennsylvania State University's award-winning electronic approval system allows administrative documents to be electronically generated, approved, and updated in the university's central database. Campus business can thus be conducted faster, less expensively, more accurately, and with greater security than with traditional paper approval…
RFID technologies for imported foods inspection
USDA-ARS?s Scientific Manuscript database
Food-borne illness typically occurs due to contamination of food products with Escherichia coli, Salmonella spp., Listeria monocytogenes and other pathogens. Unfortunately, it takes several weeks to identify the source of such contamination, possibly due to lack of a central database system that is ...
NASA Astrophysics Data System (ADS)
Chapman, James B.; Kapp, Paul
2017-11-01
A database containing previously published geochronologic, geochemical, and isotopic data on Mesozoic to Quaternary igneous rocks in the Himalayan-Tibetan orogenic system is presented. The database is intended to serve as a repository for new and existing igneous rock data and is publicly accessible through a web-based platform that includes an interactive map and data table interface with search, filtering, and download options. To illustrate the utility of the database, the age, location, and εHf(t) composition of magmatism from the central Gangdese batholith in the southern Lhasa terrane are compared. The data identify three high-flux events, which peak at 93, 50, and 15 Ma. They are characterized by inboard arc migration and a temporal and spatial shift to more evolved isotopic compositions.
Maclean, Donald; Younes, Hakim Ben; Forrest, Margaret; Towers, Hazel K
2012-03-01
Accurate and timely clinical data are required for clinical and organisational purposes and are especially important for patient management, audit of surgical performance and the electronic health record. The recent introduction of computerised theatre management systems has enabled real-time (point-of-care) operative procedure coding by clinical staff. However, the accuracy of these data is unknown. The aim of this Scottish study was to compare the accuracy of theatre nurses' real-time coding on the local theatre management system with the central Scottish Morbidity Record (SMR01). Paired procedural codes were recorded, qualitatively graded for precision and compared (n = 1038). In this study, real-time, point-of-care coding by theatre nurses resulted in significant coding errors compared with the central SMR01 database. Improved collaboration between full-time coders and clinical staff using computerised decision support systems is suggested.
Efficient Privacy-Enhancing Techniques for Medical Databases
NASA Astrophysics Data System (ADS)
Schartner, Peter; Schaffer, Martin
In this paper, we introduce an alternative to using linkable unique health identifiers: locally generated, system-wide unique digital pseudonyms. The presented techniques are based on a novel technique called collision-free number generation, which is discussed in the introductory part of the article. Afterwards, attention is paid to two specific variants of collision-free number generation: one based on the RSA problem and the other based on the Elliptic Curve Discrete Logarithm Problem. Finally, two applications are sketched: centralized medical records and anonymous medical databases.
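The collision-freeness of the RSA-based variant rests on modular exponentiation with a public RSA key being a bijection on Z_n: distinct inputs can never map to the same pseudonym. A minimal sketch, where each site forms a globally unique input from its own identifier plus a local counter; the tiny key is a toy for illustration only and offers no real security:

    # Toy RSA modulus n = 61 * 53 and public exponent e with
    # gcd(e, lcm(60, 52)) = 1, so x -> x^e mod n permutes 0..n-1
    # and cannot produce collisions.
    N, E = 3233, 17

    def pseudonym(site_id: int, counter: int) -> int:
        m = site_id * 1000 + counter   # globally unique while counter < 1000
        assert m < N
        return pow(m, E, N)

    ids = {pseudonym(2, c) for c in range(1000)}
    print(len(ids))                    # 1000 distinct pseudonyms, no collisions

Because each site can form unique inputs on its own, pseudonyms are generated locally yet remain unique system-wide, with no central issuing authority to query.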
Automatic Mexican sign language and digits recognition using normalized central moments
NASA Astrophysics Data System (ADS)
Solís, Francisco; Martínez, David; Espinosa, Oscar; Toxqui, Carina
2016-09-01
This work presents a framework for automatic Mexican sign language and digits recognition based on a computer vision system using normalized central moments and artificial neural networks. Images are captured by a digital IP camera, four LED reflectors and a green background in order to reduce computational costs and prevent the use of special gloves. 42 normalized central moments are computed per frame and fed to a Multi-Layer Perceptron to recognize each database. Four versions per sign and digit were used in the training phase. Recognition rates of 93% and 95% were achieved for Mexican sign language and digits, respectively.
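For reference, the normalized central moments η_pq are the translation- and scale-invariant statistics η_pq = μ_pq / μ_00^(1+(p+q)/2), where μ_pq are the central moments of the image. A minimal NumPy sketch, computing the 7 moments up to order 3 rather than the paper's full set of 42:

    import numpy as np

    def normalized_central_moments(img, max_order=3):
        """Normalized central moments eta_pq of a 2-D grayscale image."""
        img = np.asarray(img, dtype=float)
        y, x = np.mgrid[:img.shape[0], :img.shape[1]]
        m00 = img.sum()
        xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # centroid
        etas = {}
        for p in range(max_order + 1):
            for q in range(max_order + 1 - p):
                if p + q < 2:
                    continue            # orders 0 and 1 carry no shape information
                mu = ((x - xc) ** p * (y - yc) ** q * img).sum()  # central moment
                etas[(p, q)] = mu / m00 ** (1 + (p + q) / 2)      # scale-normalized
        return etas

    print(len(normalized_central_moments(np.random.rand(64, 64))))  # 7 moments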
The Microcomputer in the Administrative Office.
ERIC Educational Resources Information Center
Huntington, Fred
1983-01-01
Discusses microcomputer uses for administrative computing in education at site level and central office and recommends that administrators start with a word processing program for time management, an electronic spreadsheet for financial accounting, a database management system for inventories, and self-written programs to alleviate paper…
Shift-invariant discrete wavelet transform analysis for retinal image classification.
Khademi, April; Krishnan, Sridhar
2007-12-01
This work presents a novel analysis system for retinal image classification. Working from the compressed domain, the proposed scheme extracts textural features from wavelet coefficients, which describe the relative homogeneity of localized areas of the retinal images. Since the discrete wavelet transform (DWT) is shift-variant, a shift-invariant DWT was explored to ensure that a robust feature set was extracted. To combat the small database size, linear discriminant analysis classification was used with the leave-one-out method. 38 normal and 48 abnormal images (exudates, large drusen, fine drusen, choroidal neovascularization, central vein and artery occlusion, histoplasmosis, arteriosclerotic retinopathy, hemi-central retinal vein occlusion and more) were used, and a specificity of 79% and sensitivity of 85.4% were achieved (the average classification rate is 82.2%). The success of the system can be attributed to the highly robust feature set, which included translation-, scale- and semi-rotation-invariant features. Additionally, the technique is database independent since the features were specifically tuned to the pathologies of the human eye.
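A pipeline of this shape can be sketched with standard packages. The following is a hedged approximation, assuming PyWavelets' stationary wavelet transform as the shift-invariant DWT and simple subband energies as the texture features (the paper's exact features may differ); image dimensions must be a multiple of 2^level for swt2.

    # Sketch: shift-invariant wavelet features + leave-one-out LDA.
    import numpy as np
    import pywt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def swt_energy_features(img, level=2):
        # stationary (undecimated) 2-D wavelet transform: shift-invariant
        coeffs = pywt.swt2(img, wavelet="db4", level=level)
        feats = []
        for _approx, (h, v, d) in coeffs:
            for band in (h, v, d):
                feats.append(np.mean(band ** 2))  # subband energy ~ homogeneity
        return np.asarray(feats)

    # X = np.stack([swt_energy_features(im) for im in images])  # images assumed given
    # score = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut()).mean()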
A comparison of different database technologies for the CMS AsyncStageOut transfer database
NASA Astrophysics Data System (ADS)
Ciangottini, D.; Balcas, J.; Mascheroni, M.; Rupeika, E. A.; Vaandering, E.; Riahi, H.; Silva, J. M. D.; Hernandez, J. M.; Belforte, S.; Ivanov, T. T.
2017-10-01
AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB) that manages users' transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user's output data to a single remote site was part of the job execution, resulting in inefficient use of job slots and an unacceptable failure rate. Currently ASO manages up to 600k files of various sizes per day from more than 500 users per month, spread over more than 100 sites. ASO uses a NoSQL database (CouchDB) for internal bookkeeping and as a way to communicate with other CRAB components. Since ASO/CRAB were put in production in 2014, the number of transfers has constantly increased, up to a point where the pressure on the central CouchDB instance became critical, creating new challenges for system scalability, performance, and monitoring. This forced a re-engineering of the ASO application to increase its scalability and lower its operational effort. In this contribution we present a comparison of the performance of the current NoSQL implementation and a new SQL implementation, and discuss how their different strengths and features influenced the design choices and operational experience. We also discuss other architectural changes introduced in the system to handle the increasing load and latency in delivering output to the user.
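The NoSQL-versus-SQL trade-off described here can be made concrete with a toy example: the same transfer record kept as a CouchDB-style document versus as a row in a relational table, where bulk state transitions become single indexed statements. The schema and field names below are invented for illustration; they are not the actual ASO data model.

    import json
    import sqlite3

    # Document style (CouchDB-like): one JSON document per user file transfer.
    transfer_doc = json.dumps({"user": "jdoe", "source": "T2_IT_Rome",
                               "dest": "T2_US_MIT", "state": "submitted"})

    # Relational style: the same state as a row in an indexed table.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE transfers (id INTEGER PRIMARY KEY, user TEXT, "
               "source TEXT, dest TEXT, state TEXT)")
    db.execute("INSERT INTO transfers (user, source, dest, state) "
               "VALUES ('jdoe', 'T2_IT_Rome', 'T2_US_MIT', 'submitted')")
    # Bulk state transitions are one statement rather than per-document updates:
    db.execute("UPDATE transfers SET state = 'acquired' WHERE state = 'submitted'")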
ASEAN Mineral Database and Information System (AMDIS)
NASA Astrophysics Data System (ADS)
Okubo, Y.; Ohno, T.; Bandibas, J. C.; Wakita, K.; Oki, Y.; Takahashi, Y.
2014-12-01
AMDIS was launched officially at the Fourth ASEAN Ministerial Meeting on Minerals on 28 November 2013. In cooperation with the Geological Survey of Japan, the web-based GIS was developed using Free and Open Source Software (FOSS) and the Open Geospatial Consortium (OGC) standards. The system is composed of local databases and a centralized GIS. The local databases, created and updated using the centralized GIS, are accessible from the portal site. The system introduces distinct advantages over traditional GIS: global reach, a large number of users, better cross-platform capability, no charge for users or providers, ease of use, and unified updates. By raising the transparency of mineral information for mining companies and the public, AMDIS shows that mineral resources are abundant throughout the ASEAN region; however, there are many data gaps. We understand that such problems occur because of insufficient governance of mineral resources. The mineral governance we refer to is a concept that strengthens and maximizes the capacity and systems of the government institutions that manage the minerals sector. The elements of mineral governance include a) strengthening of the information infrastructure, b) technological and legal capacities of state-owned mining companies to fully engage with mining sponsors, c) government-led management of mining projects by supporting the project implementation units, d) government capacity in mineral management, such as the control and monitoring of mining operations, and e) facilitation of regional and local development plans and their implementation with the private sector.
Remote online monitoring and measuring system for civil engineering structures
NASA Astrophysics Data System (ADS)
Kujawińska, Malgorzata; Sitnik, Robert; Dymny, Grzegorz; Karaszewski, Maciej; Michoński, Kuba; Krzesłowski, Jakub; Mularczyk, Krzysztof; Bolewicki, Paweł
2009-06-01
In this paper a distributed intelligent system for on-line measurement, remote monitoring, and data archiving of civil engineering structures is presented. The system consists of a set of optical, full-field displacement sensors connected to a controlling server. The server conducts measurements according to a list of scheduled tasks and stores the primary data or initial results in a remote centralized database. Simultaneously the server performs checks, ordered by the operator, which may in turn result in an alert or a specific action. The structure of the whole system is analyzed, along with a discussion of possible fields of application and ways to provide adequate security during data transport. Finally, a working implementation consisting of fringe projection, geometrical moiré, digital image correlation and grating interferometry sensors and an Oracle XE database is presented. Results from the database, used for on-line monitoring of a threshold strain value for an exemplary area of interest on an engineering structure, are presented and discussed.
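The check-and-alert step the server performs can be pictured as a small scheduled task against the central database. The sketch below is illustrative only: the table, column names and threshold are invented, and the real system used an Oracle XE database rather than the SQLite stand-in here.

    import sqlite3

    STRAIN_LIMIT = 1.5e-3   # illustrative threshold, not from the paper

    def raise_alert(roi, value):
        # a real deployment would notify the operator (e-mail, SMS, ...)
        print(f"ALERT: strain {value:.2e} exceeds limit in region {roi}")

    def check_strain(db, roi):
        # hypothetical schema: measurements(roi TEXT, ts TEXT, strain REAL)
        row = db.execute(
            "SELECT MAX(strain) FROM measurements "
            "WHERE roi = ? AND ts >= datetime('now', '-1 hour')", (roi,)).fetchone()
        if row[0] is not None and row[0] > STRAIN_LIMIT:
            raise_alert(roi, row[0])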
A strong-motion database from the Central American subduction zone
NASA Astrophysics Data System (ADS)
Arango, Maria Cristina; Strasser, Fleur O.; Bommer, Julian J.; Hernández, Douglas A.; Cepeda, Jose M.
2011-04-01
Subduction earthquakes along the Pacific Coast of Central America generate considerable seismic risk in the region. The quantification of the hazard due to these events requires the development of appropriate ground-motion prediction equations, for which purpose a database of recordings from subduction events in the region is indispensable. This paper describes the compilation of a comprehensive database of strong ground-motion recordings obtained during subduction-zone events in Central America, focusing on the region from 8 to 14° N and 83 to 92° W, including Guatemala, El Salvador, Nicaragua and Costa Rica. More than 400 accelerograms recorded by the networks operating across Central America during the last decades have been added to data collected by NORSAR in two regional projects for the reduction of natural disasters. The final database consists of 554 triaxial ground-motion recordings from events of moment magnitudes between 5.0 and 7.7, including 22 interface and 58 intraslab-type events for the time period 1976-2006. Although the database presented in this study is not sufficiently complete in terms of magnitude-distance distribution to serve as a basis for the derivation of predictive equations for interface and intraslab events in Central America, it considerably expands the Central American subduction data compiled in previous studies and used in early ground-motion modelling studies for subduction events in this region. Additionally, the compiled database will allow the assessment of the existing predictive models for subduction-type events in terms of their applicability for the Central American region, which is essential for an adequate estimation of the hazard due to subduction earthquakes in this region.
Magnetic resonance imaging characteristics in four dogs with central nervous system neosporosis.
Parzefall, Birgit; Driver, Colin J; Benigni, Livia; Davies, Emma
2014-01-01
Neosporosis is a polysystemic disease that can affect dogs of any age and can cause inflammation of the central nervous system. Antemortem diagnosis can be challenging, as clinical and conventional laboratory test findings are often nonspecific. A previous report described cerebellar lesions in brain MRI studies of seven dogs and proposed that these may be characteristic of central nervous system neosporosis. The purpose of this retrospective study was to describe MRI characteristics in another group of dogs with confirmed central nervous system neosporosis and compare them with the previous report. The hospital's database was searched for dogs with confirmed central nervous system neosporosis, and four observers recorded findings from each dog's MRI studies. A total of four dogs met the inclusion criteria. Neurologic examination was indicative of a forebrain and cerebellar lesion in dog 2 and multifocal central nervous system disease in dogs 1, 3, and 4. Magnetic resonance imaging showed mild bilateral and symmetrical cerebellar atrophy in three of four dogs (dogs 2, 3, 4), intramedullary spinal cord changes in two dogs (dogs 3, 4) and a mesencephalic and metencephalic lesion in one dog (dog 2). Multifocal brain lesions were recognized in two dogs (dogs 1, 4) and were present in the thalamus, lentiform nucleus, centrum semiovale, internal capsule, brainstem and cortical gray matter of the frontal, parietal or temporal lobe. Findings indicated that central nervous system neosporosis may be characterized by multifocal MRI lesions as well as cerebellar involvement in dogs. © 2014 American College of Veterinary Radiology.
ERIC Educational Resources Information Center
Borgman, Christine L.
1996-01-01
Reports on a survey of 70 research libraries in Croatia, Czech Republic, Hungary, Poland, Slovakia, and Slovenia. Results show that libraries are rapidly acquiring automated processing systems, CD-ROM databases, and connections to computer networks. Discusses specific data on system implementation and network services by country and by type of…
A Comparison of Associate in Arts Transfer Rates between 1994-95 and 1998-99.
ERIC Educational Resources Information Center
Windham, Patricia
This study of the Florida Community College System compares Associate in Arts (AA) transfers over a five-year period, from 1994-95 to 1998-99. The study tracked transfers with Florida's centralized student database system, which uses social security numbers as student identifiers. It included only students who completed the AA degree, and…
The geo-control system for station keeping and colocation of geostationary satellites
NASA Technical Reports Server (NTRS)
Montenbruck, O.; Eckstein, M. C.; Gonner, J.
1993-01-01
GeoControl is a compact but powerful and accurate software system for station keeping of single and colocated satellites, developed at the German Space Operations Center. It includes four core modules for orbit determination (including maneuver estimation), maneuver planning, monitoring of proximities between colocated satellites, and interference and event prediction. A simple database containing state-vector and maneuver information at selected epochs is maintained as the central interface between the modules. A menu-driven shell utilizing form screens for data input serves as the central user interface. The software is written in Ada and FORTRAN and may be used on VAX workstations or mainframes under the VMS operating system.
NASA Astrophysics Data System (ADS)
Lanckman, Jean-Pierre; Elger, Kirsten; Karlsson, Ævar Karl; Johannsson, Halldór; Lantuit, Hugues
2013-04-01
Permafrost is a direct indicator of climate change and has been identified as an Essential Climate Variable (ECV) by the global observing community. The monitoring of permafrost temperatures, active-layer thicknesses and other parameters has been performed for several decades already, but it was brought together within the Global Terrestrial Network for Permafrost (GTN-P) only in the 1990s, including the development of measurement protocols to provide standardized data. GTN-P is the primary international observing network for permafrost, sponsored by the Global Climate Observing System (GCOS) and the Global Terrestrial Observing System (GTOS) and managed by the International Permafrost Association (IPA). All GTN-P data carry an "open data policy" with free data access via the World Wide Web. The existing data, however, are far from homogeneous: they are not yet optimized for databases, there is no framework for data reporting or archival, and data documentation is incomplete. As a result, and despite the utmost relevance of permafrost to the Earth's climate system, the data have not been used by as many researchers as intended by the initiators of the programs. While the monitoring of many other ECVs has been tackled by organized international networks (e.g. FLUXNET), there is still no central database for all permafrost-related parameters. The European Union project PAGE21 created the opportunity to develop this central database for the permafrost monitoring parameters of GTN-P during the duration of the project and beyond. The database aims to be the one location where a researcher can find data, metadata, and information on all relevant parameters for a specific site. Each component of the Data Management System (DMS), including parameters, data levels and metadata formats, was developed in cooperation with the GTN-P and the IPA. The general framework of the GTN-P DMS is based on an object-oriented model (OOM), open for as many parameters as possible, and implemented in a spatial database. To ensure interoperability and enable potential inter-database search, field names follow international metadata standards and are based on a controlled vocabulary registry. Tools are being developed to provide data processing, analysis capability, and quality control. Our system aims to be a reference model, improvable and reusable. It allows a maximum top-down and bottom-up data flow, giving scientists one global searchable data and metadata repository, the public full access to scientific data, and the policy maker a powerful cartographic and statistical tool. To engage the international community in GTN-P, it was essential to develop an online interface for data upload, one that is easy to use and allows data input with minimal technical and personnel effort. In addition, substantial effort will be required to query, visualize and retrieve information across many platforms and types of measurements. Ultimately, it is not the layers in themselves that matter, but rather the relationships that these information layers maintain with each other.
Development of Electronic Resources across Networks in Thailand.
ERIC Educational Resources Information Center
Ratchatavorn, Phandao
2002-01-01
Discusses the development of electronic resources across library networks in Thailand to meet user needs, particularly electronic journals. Topics include concerns about journal access; limited budgets for library acquisitions of journals; and sharing resources through a centralized database system that allows Web access to journals via Internet…
Fish Karyome: A karyological information network database of Indian Fishes.
Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra
2012-01-01
'Fish Karyome', a database of karyological information on Indian fishes, has been developed to serve as a central source for karyotype data about Indian fishes compiled from the published literature. Fish Karyome is intended to serve as a liaison tool for researchers; it contains karyological information about 171 of the 2,438 finfish species reported in India and is publicly available via the World Wide Web. The database provides information on chromosome number, morphology, sex chromosomes, karyotype formula and cytogenetic markers, etc. Additionally, it provides phenotypic information that includes species name, classification, locality of sample collection, common name, local name, sex, geographical distribution, and IUCN Red List status. Fish and karyotype images and references for the 171 finfish species have also been included in the database. Fish Karyome was developed using SQL Server 2008, a relational database management system, Microsoft's ASP.NET 2008 and Macromedia's Flash technology under the Windows 7 operating environment. The system also enables users to input new information and images into the database, and to search and view the information and images of interest using various search options. Fish Karyome has a wide range of applications in species characterization and identification, sex determination, chromosomal mapping, karyo-evolution and the systematics of fishes.
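The kind of multi-criteria search the interface offers boils down to simple relational queries. The sketch below shows what such a query might look like; the schema, column names and the SQLite stand-in (the production system uses SQL Server 2008) are all invented for illustration.

    import sqlite3

    db = sqlite3.connect("fish_karyome.db")   # hypothetical local copy
    rows = db.execute(
        "SELECT s.species_name, k.chromosome_number, k.karyotype_formula, s.iucn_status "
        "FROM karyotypes k JOIN species s ON s.species_id = k.species_id "
        "WHERE k.chromosome_number = ? AND s.iucn_status = ?",
        (50, "Endangered")).fetchall()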
Importance of Data Management in a Long-Term Biological Monitoring Program
NASA Astrophysics Data System (ADS)
Christensen, Sigurd W.; Brandt, Craig C.; McCracken, Mary K.
2011-06-01
The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to meeting this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program when an existing relational database was adapted and extended to handle biological data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was the establishment of standards for sampling site names, taxonomic identification, flagging, and other components. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. We also discuss some limitations of our implementation. Some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.
Aagaard, Thomas; Lund, Hans; Juhl, Carsten
2016-11-22
When conducting systematic reviews, it is essential to perform a comprehensive literature search to identify all published studies relevant to the specific research question. The Cochrane Collaboration's Methodological Expectations of Cochrane Intervention Reviews (MECIR) guidelines state that searching MEDLINE, EMBASE and CENTRAL should be considered mandatory. The aim of this study was to evaluate the MECIR recommendation to use MEDLINE, EMBASE and CENTRAL combined, and to examine the yield of using these to find randomized controlled trials (RCTs) within the area of musculoskeletal disorders. Data sources were systematic reviews published by the Cochrane Musculoskeletal Review Group that included at least five RCTs, reported a search history, searched MEDLINE, EMBASE and CENTRAL, and added reference- and hand-searching. Additional databases were deemed eligible if they indexed RCTs, were in English and were used in more than three of the systematic reviews. Relative recall was calculated as the number of studies identified by the literature search divided by the number of eligible studies, i.e. the studies included in the individual systematic reviews. Finally, cumulative median recall was calculated for MEDLINE, EMBASE and CENTRAL combined, followed by the databases yielding additional studies. Twenty-three systematic reviews were deemed eligible, and the databases included other than MEDLINE, EMBASE and CENTRAL were AMED, CINAHL, HealthSTAR, MANTIS, OT-Seeker, PEDro, PsycINFO, SCOPUS, SportDISCUS and Web of Science. Cumulative median recall for combined searching in MEDLINE, EMBASE and CENTRAL was 88.9%, increasing to 90.9% when the 10 additional databases were added. Searching MEDLINE, EMBASE and CENTRAL was not sufficient for identifying all effect studies on musculoskeletal disorders, but the ten additional databases increased the median recall by only 2 percentage points. It is possible that searching databases alone is not sufficient to identify all relevant references, and that reviewers must rely upon additional sources in their literature search. However, further research is needed.
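The relative-recall arithmetic used in the study is straightforward and can be reproduced in a few lines; the review counts below are invented, and only the formula follows the abstract.

    from statistics import median

    def relative_recall(found, eligible):
        # studies retrieved by the search / studies included in the review
        return found / eligible

    # one (hits in MEDLINE+EMBASE+CENTRAL, included RCTs) pair per review
    reviews = [(8, 9), (12, 14), (5, 5)]
    recalls = [relative_recall(f, n) for f, n in reviews]
    print(f"median relative recall: {median(recalls):.1%}")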
The Data Acquisition System of the Stockholm Educational Air Shower Array
NASA Astrophysics Data System (ADS)
Hofverberg, P.; Johansson, H.; Pearce, M.; Rydstrom, S.; Wikstrom, C.
2005-12-01
The Stockholm Educational Air Shower Array (SEASA) project is deploying an array of plastic scintillator detector stations on school roofs in the Stockholm area. Signals from GPS satellites are used to time-synchronise signals from the widely separated detector stations, allowing cosmic ray air showers to be identified and studied. A low-cost and highly scalable data acquisition system has been produced using embedded Linux processors which communicate station data to a central server running a MySQL database. Air shower data can be visualised in real-time using a Java-applet client. It is also possible to query the database and manage detector stations from the client. In this paper, the design and performance of the system are described.
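The station-to-server path described above amounts to each embedded Linux box inserting GPS-timestamped triggers into the central MySQL database. A hedged sketch follows; the host, credentials, table and column names are invented, and the MySQL Connector/Python package stands in for whatever client the stations actually used.

    import mysql.connector  # assumes the mysql-connector-python package

    db = mysql.connector.connect(host="seasa-central", user="station",
                                 password="...", database="seasa")
    cur = db.cursor()
    cur.execute("INSERT INTO triggers (station_id, gps_time_ns, pulse_height) "
                "VALUES (%s, %s, %s)", (17, 1133740800123456789, 842))
    db.commit()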
DOE Office of Scientific and Technical Information (OSTI.GOV)
The system is developed to collect, process, store and present the information provided by radio frequency identification (RFID) devices. The system contains three parts: the application software, the database and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through an application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals into readable information. It is capable of encrypting data using the 256-bit Advanced Encryption Standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one for storage and one for transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely. There are multiple local computers managing different sites or transport vehicles. Control from remote sites and transmission of information to the central database server are through the secured internet. The information stored in the central database server is shown on the web page. Users can view the web page on the internet. A dedicated and secured web and database server (https) is used to provide information security.
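The 256-bit AES step can be sketched with a standard library; the example below uses PyCryptodome and an authenticated (GCM) mode, which is one reasonable choice but not necessarily the mode the described system used. The payload fields and key handling are invented for illustration.

    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes

    key = get_random_bytes(32)            # 256-bit key; real systems use managed keys
    cipher = AES.new(key, AES.MODE_GCM)   # authenticated encryption
    report = b'{"tag": "A1B2", "portal": 3, "event": "exit"}'
    ciphertext, tag = cipher.encrypt_and_digest(report)
    payload = cipher.nonce + tag + ciphertext   # what the software would upload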
Gaussian mixture models as flux prediction method for central receivers
NASA Astrophysics Data System (ADS)
Grobler, Annemarie; Gauché, Paul; Smit, Willie
2016-05-01
Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by fitting Gaussian mixture models to flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
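One way to fit such a mixture to a measured or ray-traced flux map is to resample pixel coordinates in proportion to local flux and fit a standard GMM to the samples. The sketch below does this with scikit-learn; it illustrates the general technique, not the paper's algorithm, and the sample count and component number are arbitrary.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_flux_gmm(flux_map, n_components=3):
        ys, xs = np.nonzero(flux_map > 0)
        w = flux_map[ys, xs].astype(float)
        # draw pixel coordinates with probability proportional to local flux,
        # so the GMM models the flux density rather than the pixel grid
        idx = np.random.choice(len(xs), size=20000, p=w / w.sum())
        pts = np.column_stack([xs[idx], ys[idx]]).astype(float)
        return GaussianMixture(n_components=n_components).fit(pts)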
Link, P.K.; Mahoney, J.B.; Bruner, D.J.; Batatian, L.D.; Wilson, Eric; Williams, F.J.C.
1995-01-01
The paper version of the Geologic map of outcrop areas of sedimentary units in the eastern part of the Hailey 1x2 Quadrangle and part of the southern part of the Challis 1x2 Quadrangle, south-central Idaho was compiled by Paul Link and others in 1995. The plate was compiled on a 1:100,000-scale topographic base map. TechniGraphic System, Inc. of Fort Collins, Colorado digitized this map under contract for N. Shock. G. Green edited and prepared the digital version for publication as a GIS database. The digital geologic map database can be queried in many ways to produce a variety of geologic maps.
Time-Critical Database Conditions Data-Handling for the CMS Experiment
NASA Astrophysics Data System (ADS)
De Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio
2011-08-01
Automatic, synchronous and reliable population of the conditions database is critical for the correct operation of the online selection as well as of the offline reconstruction and data analysis. We describe here the system put in place in the CMS experiment to automate the processes that populate the database centrally and make conditions data promptly available, both online for the high-level trigger and offline for reconstruction. The data are "dropped" by the users into a dedicated service which synchronizes them and takes care of writing them into the online database. They are then automatically streamed to the offline database, and hence are immediately accessible offline worldwide. This mechanism was used intensively during 2008 and 2009 operation with cosmic ray challenges and the first LHC collision data, and many improvements have been made since. The experience of these first years of operation is discussed in detail.
Puncture-proof picture archiving and communication system.
Willis, C E; McCluggage, C W; Orand, M R; Parker, B R
2001-06-01
As we become increasingly dependent on our picture archiving and communications system (PACS) for the clinical practice of medicine, the demand for improved reliability becomes urgent. Borrowing principles from the discipline of Reliability Engineering, we have identified components of our system that constitute single points of failure and have endeavored to eliminate these through redundant components and manual work-around procedures. To assess the adequacy of our preparations, we have identified a set of plausible events that could interfere with the function of one or more of our PACS components. These events could be as simple as the loss of the network connection to a single component or as broad as the loss of our central data center. We have identified the need to continue to operate during adverse conditions, as well as the requirement to recover rapidly from major disruptions in service. This assessment led us to modify the physical locations of central PACS components within our physical plant. We are also taking advantage of actual disruptive events, coincident with a major expansion of our facility, to test our recovery procedures. Based on our recognition of the vital nature of our electronic images for patient care, we are now recording electronic images in two copies on disparate media. The image database is critical to both continued operations and recovery. Restoration of the database from periodic tape backups with a 24-hour cycle time may not support our clinical scenario: acquisition modalities have a limited local storage capacity, which in some cases will not hold the daily workload. Restoration of the database from the archived media is an exceedingly slow process that will likely not meet our requirement to restore clinical operations without significant delay. Our PACS vendor is working on concurrent image databases that would be capable of nearly immediate switchover and recovery.
78 FR 69097 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-18
... (CRM) system to improve the response to correspondences from individuals seeking information from a... and private sector sources.'' The SalesForce CRM provides a centralized portal to manage frequently... topics. Depending on the topic searched, the CRM queries the database of pre-approved questions and...
Does Public Sector Control Reduce Variance in School Quality?
ERIC Educational Resources Information Center
Pritchett, Lant; Viarengo, Martina
2015-01-01
Does the government control of school systems facilitate equality in school quality? Whether centralized or localized control produces more equality depends not only on what "could" happen in principle, but also on what does happen in practice. We use the Programme for International Student Assessment (PISA) database to examine the…
Extending the ARIADNE Web-Based Learning Environment.
ERIC Educational Resources Information Center
Van Durm, Rafael; Duval, Erik; Verhoeven, Bart; Cardinaels, Kris; Olivie, Henk
One of the central notions of the ARIADNE learning platform is a share-and-reuse approach toward the development of digital course material. The ARIADNE infrastructure includes a distributed database called the Knowledge Pool System (KPS), which acts as a repository of pedagogical material, described with standardized IEEE LTSC Learning Object…
Durski, Kara N; Singaravelu, Shalini; Teo, Junxiong; Naidoo, Dhamari; Bawo, Luke; Jambai, Amara; Keita, Sakoba; Yahaya, Ali Ahmed; Muraguri, Beatrice; Ahounou, Brice; Katawera, Victoria; Kuti-George, Fredson; Nebie, Yacouba; Kohar, T Henry; Hardy, Patrick Jowlehpah; Djingarey, Mamoudou Harouna; Kargbo, David; Mahmoud, Nuha; Assefa, Yewondwossen; Condell, Orla; N'Faly, Magassouba; Van Gurp, Leon; Lamanu, Margaret; Ryan, Julia; Diallo, Boubacar; Daffae, Foday; Jackson, Dikena; Malik, Fayyaz Ahmed; Raftery, Philomena; Formenty, Pierre
2017-06-15
The international impact, rapid widespread transmission, and reporting delays during the 2014 Ebola outbreak in West Africa highlighted the need for a global, centralized database to inform outbreak response. The World Health Organization and the Emerging and Dangerous Pathogens Laboratory Network addressed this need by supporting the development of a global laboratory database. Specimens were collected in the affected countries from patients and dead bodies meeting the case definitions for Ebola virus disease. Test results were entered in nationally standardized spreadsheets and consolidated onto a central server. From March 2014 through August 2016, 256,343 specimens tested for Ebola virus disease were captured in the database. Thirty-one specimen types were collected, and a variety of diagnostic tests were performed. Regular analysis of the data described the functionality of laboratory and response systems, positivity rates, and the geographic distribution of specimens. With data standardization and end-user buy-in, the collection and analysis of large amounts of data with multiple stakeholders and collaborators across various user-access levels were made possible and contributed to outbreak response needs. The usefulness and value of a multifunctional global laboratory database are far reaching, with uses including virtual biobanking, disease forecasting, and adaptation to other disease outbreaks. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.
DRUMS: a human disease related unique gene mutation search engine.
Li, Zuofeng; Liu, Xingnan; Wen, Jingran; Xu, Ye; Zhao, Xin; Li, Xuan; Liu, Lei; Zhang, Xiaoyan
2011-10-01
With the completion of the human genome project and the development of new methods for gene variant detection, the integration of mutation data and its phenotypic consequences has become more important than ever. Among all available resources, locus-specific databases (LSDBs) curate one or more specific genes' mutation data along with high-quality phenotypes. Although some genotype-phenotype data from LSDBs have been integrated into central databases, little effort has been made to integrate all these data through a search engine approach. In this work, we have developed the disease related unique gene mutation search engine (DRUMS), a convenient tool for biologists or physicians to retrieve gene variant and related phenotype information. Gene variant and phenotype information are stored in a gene-centred relational database. Moreover, the relationships between mutations and diseases are indexed by the uniform resource identifier from the LSDB or another central database. By querying DRUMS, users can access the most popular mutation databases under one interface. DRUMS can be treated as a domain-specific search engine. By using web crawling, indexing, and searching technologies, it provides a competitively efficient interface for searching and retrieving mutation data and their relationships to diseases. The present system is freely accessible at http://www.scbit.org/glif/new/drums/index.html. © 2011 Wiley-Liss, Inc.
The study of co-citation analysis and knowledge structure on healthcare domain
NASA Astrophysics Data System (ADS)
Chu, Kuo-Chung; Liu, Wen-I.; Tsai, Ming-Yu
2012-11-01
With the prevalence of the Internet and digital archives, online e-journal databases make it easy for scholars to search the literature in a research domain, or to cross-search an interdisciplinary field, so that the key literature can be efficiently traced. This study builds a Web-based citation analysis system consisting of four modules: (1) a literature search module; (2) a statistics module; (3) an article analysis module; and (4) a co-citation analysis module. The system focuses on the PubMed Central dataset, which has 170,000 records. Within a research domain, a specific keyword can be searched in terms of authors, journals, and core issues. In addition, we use data mining techniques for co-citation analysis. The results assist researchers with an in-depth understanding of the domain knowledge. An automated system for co-citation analysis helps in understanding the changes, trends, and knowledge structure of a research domain. To the best of our knowledge, the proposed system differs from the analysis functions of existing online electronic retrieval databases. We expect the proposed system to become a value-added database for the healthcare domain and hope it will contribute to its research community.
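The core co-citation computation is simple to state: two articles are co-cited whenever they appear together in a third article's reference list, and the pair counts form the co-citation matrix. A minimal sketch with invented data:

    from collections import Counter
    from itertools import combinations

    # one list of cited-article IDs per citing article (invented data)
    reference_lists = [["A", "B", "C"], ["A", "B"], ["B", "C", "D"]]

    cocitations = Counter()
    for refs in reference_lists:
        for pair in combinations(sorted(set(refs)), 2):
            cocitations[pair] += 1   # the pair was cited together once more

    print(cocitations.most_common(3))   # e.g. [(('A', 'B'), 2), ...]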
An Analysis of the United States Naval Aviation Schedule Removal Component (SRC) Card Process
2009-12-01
JSF has the ability to communicate in flight with its maintenance system, ALIS. Its Prognostic Health Management (PHM) System abilities allow it to...end-users. PLCS allows users of the system, through a central database, visibility of a component's history and lifecycle data. Since both OOMA...used in PLM systems. This research recommends a PLM system that is Web-based and uses DoD-mandated UID technology as the future for data
Software support for Huntington's disease research.
Conneally, P M; Gersting, J M; Gray, J M; Beidleman, K; Wexler, N S; Smith, C L
1991-01-01
Huntington's disease (HD) is a hereditary disorder involving the central nervous system. Its effects are devastating, to the affected person as well as to his or her family. The Department of Medical and Molecular Genetics at Indiana University (IU) plays an integral part in Huntington's research by providing computerized repositories of HD family information for researchers and families. The National Huntington's Disease Research Roster, founded in 1979 at IU, and the Huntington's Disease in Venezuela Project database contain information that has proven invaluable in the worldwide field of HD research. This paper addresses the types of information stored in each database, the pedigree database program (MEGADATS) used to manage the data, and significant findings that have resulted from access to the data.
Compilation of historical water-quality data for selected springs in Texas, by ecoregion
Heitmuller, Franklin T.; Williams, Iona P.
2006-01-01
Springs are important hydrologic features in Texas. A database of about 2,000 historically documented springs and available spring-flow measurements previously has been compiled and published, but water-quality data remain scattered in published sources. This report by the U.S. Geological Survey, in cooperation with the Texas Parks and Wildlife Department, documents the compilation of data for 232 springs in Texas on the basis of a set of criteria and the development of a water-quality database for the selected springs. The selection of springs for compilation of historical water-quality data in Texas was made using existing digital and hard-copy data, responses to mailed surveys, selection criteria established by various stakeholders, geographic information systems, and digital database queries. Most springs were selected by computing the highest mean spring flows for each Texas level III ecoregion. A brief assessment of the water-quality data for springs in Texas shows that few data are available in the Arizona/New Mexico Mountains, High Plains, East Central Texas Plains, Western Gulf Coastal Plain, and South Central Plains ecoregions. Water-quality data are more abundant for the Chihuahuan Deserts, Edwards Plateau, and Texas Blackland Prairies ecoregions. Selected constituent concentrations in Texas springs, including silica, calcium, magnesium, sodium, potassium, strontium, sulfate, chloride, fluoride, nitrate (nitrogen), dissolved solids, and hardness (as calcium carbonate) are comparatively high in the Chihuahuan Deserts, Southwestern Tablelands, Central Great Plains, and Cross Timbers ecoregions, mostly as a result of subsurface geology. Comparatively low concentrations of selected constituents in Texas springs are associated with the Arizona/New Mexico Mountains, Southern Texas Plains, East Central Texas Plains, and South Central Plains ecoregions.
[A telemedicine electrocardiography system based on the component-architecture soft].
Potapov, I V; Selishchev, S V
2004-01-01
The paper deals with a universal component-oriented architecture for creating telemedicine applications. The developed system supports ECG recording, pressure measurement and pulsometry. The system design comprises a central database server and a client telemedicine module. Data can be transmitted via different interfaces--from an ordinary local network to digital satellite phones. Data protection is ensured by chip cards that were used to implement the 3DES authentication algorithm.
Group updates Gravity Database for central Andes
NASA Astrophysics Data System (ADS)
MIGRA Group; Götze, H.-J.
Between 1993 and 1995 a group of scientists from Chile, Argentina, and Germany incorporated some 2000 new gravity observations into a database that covers a remote region of the Central Andes in northern Chile and northwestern Argentina (between 64°-71°W and 20°-29°S). The database can be used to study the structure and evolution of the Andes. About 14,000 gravity values are included in the database, including older, reprocessed data. Researchers at universities or governmental agencies are welcome to use the data for noncommercial purposes.
Naval sensor data database (NSDD)
NASA Astrophysics Data System (ADS)
Robertson, Candace J.; Tubridy, Lisa H.
1999-08-01
The Naval Sensor Data database (NSDD) is a multi-year effort to archive, catalogue, and disseminate data from all types of sensors to the mine warfare, signal and image processing, and sensor development communities. The purpose is to improve and accelerate research and technology: providing performers with the data required to develop and validate improvements in hardware, simulation, and processing will foster advances in sensor and system performance. The NSDD will provide a centralized source of sensor data and its associated ground truth, which will benefit signal processing, computer-aided detection and classification, data compression, data fusion, and geo-referencing, as well as sensor and sensor system design.
Ground Support Software for Spaceborne Instrumentation
NASA Technical Reports Server (NTRS)
Anicich, Vincent; Thorpe, Rob; Fletcher, Greg; Waite, Hunter; Xu, Hykua; Walter, Erin; Frick, Kristie; Farris, Greg; Gell, Dave; Furman, Jufy;
2004-01-01
ION is a system of ground support software for the ion and neutral mass spectrometer (INMS) instrument aboard the Cassini spacecraft. By incorporating commercial off-the-shelf database, Web server, and Java application components, ION offers considerably more ground-support-service capability than was available previously. A member of the team that operates the INMS, or a scientist who uses the data collected by the INMS, can gain access to most of the services provided by ION via a standard point-and-click hyperlink interface generated by almost any Web-browser program running in almost any operating system on almost any computer. Data are stored in one central location in a relational database in a non-proprietary format, are accessible in many combinations and formats, and can be combined with data from other instruments and spacecraft. The use of the Java programming language as a system-interface language offers numerous capabilities for object-oriented programming and for making the database accessible to participants using a variety of computer hardware and software.
Bernstein, Inge T; Lindorff-Larsen, Karen; Timshel, Susanne; Brandt, Carsten A; Dinesen, Birger; Fenger, Mogens; Gerdes, Anne-Marie; Iversen, Lene H; Madsen, Mogens R; Okkels, Henrik; Sunde, Lone; Rahr, Hans B; Wikman, Friedrick P; Rossing, Niels
2011-05-01
The Danish HNPCC register is a publicly financed national database. The register gathers epidemiological and genomic data on HNPCC families to improve prognosis by screening and identifying family members at risk. Diagnostic data are generated throughout the country and collected over several decades. Until recently, paper-based reports were sent to the register and typed into the database. In the EC co-funded INFOBIOMED network of excellence, the register was a model for electronic exchange of epidemiological and genomic data between diagnosing/treating departments and the central database. The aim of digitization was to optimize the organization of screening by facilitating the combination of genotype-phenotype information, and to generate IT tools sufficiently usable and generic to be implemented in other countries and for other oncogenetic diseases. The focus was on integration of heterogeneous data, elaboration and dissemination of classification systems, and development of communication standards. At the conclusion of the EU project in 2007 the system was implemented in 12 pilot departments. In the surgical departments this resulted in a 192% increase in reports to the database. Several gaps were identified: lack of standards for the data to be exchanged, lack of local databases suitable for direct communication, and reporting being time-consuming and dependent on interest and feedback. © 2011 Wiley-Liss, Inc.
Development of the Lymphoma Enterprise Architecture Database: a caBIG Silver level compliant system.
Huang, Taoying; Shenoy, Pareen J; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W; Flowers, Christopher R
2009-04-03
Lymphomas are the fifth most common cancer in the United States, with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids the development of novel therapies. We developed a cancer Biomedical Informatics Grid (caBIG) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system (LEAD), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD platform for data management. The caCORE SDK-generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance, Epidemiology, and End Results database. This work provides a clear example of how semantic technologies from caBIG can be applied to support a wide range of clinical and research tasks, and to integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG to the management of clinical and biological data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, William A.; Litovitz, Toby L.; Belson, Martin G.
2005-09-01
The Toxic Exposure Surveillance System (TESS) is a uniform data set of US poison center cases. Categories of information include the patient, the caller, the exposure, the substance(s), clinical toxicity, treatment, and medical outcome. The TESS database was initiated in 1985 and provides a baseline of more than 36.2 million cases through 2003. The database has been utilized for a number of safety evaluations. Consideration of the strengths and limitations of TESS data must be incorporated into data interpretation. Real-time toxicovigilance was initiated in 2003 with continuous uploading of new cases from all poison centers to a central database. Real-time toxicovigilance utilizing general and specific approaches is systematically run against TESS, further increasing the potential utility of poison center experiences as a means of early identification of potential public health threats.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevins, N; Vanderhoek, M; Lang, S
2014-06-15
Purpose: Medical display monitor calibration and quality control present challenges to medical physicists. The purpose of this work is to demonstrate and share experiences with an open source package that allows for both initial monitor setup and routine performance evaluation. Methods: A software package, pacsDisplay, has been developed over the last decade to aid in the calibration of all monitors within the radiology group in our health system. The software is used to calibrate monitors to follow the DICOM Grayscale Standard Display Function (GSDF) via lookup tables installed on the workstation. Additional functionality facilitates periodic evaluations of both primary and secondary medical monitors to ensure satisfactory performance. This software is installed on all radiology workstations, and can also be run as a stand-alone tool from a USB disk. Recently, a database has been developed to store and centralize the monitor performance data and to provide long-term trends for compliance with internal standards and various accrediting organizations. Results: Implementation and utilization of pacsDisplay has resulted in improved monitor performance across the health system. Monitor testing is now performed at regular intervals and the software is being used across multiple imaging modalities. Monitor performance characteristics such as maximum and minimum luminance, ambient luminance and illuminance, color tracking, and GSDF conformity are loaded into a centralized database for system performance comparisons. Compliance reports for organizations such as MQSA, ACR, and TJC are generated automatically and stored in the same database. Conclusion: An open source software solution has simplified and improved the standardization of displays within our health system. This work serves as an example method for calibrating and testing monitors within an enterprise health system.
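The LUT-based calibration step can be pictured as follows: measure the luminance each of the 256 driving levels actually produces, compute the GSDF target luminance for each level, then remap every level to the level whose measured luminance is closest to the target. The sketch below assumes both curves are supplied as arrays and hard-codes no GSDF coefficients; it illustrates the general approach, not pacsDisplay's implementation.

    import numpy as np

    def build_lut(measured, target):
        """measured: luminance (cd/m^2) at each of 256 driving levels (photometer)
        target: GSDF-prescribed luminance at each of the 256 levels"""
        lut = np.empty(256, dtype=np.uint8)
        for level in range(256):
            # pick the driving level whose output best matches the GSDF target
            lut[level] = np.argmin(np.abs(measured - target[level]))
        return lut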
Data bases for forest inventory in the North-Central Region.
Jerold T. Hahn; Mark H. Hansen
1985-01-01
Describes the data collected by the Forest Inventory and Analysis (FIA) Research Work Unit at the North Central Forest Experiment Station. Explains how interested parties may obtain information from the databases either through direct access or by special requests to the FIA database manager.
75 FR 57437 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-21
... a Food Safety Education and Training Materials Database. The Database is a centralized gateway to... creating previously available education materials) (2) provide a central gateway to access the education materials (3) create a systematic and efficient method of collecting data from USDA grantees and (4) promote...
78 FR 69040 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-18
... a Food Safety Education and Training Materials Database. The Database is a centralized gateway to... creating previously available education materials), (2) provide a central gateway to access the education materials, (3) create a systematic and efficient method of collecting data from USDA grantees, and (4...
Strategies for drug delivery to the central nervous system by systemic route.
Kasinathan, Narayanan; Jagani, Hitesh V; Alex, Angel Treasa; Volety, Subrahmanyam M; Rao, J Venkata
2015-05-01
Delivery of a drug into the central nervous system (CNS) is considered difficult. Most of the drugs discovered over the past decade are biologics, which are high in molecular weight and polar in nature. The delivery of such drugs across the blood-brain barrier presents problems. This review discusses some of the options available for reaching the CNS by the systemic route, focusing mainly on recent developments in systemic delivery of drugs to the CNS. Databases such as Scopus, Google Scholar, ScienceDirect and SciFinder, as well as online journals, were consulted in preparing this article, which includes 89 references. There are at least nine strategies that could be adopted to achieve the required drug concentration in the CNS. Recent developments in drug delivery are very promising for delivering biologics into the CNS.
Materials, processes, and environmental engineering network
NASA Technical Reports Server (NTRS)
White, Margo M.
1993-01-01
The Materials, Processes, and Environmental Engineering Network (MPEEN) was developed as a central holding facility for materials testing information generated by the Materials and Processes Laboratory. It contains information from other NASA centers and outside agencies, and also includes the NASA Environmental Information System (NEIS) and Failure Analysis Information System (FAIS) data. Environmental replacement materials information is a newly developed focus of MPEEN. This database is the NASA Environmental Information System, NEIS, which is accessible through MPEEN. Environmental concerns are addressed regarding materials identified by the NASA Operational Environment Team, NOET, to be hazardous to the environment. An environmental replacement technology database is contained within NEIS. Environmental concerns about materials are identified by NOET, and control or replacement strategies are formed. This database also contains the usage and performance characteristics of these hazardous materials. In addition to addressing environmental concerns, MPEEN contains one of the largest materials databases in the world. Over 600 users access this network on a daily basis. There is information available on failure analysis, metals and nonmetals testing, materials properties, standard and commercial parts, foreign alloy cross-reference, Long Duration Exposure Facility (LDEF) data, and Materials and Processes Selection List data.
ACToR: Aggregated Computational Toxicology Resource (T) ...
The EPA Aggregated Computational Toxicology Resource (ACToR) is a set of databases compiling information on chemicals in the environment from a large number of public and in-house EPA sources. ACToR has 3 main goals: (1) to serve as a repository of public toxicology information on chemicals of interest to the EPA, and in particular to be a central source for the testing data on all chemicals regulated by all EPA programs; (2) to be a source of in vivo training data sets for building in vitro to in vivo computational models; (3) to serve as a central source of chemical structure and identity information for the ToxCast™ and Tox21 programs. There are 4 main databases, all linked through a common set of chemical information and a common structure linking chemicals to assay data: the public ACToR system (available at http://actor.epa.gov); the ToxMiner database holding ToxCast and Tox21 data, along with results from statistical analyses on these data; the Tox21 chemical repository, which manages the ordering and sample tracking process for the larger Tox21 project; and the public version of ToxRefDB. The public ACToR system contains information on ~500K compounds with toxicology, exposure and chemical property information from >400 public sources. The web site is visited by ~1,000 unique users per month and generates ~1,000 page requests per day on average. The databases are built on open source technology, which has allowed us to export them to a number of col
Electronic Library and Other Technology "Connects" Anchorage Students.
ERIC Educational Resources Information Center
Davis, E. E. (Gene); Scott, Marilynn S.
1986-01-01
The Anchorage, Alaska, School District is dealing with the problem of teaching students about the "information age" through a unique program in their central library system. It was one of the first school districts in the nation to computerize its library and to provide access to computer databases to the students through telephones as…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-12
... System (DUNS) number and a Taxpayer Identification Number (TIN); b. Register both your DUNS number and TIN with the Central Contractor Registry (CCR), the Government's primary registrant database; c. Provide your DUNS number and TIN on your application; and d. Maintain an active CCR registration with...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-23
... Numbering System (DUNS) number and a Taxpayer Identification Number (TIN); b. Register both your DUNS number and TIN with the Central Contractor Registry (CCR), the Government's primary registrant database; c. Provide your DUNS number and TIN on your application; and d. Maintain an active CCR registration with...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-06
... Numbering System (DUNS) number and a Taxpayer Identification Number (TIN); b. Register both your DUNS number and TIN with the Central Contractor Registry (CCR), the Government's primary registrant database; c. Provide your DUNS number and TIN on your application; and d. Maintain an active CCR registration with...
77 FR 187 - Federal Acquisition Regulation; Transition to the System for Award Management (SAM)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-03
... architecture. Deletes reference to "business partner network" at 4.1100, Scope, which is no longer necessary...) architecture has begun. This effort will transition the Central Contractor Registration (CCR) database, the...) to the new architecture. This case provides the first step in updating the FAR for these changes, and...
A Web-based geographic information system for monitoring animal welfare during long journeys.
Ippoliti, Carla; Di Pasquale, Adriano; Fiore, Gianluca; Savini, Lara; Conte, Annamaria; Di Gianvito, Federica; Di Francesco, Cesare
2007-01-01
Animal welfare protection during long journeys is mandatory under European Union regulations, which are designed to ensure that animals are transported in accordance with animal welfare requirements, to provide control bodies with a regulatory tool to react promptly in cases of non-compliance, and to ensure a safe network between products, animals and farms. Regulation 1/2005/EC provides for a system of traceability within European Union member states. The Joint Research Centre of the European Commission (JRC) has developed a prototype system fulfilling the requirements of the Regulation, which is able to monitor compliance with animal welfare requirements during transportation, register electronic identification of transported animals, and store data in a central database shared with the other member states through a Web-based application. Test equipment has recently been installed on a vehicle that records data on vehicle position (geographic coordinates, date/time) and animal welfare conditions (measurements of internal temperature of the vehicle, etc.). The information is recorded at fixed intervals and transmitted to the central database. The authors describe the Web-based geographic information system, through which authorised users can visualise instantly the real-time position of the vehicle, monitor the sensor-recorded data and follow the time-space path of the truck during journeys.
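To make the data flow concrete, a minimal vehicle-side sketch of the recording loop described above follows: at a fixed interval, assemble a record of position and welfare sensor readings and transmit it to the shared central database. The interval, record fields, and endpoint URL are illustrative assumptions, not the JRC prototype's actual interface (Python, standard library only).

import json, time, urllib.request

def telemetry_record():
    # One record per interval: GPS fix plus welfare sensor readings (stubbed here).
    return {
        "vehicle_id": "TRUCK-042",  # assumed identifier scheme
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "lat": 42.35, "lon": 13.40,      # geographic coordinates
        "internal_temp_c": 24.5,         # internal temperature of the vehicle
    }

def transmit(record, url="https://example.org/central-db/positions"):
    # Send the record to the central database shared among member states.
    req = urllib.request.Request(url, data=json.dumps(record).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

while True:
    transmit(telemetry_record())
    time.sleep(300)  # fixed recording interval (assumed: 5 minutes)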
Human Variome Project Quality Assessment Criteria for Variation Databases.
Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter
2016-06-01
Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease(s). These databases are widely considered the most reliable information source for a particular gene/protein/disease, but it should also be made clear that they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The resulting HVP quality evaluation criteria are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases: BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. © 2016 WILEY PERIODICALS, INC.
Towards a Global Service Registry for the World-Wide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro
2014-06-01
The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems: from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisations' own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of the information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisations' configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages compared to the current situation, and how it can support the evolution of information systems.
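As an illustration of the registered-versus-available comparison, the following Python sketch aggregates two toy resource lists and reports the inconsistencies an automated check would flag; the record fields and endpoint strings are assumptions for illustration, not the Global Service Registry's actual schema.

def consistency_report(registered, available):
    # Registered resources: what should be there. Available: what is there.
    registered_ids = {r["endpoint"] for r in registered}
    available_ids = {a["endpoint"] for a in available}
    return {
        "registered_but_absent": sorted(registered_ids - available_ids),
        "available_but_unregistered": sorted(available_ids - registered_ids),
    }

# Toy records standing in for entries aggregated from several information systems.
registered = [{"endpoint": "srm://se01.example.org"},
              {"endpoint": "cream://ce02.example.org"}]
available = [{"endpoint": "srm://se01.example.org"}]

print(consistency_report(registered, available))
# -> {'registered_but_absent': ['cream://ce02.example.org'], 'available_but_unregistered': []}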
Landscape features, standards, and semantics in U.S. national topographic mapping databases
Varanka, Dalia
2009-01-01
The objective of this paper is to examine the contrast between local, field-surveyed topographical representation and feature representation in digital, centralized databases and to clarify their ontological implications. The semantics of these two approaches are contrasted by examining the categorization of features by subject domains inherent to national topographic mapping. When comparing five USGS topographic mapping domain and feature lists, results indicate that multiple semantic meanings and ontology rules were applied to the initial digital database, but were lost as databases became more centralized at national scales, and common semantics were replaced by technological terms.
Software support for Huntington's disease research.
Conneally, P. M.; Gersting, J. M.; Gray, J. M.; Beidleman, K.; Wexler, N. S.; Smith, C. L.
1991-01-01
Huntington's disease (HD) is a hereditary disorder involving the central nervous system. Its effects are devastating to the affected person as well as his or her family. The Department of Medical and Molecular Genetics at Indiana University (IU) plays an integral part in Huntington's research by providing computerized repositories of HD family information for researchers and families. The National Huntington's Disease Research Roster, founded in 1979 at IU, and the Huntington's Disease in Venezuela Project database contain information that has proven invaluable in the worldwide field of HD research. This paper addresses the types of information stored in each database, the pedigree database program (MEGADATS) used to manage the data, and significant findings that have resulted from access to the data. PMID:1839672
Quality assessment and improvement of nationwide cancer registration system in Taiwan: a review.
Chiang, Chun-Ju; You, San-Lin; Chen, Chien-Jen; Yang, Ya-Wen; Lo, Wei-Cheng; Lai, Mei-Shu
2015-03-01
Cancer registration provides core information for cancer surveillance and control. The population-based Taiwan Cancer Registry was implemented in 1979. After the Cancer Control Act was promulgated in 2003, the completeness (97%) and data quality of the cancer registry database reached an excellent level. Hospitals with 50 or more beds, which provide outpatient and hospitalized cancer care, are recruited to report 20 items of information on all newly diagnosed cancers to the central registry office (called the short-form database). The Taiwan Cancer Registry is organized and funded by the Ministry of Health and Welfare. The National Taiwan University has been contracted to operate the registry and has organized an advisory board to standardize definitions of terminology, coding, and procedures of the registry's reporting system since 1996. To monitor cancer care patterns and evaluate cancer treatment outcomes, the central cancer registry has been reformed since 2002 to include detailed items on the stage at diagnosis and the first course of treatment (called the long-form database). Eighty hospitals, which account for >90% of total cancer cases, are involved in the long-form registration. The Taiwan Cancer Registry has run smoothly for >30 years and provides an essential foundation for academic research and cancer control policy in Taiwan. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Ono, Yosuke; Ono, Sachiko; Yasunaga, Hideo; Matsui, Hiroki; Fushimi, Kiyohide; Tanaka, Yuji
2016-02-01
Thyroid storm is a life-threatening and emergent manifestation of thyrotoxicosis. However, predictive features associated with fatal outcomes in this crisis have not been clearly defined because of its rarity. The objective of this study was to investigate the associations of patient characteristics, treatments, and comorbidities with in-hospital mortality. We conducted a retrospective observational study of patients diagnosed with thyroid storm using a national inpatient database in Japan from April 1, 2011 to March 31, 2014. Of approximately 21 million inpatients in the database, we identified 1324 patients diagnosed with thyroid storm. The mean (standard deviation) age was 47 (18) years, and 943 (71.3%) patients were female. The overall in-hospital mortality was 10.1%. The number of patients was highest in the summer season. The most common comorbidity at admission was cardiovascular diseases (46.6%). Multivariable logistic regression analyses showed that higher mortality was significantly associated with older age (≥60 years), central nervous system dysfunction at admission, nonuse of antithyroid drugs and β-blockade, and requirement for mechanical ventilation and therapeutic plasma exchange combined with hemodialysis. The present study identified clinical features associated with mortality of thyroid storm using large-scale data. Physicians should pay special attention to older patients with thyrotoxicosis and coexisting central nervous system dysfunction. Future prospective studies are needed to clarify treatment options that could improve the survival outcomes of thyroid storm.
Use of a secure Internet Web site for collaborative medical research.
Marshall, W W; Haley, R W
2000-10-11
Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
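Several of the steps above (choosing a database management system and middleware, authenticating users, writing into a central database) can be illustrated with a minimal Python sketch. Flask and SQLite below are stand-ins for whatever DBMS and middleware a study selects; the table, field names, and credentials are hypothetical, and a real deployment would add TLS, a firewall, server certificates, and audit trails as the article advises.

import sqlite3
from flask import Flask, abort, request

app = Flask(__name__)
USERS = {"site_a": "s3cret"}  # per-collaborator credentials (hypothetical)

def db():
    conn = sqlite3.connect("study.db")
    conn.execute("CREATE TABLE IF NOT EXISTS observations "
                 "(site TEXT, subject_id TEXT, value REAL)")
    return conn

@app.route("/submit", methods=["POST"])
def submit():
    # User-name and password authentication, one of the paper's security measures.
    auth = request.authorization
    if not auth or USERS.get(auth.username) != auth.password:
        abort(401)
    row = request.get_json()
    with db() as conn:  # commits the transaction on success
        conn.execute("INSERT INTO observations VALUES (?, ?, ?)",
                     (auth.username, row["subject_id"], float(row["value"])))
    return {"status": "stored"}

if __name__ == "__main__":
    app.run()  # a real deployment would sit behind TLS and a firewall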
Latest developments for the IAGOS database: Interoperability and metadata
NASA Astrophysics Data System (ADS)
Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume
2014-05-01
In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by an open access policy based on the submission of research requests, which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr, as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is under continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We are also implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, the Aircraft Research DLR database, and the Jülich WCS web application JOIN (Jülich OWS Interface), which combines model outputs with in situ data for intercomparison. The optimal data transfer protocol is being investigated to ensure interoperability. To facilitate satellite and model validation, tools will be made available for co-location and comparison with IAGOS. We will enhance the JOIN application in order to properly display aircraft data as vertical profiles and along individual flight tracks, and to allow for graphical comparison with model results that are accessible through interoperable web services, such as the daily products from the GMES/Copernicus atmospheric service.
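Since the abstract states that IAGOS data will be delivered in NetCDF following the NetCDF-CF conventions, a minimal sketch of what a CF-annotated file looks like may help. The variable names, units, and attribute values below are illustrative assumptions written with the netCDF4 Python library, not the actual IAGOS metadata profile.

from netCDF4 import Dataset
import numpy as np

with Dataset("iagos_sketch.nc", "w") as nc:
    # Global CF metadata identifying the conventions and the dataset.
    nc.Conventions = "CF-1.6"
    nc.title = "Example in situ ozone observations along a flight track"

    nc.createDimension("time", 3)
    t = nc.createVariable("time", "f8", ("time",))
    t.standard_name = "time"
    t.units = "seconds since 2014-01-01 00:00:00"

    o3 = nc.createVariable("ozone", "f4", ("time",))
    o3.standard_name = "mole_fraction_of_ozone_in_air"  # CF standard name
    o3.units = "ppb"  # assumed unit for the sketch

    t[:] = np.array([0.0, 4.0, 8.0])
    o3[:] = np.array([42.1, 43.0, 41.7])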
AphidBase: A centralized bioinformatic resource for annotation of the pea aphid genome
Legeai, Fabrice; Shigenobu, Shuji; Gauthier, Jean-Pierre; Colbourne, John; Rispe, Claude; Collin, Olivier; Richards, Stephen; Wilson, Alex C. C.; Tagu, Denis
2015-01-01
AphidBase is a centralized bioinformatic resource that was developed to facilitate community annotation of the pea aphid genome by the International Aphid Genomics Consortium (IAGC). The AphidBase Information System designed to organize and distribute genomic data and annotations for a large international community was constructed using open source software tools from the Generic Model Organism Database (GMOD). The system includes Apollo and GBrowse utilities as well as a wiki, blast search capabilities and a full text search engine. AphidBase strongly supported community cooperation and coordination in the curation of gene models during community annotation of the pea aphid genome. AphidBase can be accessed at http://www.aphidbase.com. PMID:20482635
The Database Query Support Processor (QSP)
NASA Technical Reports Server (NTRS)
1993-01-01
The number and diversity of databases available to users continues to increase dramatically. Currently, the trend is towards decentralized, client-server architectures that (on the surface) are less expensive to acquire, operate, and maintain than information architectures based on centralized, monolithic mainframes. The database query support processor (QSP) effort evaluates the performance of a network-level, heterogeneous database access capability. Air Force Materiel Command's Rome Laboratory has developed an approach to seamless access to heterogeneous databases, based on ANSI standard X3.138-1988, 'The Information Resource Dictionary System (IRDS),' and on extensions to data dictionary technology. To successfully query a decentralized information system, users must know what data are available from which source, or have the knowledge and system privileges necessary to find out this information. Privacy and security considerations prohibit free and open access to every information system in every network. Even in completely open systems, the time required to locate relevant data (in systems of any appreciable size) would be better spent analyzing the data, assuming the original question was not forgotten. Extensions to data dictionary technology have the potential to more fully automate the search and retrieval of relevant data in a decentralized environment. Substantial amounts of time and money could be saved by not having to teach users what data reside in which systems and how to access each of those systems. Information describing data and how to get it could be removed from the application and placed in a dedicated repository where it belongs. The result is simplified applications that are less brittle and less expensive to build and maintain. Software technology providing the required functionality is off the shelf. The key difficulty is in defining the metadata required to support the process. The database query support processor effort will provide quantitative data on the amount of effort required to implement an extended data dictionary at the network level, add new systems, adapt to changing user needs, and provide sound estimates on operations and maintenance costs and savings.
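A minimal sketch of the extended-data-dictionary idea follows: a network-level repository records which system holds which data element and how to reach it, so an application asks the dictionary rather than hard-coding each source. The entries, connection strings, and field names are illustrative assumptions, not the IRDS specification.

from dataclasses import dataclass

@dataclass
class DictionaryEntry:
    data_element: str   # logical name users query by
    system: str         # which database actually holds it
    locator: str        # how to reach it (DSN, URL, ...)
    access_note: str    # privileges or restrictions

REGISTRY = [
    DictionaryEntry("part_number", "standard_parts_db", "dsn://parts01", "open"),
    DictionaryEntry("alloy_cross_ref", "materials_db", "dsn://mat02", "requires account"),
]

def locate(data_element):
    """Find every system that can answer a query for this logical element."""
    return [e for e in REGISTRY if e.data_element == data_element]

print([e.system for e in locate("part_number")])  # -> ['standard_parts_db']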
Troutman, Sandra M.; Stanley, Richard G.
2003-01-01
This database and accompanying text depict historical and modern reported occurrences of petroleum both in wells and at the surface within the boundaries of the Central Alaska Province. These data were compiled from previously published and unpublished sources and were prepared for use in the 2002 U.S. Geological Survey petroleum assessment of Central Alaska, Yukon Flats region. Indications of petroleum are described as oil or gas shows in wells, oil or gas seeps, or outcrops of oil shale or oil-bearing rock and include confirmed and unconfirmed reports. The scale of the source map limits the spatial resolution (scale) of the database to 1:2,500,000 or smaller.
[Interpretation in the Danish health-care system].
Lund Hansen, Marianne Taulo; Nielsen, Signe Smith
2013-03-04
Communication between health professionals and patients is central to treatment and patient safety in the health-care system. This systematic review examines the last ten years of specialist literature concerning interpretation in the Danish health-care system. A structured search in two databases, screening of references, and literature recommended by two scientists led to the identification of seven relevant articles. The review showed that professional interpreters were not used consistently when needed; family members were also used as interpreters. These results were supported by international investigations.
The aerospace energy systems laboratory: Hardware and software implementation
NASA Technical Reports Server (NTRS)
Glover, Richard D.; Oneil-Rood, Nora
1989-01-01
For many years, NASA Ames Research Center's Dryden Flight Research Facility has employed automation in the servicing of flight-critical aircraft batteries. Recently a major upgrade to Dryden's computerized Battery Systems Laboratory was initiated to incorporate distributed processing and a centralized database. The new facility, called the Aerospace Energy Systems Laboratory (AESL), is being mechanized with iAPX86 and iAPX286 hardware running iRMX86. The hardware configuration and software structure for the AESL are described.
Database resources of the National Center for Biotechnology Information
2015-01-01
The National Center for Biotechnology Information (NCBI) provides a large suite of online resources for biological information and data, including the GenBank® nucleic acid sequence database and the PubMed database of citations and abstracts for published life science journals. Additional NCBI resources focus on literature (Bookshelf, PubMed Central (PMC) and PubReader); medical genetics (ClinVar, dbMHC, the Genetic Testing Registry, HIV-1/Human Protein Interaction Database and MedGen); genes and genomics (BioProject, BioSample, dbSNP, dbVar, Epigenomics, Gene, Gene Expression Omnibus (GEO), Genome, HomoloGene, the Map Viewer, Nucleotide, PopSet, Probe, RefSeq, Sequence Read Archive, the Taxonomy Browser, Trace Archive and UniGene); and proteins and chemicals (Biosystems, COBALT, the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), the Molecular Modeling Database (MMDB), Protein Clusters, Protein and the PubChem suite of small molecule databases). The Entrez system provides search and retrieval operations for many of these databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at http://www.ncbi.nlm.nih.gov. PMID:25398906
Database resources of the National Center for Biotechnology Information
2016-01-01
The National Center for Biotechnology Information (NCBI) provides a large suite of online resources for biological information and data, including the GenBank® nucleic acid sequence database and the PubMed database of citations and abstracts for published life science journals. Additional NCBI resources focus on literature (PubMed Central (PMC), Bookshelf and PubReader), health (ClinVar, dbGaP, dbMHC, the Genetic Testing Registry, HIV-1/Human Protein Interaction Database and MedGen), genomes (BioProject, Assembly, Genome, BioSample, dbSNP, dbVar, Epigenomics, the Map Viewer, Nucleotide, Probe, RefSeq, Sequence Read Archive, the Taxonomy Browser and the Trace Archive), genes (Gene, Gene Expression Omnibus (GEO), HomoloGene, PopSet and UniGene), proteins (Protein, the Conserved Domain Database (CDD), COBALT, Conserved Domain Architecture Retrieval Tool (CDART), the Molecular Modeling Database (MMDB) and Protein Clusters) and chemicals (Biosystems and the PubChem suite of small molecule databases). The Entrez system provides search and retrieval operations for most of these databases. Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized datasets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. PMID:26615191
Energy Supply Options for Modernizing Army Heating Systems
1999-01-01
Army Regulation (AR) 420-49, Heating, Energy Selection and Fuel Storage, Distribution, and Dispensing Systems and Technical Manual (TM) 5-650...analysis. HEATMAP uses the AutoLISP program in AutoCAD to take the graphical input to populate a Microsoft® Access database in...of 1992, Subtitle F, Federal Agency Energy Management. Technical Manual (TM) 5-650, Repairs and Utilities: Central Boiler Plants (HQDA, 13 October
Portable Map-Reduce Utility for MIT SuperCloud Environment
2015-09-17
Reuther, A. Rosa, C. Yee, “Driving Big Data With Big Compute,” IEEE HPEC, Sep 10-12, 2012, Waltham, MA. [6] Apache Hadoop 1.2.1 Documentation: HDFS... big data architecture, which is designed to address these challenges, is made of the computing resources, scheduler, central storage file system...databases, analytics software and web interfaces [1]. These components are common to many big data and supercomputing systems. The platform is
Kappel, William M.; Sinclair, Gaylen J.; Reddy, James E.; Eckhardt, David A.; deVries, M. Peter; Phillips, Margaret E.
2012-01-01
U.S. Geological Survey (USGS) Data Rescue Program funds were used to recover data from paper records for 139 streamgages across central and western New York State; 6,133 different streamflow measurement forms, collected between 1970 and 1980, contained field water-quality measurements. The water-quality data were entered, reviewed, and uploaded into the USGS National Water Information System. In total, 4,285 unique site visits were added to the database. The new values represent baseline water quality from which to measure change and will enable comparison of water-quality change over the last 40 years and into the future. Specific conductance was one of the measured properties and represents a simple way to determine whether ambient inorganic water quality has been altered by anthropogenic (road salt runoff, wastewater discharges, or natural gas development) or natural sources. The objective of this report is to describe ambient specific conductance characteristics of surface water across the central and western part of New York. This report presents median specific conductance of stream discharge for the period 1970-80 and a description of the relation between specific conductance and concentrations of total dissolved solids (TDS) retrieved from the USGS National Water Information System (NWIS) database from 1955 to the present. The data descriptions provide a baseline of surface-water specific conductance data that can be used for comparison to current and future measurements in New York streams.
Huang, Taoying; Shenoy, Pareen J.; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W.; Flowers, Christopher R.
2009-01-01
Lymphomas are the fifth most common cancer in United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by National Cancer Institute’s Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data. PMID:19492074
Spatial configuration and distribution of forest patches in Champaign County, Illinois: 1940 to 1993
J. Danilo Chinea
1997-01-01
Spatial configuration and distribution of landscape elements have implications for the dynamics of forest ecosystems, and, therefore, for the management of these resources. The forest cover of Champaign County, in east-central Illinois, was mapped from 1940 and 1993 aerial photography and entered in a geographical information system database. In 1940, 208 forest...
VIEWDATA--Interactive Television, with Particular Emphasis on the British Post Office's PRESTEL.
ERIC Educational Resources Information Center
Rimmer, Tony
An overview of "Viewdata," an interactive medium that connects the home or business television set with a central computer database through telephone lines, is presented in this paper. It notes how Viewdata differs from broadcast Teletext systems and reviews the technical aspects of the two media to clarify terminology used in the…
Trial of real-time locating and messaging system with Bluetooth low energy.
Arisaka, Naoya; Mamorita, Noritaka; Isonaka, Risa; Kawakami, Tadashi; Takeuchi, Akihiro
2016-09-14
Hospital real-time location systems (RTLS) are increasing efficiency and reducing operational costs, but room access tags are necessary. We developed three iPhone 5 applications for an RTLS and communications using Bluetooth low energy (BLE). The applications were: Peripheral device tags, Central beacons, and a Monitor. A Peripheral communicated with a Central using BLE. The Central communicated with a Monitor using sockets on TCP/IP (Transmission Control Protocol/Internet Protocol) via a WLAN (wireless local area network). To determine a BLE threshold level for the received signal strength indicator (RSSI), relationships between signal strength and distance were measured in our laboratory and on the terrace. The BLE RSSI threshold was set at -70 dB, corresponding to about 10 m. While an individual with a Peripheral moved around in a concrete building, the Peripheral was captured within a few 10-second intervals at about 10 m from a Central. The Central and Monitor showed and saved the approach events, location, and the Peripheral's nickname sequentially in real time. Remote Centrals can also communicate interactively with Peripherals through intermediary Monitors that find the nickname in the event database. Trial applications using BLE on iPhones worked well for patient tracking and messaging in indoor environments.
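The -70 dB ≈ 10 m rule above follows from the standard log-distance path-loss model. The short Python sketch below shows that relationship under assumed calibration values; the RSSI at 1 m and the path-loss exponent would be measured per environment, as the authors did in their laboratory and terrace tests.

def estimate_distance(rssi_db, rssi_at_1m=-50.0, path_loss_exponent=2.0):
    """Estimate distance in metres from a received signal strength (log-distance model)."""
    return 10 ** ((rssi_at_1m - rssi_db) / (10 * path_loss_exponent))

def in_range(rssi_db, threshold_db=-70.0):
    """The paper's simple proximity rule: signal at least as strong as the threshold."""
    return rssi_db >= threshold_db

print(estimate_distance(-70.0))           # -> 10.0 m with these assumed parameters
print(in_range(-65.0), in_range(-80.0))   # -> True False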
Karadimas, H.; Hemery, F.; Roland, P.; Lepage, E.
2000-01-01
In medical software development, the use of databases plays a central role. However, most databases have heterogeneous encoding and data models. Dealing with these variations directly in the application code is error-prone and reduces the potential reuse of the produced software. Several approaches to overcome these limitations have been proposed in the medical database literature and are presented here. We present a simple solution based on a Java library and a central metadata description file in XML. This development approach presents several benefits in software design and development cycles, the main one being simplicity of maintenance. PMID:11079915
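A minimal sketch of the central-metadata idea follows, in Python rather than the authors' Java: the mapping from logical names to each source's actual tables and columns lives in one XML metadata file, not in the application code. The XML layout, source names, and fields are illustrative assumptions, not the authors' schema.

import xml.etree.ElementTree as ET

METADATA = """
<metadata>
  <source name="lab_db">
    <field logical="patient_id" table="PAT" column="ID"/>
    <field logical="glucose" table="RESULTS" column="GLU_MGDL"/>
  </source>
</metadata>
"""

def build_select(source_name, logical_fields, root):
    # Resolve logical field names through the metadata, never in application code.
    src = root.find(f"source[@name='{source_name}']")
    cols, tables = [], set()
    for name in logical_fields:
        f = src.find(f"field[@logical='{name}']")
        cols.append(f"{f.get('table')}.{f.get('column')}")
        tables.add(f.get("table"))
    return f"SELECT {', '.join(cols)} FROM {', '.join(sorted(tables))}"

root = ET.fromstring(METADATA)
print(build_select("lab_db", ["patient_id", "glucose"], root))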
An integrated hospital information system in Geneva.
Scherrer, J R; Baud, R H; Hochstrasser, D; Ratib, O
1990-01-01
Since the initial design phase from 1971 to 1973, the DIOGENE hospital information system at the University Hospital of Geneva has been treated as a whole and has retained its architectural unity, despite the need for modification and extension over the years. In addition to having a centralized patient database with the mechanisms for data protection and recovery of a transaction-oriented system, the DIOGENE system has a centralized pool of operators who provide support and training to the users; a separate network of remote printers that provides a telex service between the hospital buildings, offices, medical departments, and wards; and a three-component structure that avoids barriers between administrative and medical applications. In 1973, after a 2-year design period, the project was approved and funded. The DIOGENE system has led to more efficient sharing of costly resources, more rapid performance of administrative tasks, and more comprehensive collection of information about the institution and its patients.
Lessons Learned from Deploying an Analytical Task Management Database
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.; Welch, Clara; Arceneaux, Joshua; Bulgatz, Dennis; Hunt, Mitch; Young, Stephen
2007-01-01
Defining requirements, missions, technologies, and concepts for space exploration involves multiple levels of organizations, teams of people with complementary skills, and analytical models and simulations. Analytical activities range from filling a To-Be-Determined (TBD) in a requirement to creating animations and simulations of exploration missions. In a program as large as returning to the Moon, there are hundreds of simultaneous analysis activities. A way to manage and integrate efforts of this magnitude is to deploy a centralized database that provides the capability to define tasks, identify resources, describe products, schedule deliveries, and generate a variety of reports. This paper describes a web-accessible task management system and explains the lessons learned during the development and deployment of the database. Through the database, managers and team leaders can define tasks, establish review schedules, assign teams, link tasks to specific requirements, identify products, and link the task data records to external repositories that contain the products. Data filters and spreadsheet export utilities provide a powerful capability to create custom reports. Import utilities provide a means to populate the database from previously filled form files. Within a four-month period, a small team analyzed requirements, developed a prototype, conducted multiple system demonstrations, and deployed a working system supporting hundreds of users across the aerospace community. Open-source technologies and agile software development techniques, applied by a skilled team, enabled this impressive achievement. Topics in the paper cover the web application technologies, agile software development, an overview of the system's functions and features, dealing with increasing scope, and deploying new versions of the system.
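A minimal sketch of the kind of task record and filtered report such a database provides (define tasks, link them to requirements, schedule deliveries, export custom reports) is shown below; the table and column names are illustrative assumptions, not the system's actual schema.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE task (
    id INTEGER PRIMARY KEY,
    title TEXT, team TEXT, requirement_id TEXT,
    product_uri TEXT, due_date TEXT
);
INSERT INTO task VALUES
    (1, 'Fill TBD in thermal requirement', 'Thermal', 'REQ-104', 'repo://doc/17', '2007-03-01'),
    (2, 'Lunar lander ascent animation',   'Sim',     'REQ-220', 'repo://vid/3',  '2007-04-15');
""")

# A filtered report, like the data filters and spreadsheet exports described.
for row in conn.execute("SELECT title, due_date FROM task WHERE team = ?", ("Thermal",)):
    print(row)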
Online Bibliographic Databases in South Central Pennsylvania: Current Status and Training Needs.
ERIC Educational Resources Information Center
Townley, Charles
A survey of libraries in south central Pennsylvania was designed to identify those that are using or planning to use databases and assess their perceived training needs. This report describes the methodology and analyzes the responses received from the 57 libraries that completed the questionnaire. Data presented in eight tables are concerned with…
Román Colón, Yomayra A.; Ruppert, Leslie F.
2015-01-01
The U.S. Geological Survey (USGS) has compiled a database consisting of three worksheets of central Appalachian basin natural gas analyses and isotopic compositions from published and unpublished sources of 1,282 gas samples from Kentucky, Maryland, New York, Ohio, Pennsylvania, Tennessee, Virginia, and West Virginia. The database includes field and reservoir names, well and State identification number, selected geologic reservoir properties, and the composition of natural gases (methane; ethane; propane; butane; iso-butane [i-butane]; normal butane [n-butane]; iso-pentane [i-pentane]; normal pentane [n-pentane]; cyclohexane; and hexanes). In the first worksheet, location and American Petroleum Institute (API) numbers from public or published sources are provided for 1,231 of the 1,282 gas samples. A second worksheet of 186 gas samples was compiled from published sources and augmented with public location information and contains carbon, hydrogen, and nitrogen isotopic measurements of natural gas. The third worksheet is a key for all abbreviations in the database. The database can be used to better constrain the stratigraphic distribution, composition, and origin of natural gas in the central Appalachian basin.
Mallik, Saurav; Maulik, Ujjwal
2015-10-01
Gene ranking is an important problem in bioinformatics. Here, we propose a new framework for ranking biomolecules (viz., miRNAs, transcription factors/TFs, and genes) in a multi-informative uterine leiomyoma dataset having both gene expression and methylation data, using a (statistical) eigenvector centrality-based approach. At first, genes that are both differentially expressed and methylated are identified using the Limma statistical test. A network comprising these genes, corresponding TFs from the TRANSFAC and ITFP databases, and targeting miRNAs from the miRWalk database is then built. The biomolecules are then ranked based on eigenvector centrality. Our proposed method provides better average accuracy in hub gene and non-hub gene classification than other methods. Furthermore, pre-ranked Gene Set Enrichment Analysis is applied to the pathway database as well as the GO-term databases of the Molecular Signatures Database, providing a pre-ranked gene list based on different centrality values to compare the ranking methods. Finally, top novel potential gene markers for uterine leiomyoma are provided. Copyright © 2015 Elsevier Inc. All rights reserved.
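The core ranking step can be sketched in a few lines of Python with networkx: score the nodes of a gene/TF/miRNA interaction network by eigenvector centrality and sort. The toy edges below are invented for illustration; the paper builds its network from Limma-selected genes plus TRANSFAC/ITFP and miRWalk interactions.

import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("TF1", "geneA"), ("TF1", "geneB"),
    ("miR-21", "geneA"), ("geneA", "geneB"), ("geneB", "geneC"),
])

# Eigenvector centrality scores each node by the importance of its neighbours.
centrality = nx.eigenvector_centrality(G, max_iter=1000)
ranking = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)
for node, score in ranking:
    print(f"{node}\t{score:.3f}")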
Laptop Computer - Based Facial Recognition System Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. A. Cain; G. B. Singleton
2001-03-01
The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey, we selected Visionics' FaceIt® software package for evaluation and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000). This test was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were currently available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses; it was the most appropriate package based on the requirements of this specific application. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching for facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discretely. For this application, an operational facial recognition system would consist of one central computer hosting the master image database, with multiple standalone systems configured with duplicates of the master operating in remote locations. Remote users could perform real-time searches where network connectivity is not available. As images are enrolled at the remote locations, periodic database synchronization is necessary.
Volcanic observation data and simulation database at NIED, Japan (Invited)
NASA Astrophysics Data System (ADS)
Fujita, E.; Ueda, H.; Kozono, T.
2009-12-01
NIED (Nat’l Res. Inst. for Earth Sci. & Disast. Prev.) has a project to develop two volcanic database systems: (1) volcanic observation database; (2) volcanic simulation database. The volcanic observation database is the data archive center obtained by the geophysical observation networks at Mt. Fuji, Miyake, Izu-Oshima, Iwo-jima and Nasu volcanoes, central Japan. The data consist of seismic (both high-sensitivity and broadband), ground deformation (tiltmeter, GPS) and those from other sensors (e.g., rain gauge, gravimeter, magnetometer, pressure gauge.) These data is originally stored in “WIN format,” the Japanese standard format, which is also at the Hi-net (High sensitivity seismic network Japan, http://www.hinet.bosai.go.jp/). NIED joins to WOVOdat and we have prepared to upload our data, via XML format. Our concept of the XML format is 1)a common format for intermediate files to upload into the WOVOdat DB, 2) for data files downloaded from the WOVOdat DB, 3) for data exchanges between observatories without the WOVOdat DB, 4) for common data files in each observatory, 5) for data communications between systems and softwares and 6)a for softwares. NIED is now preparing for (2) the volcanic simulation database. The objective of this project is to support to develop a “real-time” hazard map, i.e., the system which is effective to evaluate volcanic hazard in case of emergency, including the up-to-date conditions. Our system will include lava flow simulation (LavaSIM) and pyroclastic flow simulation (grvcrt). The database will keep many cases of assumed simulations and we can pick up the most probable case as the first evaluation in case the eruption started. The final goals of the both database will realize the volcanic eruption prediction and forecasting in real time by the combination of monitoring data and numerical simulations.
A study of the Immune Epitope Database for some fungi species using network topological indices.
Vázquez-Prieto, Severo; Paniagua, Esperanza; Solana, Hugo; Ubeira, Florencio M; González-Díaz, Humberto
2017-08-01
In recent years, the encoding of system structure information with different network topological indices has been a very active field of research. In the present study, we assembled for the first time a complex network using data obtained from the Immune Epitope Database for fungal species, and we then considered the general topology, the node degree distribution, and the local structure of this network. We also calculated eight node centrality measures for the observed network and compared it with three theoretical models. In view of the results obtained, we may expect that the present approach can become a valuable tool to explore the complexity of this database, as well as for the storage, manipulation, comparison, and retrieval of information contained therein.
Asynchronous Data Retrieval from an Object-Oriented Database
NASA Astrophysics Data System (ADS)
Gilbert, Jonathan P.; Bic, Lubomir
We present an object-oriented semantic database model which, similar to other object-oriented systems, combines the virtues of four concepts: the functional data model, a property inheritance hierarchy, abstract data types and message-driven computation. The main emphasis is on the last of these four concepts. We describe generic procedures that permit queries to be processed in a purely message-driven manner. A database is represented as a network of nodes and directed arcs, in which each node is a logical processing element, capable of communicating with other nodes by exchanging messages. This eliminates the need for shared memory and for centralized control during query processing. Hence, the model is suitable for implementation on a multiprocessor computer architecture, consisting of large numbers of loosely coupled processing elements.
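A toy sketch may make the message-driven idea concrete: each node below is a logical processing element that checks a query predicate and forwards the message along its outgoing arcs, with no shared memory or central controller. The node contents are invented for illustration, and cycle handling is omitted for brevity.

class Node:
    """A logical processing element holding a value and outgoing arcs."""
    def __init__(self, name, value=None):
        self.name, self.value, self.arcs = name, value, []

    def receive(self, message, results):
        # Each node decides locally whether it satisfies the query...
        if message["predicate"](self):
            results.append(self.name)
        # ...then propagates the message along its arcs; no central control.
        for neighbour in self.arcs:
            neighbour.receive(message, results)

employees = Node("employees")
alice, bob = Node("alice", 52000), Node("bob", 38000)
employees.arcs = [alice, bob]

results = []
employees.receive({"predicate": lambda n: n.value is not None and n.value > 40000}, results)
print(results)  # -> ['alice']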
Aanensen, David M; Huntley, Derek M; Feil, Edward J; al-Own, Fada'a; Spratt, Brian G
2009-09-16
Epidemiologists and ecologists often collect data in the field and, on returning to their laboratory, enter their data into a database for further analysis. The recent introduction of mobile phones that utilise the open source Android operating system, and which include (among other features) both GPS and Google Maps, provides new opportunities for developing mobile phone applications which, in conjunction with web applications, allow two-way communication between field workers and their project databases. Here we describe a generic framework, consisting of mobile phone software, EpiCollect, and a web application located within www.spatialepidemiology.net. Data collected by multiple field workers can be submitted by phone, together with GPS data, to a common web database and can be displayed and analysed, along with previously collected data, using Google Maps (or Google Earth). Similarly, data from the web database can be requested and displayed on the mobile phone, again using Google Maps. Data filtering options allow the display of data submitted by individual field workers or, for example, those data within certain values of a measured variable or a time period. Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give a field worker similar display and analysis tools on their mobile phone to those they would have if viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application in ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting 'citizen scientists' to contribute data easily to central databases through their mobile phone.
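The two-way pattern the framework uses can be sketched briefly: a field record with GPS coordinates is submitted to the central web database over HTTP, and previously collected points are fetched back for map display. The endpoint URL and field names below are illustrative assumptions, not the EpiCollect API (Python, using the requests library).

import requests

record = {
    "project": "vector_survey",
    "lat": -1.286389, "lon": 36.817223,   # GPS fix from the phone
    "collected_at": "2009-09-16T10:45:00Z",
    "fields": {"species": "Anopheles gambiae", "trap_count": 12},
}

# Submit the field record, together with its GPS data, to the common web database.
resp = requests.post("https://example.org/api/records", json=record, timeout=10)
resp.raise_for_status()

# Request previously collected points back, e.g. filtered by project, for display.
points = requests.get("https://example.org/api/records",
                      params={"project": "vector_survey"}, timeout=10).json()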
Pardo-Hernandez, Hector; Urrútia, Gerard; Barajas-Nava, Leticia A; Buitrago-Garcia, Diana; Garzón, Julieth Vanessa; Martínez-Zapata, María José; Bonfill, Xavier
2017-06-13
Systematic reviews provide the best evidence on the effect of health care interventions. They rely on comprehensive access to the available scientific literature. Electronic search strategies alone may not suffice, requiring the implementation of a handsearching approach. We have developed a database to provide an Internet-based platform from which handsearching activities can be coordinated, including a procedure to streamline the submission of these references into CENTRAL, the Cochrane Collaboration Central Register of Controlled Trials. We developed a database and performed a descriptive analysis. Through brainstorming and discussion among stakeholders involved in handsearching projects, we designed a database that met the needs identified as necessary to ensure the viability of handsearching activities. Three handsearching teams pilot-tested the proposed database. Once the final version of the database was approved, we proceeded to train the staff involved in handsearching. The proposed database is called BADERI (Database of Iberoamerican Clinical Trials and Journals, by its initials in Spanish). BADERI was officially launched in October 2015, and it can be accessed at www.baderi.com/login.php free of cost. BADERI has an administration subsection, from which the roles of users are managed; a references subsection, where information associated with identified controlled clinical trials (CCTs) can be entered; a reports subsection, from which reports can be generated to track and analyse the results of handsearching activities; and a built-in free-text search engine. BADERI allows all references to be exported in ProCite files that can be directly uploaded into CENTRAL. To date, 6284 references to CCTs have been uploaded to BADERI and sent to CENTRAL. The identified CCTs were published in a total of 420 journals related to 46 medical specialties. The year of publication ranged between 1957 and 2016. BADERI allows the efficient management of handsearching activities across different countries and institutions. References to all CCTs available in BADERI can be readily submitted to CENTRAL for their potential inclusion in systematic reviews.
Systematic review for geo-authentic Lonicerae Japonicae Flos.
Yang, Xingyue; Liu, Yali; Hou, Aijuan; Yang, Yang; Tian, Xin; He, Liyun
2017-06-01
In traditional Chinese medicine, Lonicerae Japonicae Flos is commonly used as anti-inflammatory, antiviral, and antipyretic herbal medicine, and geo-authentic herbs are believed to present the highest quality among all samples from different regions. To discuss the current situation and trend of geo-authentic Lonicerae Japonicae Flos, we searched Chinese Biomedicine Literature Database, Chinese Journal Full-text Database, Chinese Scientific Journal Full-text Database, Cochrane Central Register of Controlled Trials, Wanfang, and PubMed. We investigated all studies up to November 2015 pertaining to quality assessment, discrimination, pharmacological effects, planting or processing, or ecological system of geo-authentic Lonicerae Japonicae Flos. Sixty-five studies mainly discussing about chemical fingerprint, component analysis, planting and processing, discrimination between varieties, ecological system, pharmacological effects, and safety were systematically reviewed. By analyzing these studies, we found that the key points of geo-authentic Lonicerae Japonicae Flos research were quality and application. Further studies should focus on improving the quality by selecting the more superior of all varieties and evaluating clinical effectiveness.
Using artificial intelligence to automate remittance processing.
Adams, W T; Snow, G M; Helmick, P M
1998-06-01
The consolidated business office of the Allegheny Health Education Research Foundation (AHERF), a large integrated healthcare system based in Pittsburgh, Pennsylvania, sought to improve its cash-related business office activities by implementing an automated remittance processing system that uses artificial intelligence. The goal was to create a completely automated system whereby all monies processed would be tracked, automatically posted, analyzed, monitored, controlled, and reconciled through a central database. Implemented using a phased approach, the automated payment system has become the central repository for all remittances for seven of the hospitals in the AHERF system and has allowed for the complete integration of these hospitals' existing billing systems, document imaging system, and intranet, as well as the new automated payment posting and electronic cash tracking and reconciling systems. For such new technology, which is designed to bring about major change, the factors contributing to the project's success were adequate planning, clearly articulated objectives, marketing, end-user acceptance, and post-implementation plan revision.
NASA Astrophysics Data System (ADS)
Tumber-Davila, S. J.; Schenk, H. J.; Jackson, R. B.
2017-12-01
This synthesis examines plant rooting distributions globally, by doubling the number of entries in the Root Systems of Individual Plants database (RSIP) created by Schenk and Jackson. Root systems influence many processes, including water and nutrient uptake and soil carbon storage. Root systems also mediate vegetation responses to changing climatic and environmental conditions. Therefore, a collective understanding of the importance of rooting systems to carbon sequestration, soil characteristics, hydrology, and climate, is needed. Current global models are limited by a poor understanding of the mechanisms affecting rooting, carbon stocks, and belowground biomass. This improved database contains an extensive bank of records describing the rooting system of individual plants, as well as detailed information on the climate and environment from which the observations are made. The expanded RSIP database will: 1) increase our understanding of rooting depths, lateral root spreads and above and belowground allometry; 2) improve the representation of plant rooting systems in Earth System Models; 3) enable studies of how climate change will alter and interact with plant species and functional groups in the future. We further focus on how plant rooting behavior responds to variations in climate and the environment, and create a model that can predict rooting behavior given a set of environmental conditions. Preliminary results suggest that high potential evapotranspiration and seasonality of precipitation are indicative of deeper rooting after accounting for plant growth form. When mapping predicted deep rooting by climate, we predict deepest rooting to occur in equatorial South America, Africa, and central India.
NASA Astrophysics Data System (ADS)
Cavallari, Francesca; de Gruttola, Michele; Di Guida, Salvatore; Govi, Giacomo; Innocente, Vincenzo; Pfeiffer, Andreas; Pierro, Antonio
2011-12-01
Automatic, synchronous and reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. In this complex infrastructure, monitoring and fast detection of errors is a very challenging task. In this paper, we describe the CMS experiment system to process and populate the Condition Databases and make condition data promptly available both online for the high-level trigger and offline for reconstruction. The data are automatically collected using centralized jobs or are "dropped" by the users in dedicated services (offline and online drop-box), which synchronize them and take care of writing them into the online database. Then they are automatically streamed to the offline database, and thus are immediately accessible offline worldwide. The condition data are managed by different users using a wide range of applications. In normal operation the database monitor is used to provide simple timing information and the history of all transactions for all database accounts, and in the case of faults it is used to return simple error messages and more complete debugging information.
Geospatial database for regional environmental assessment of central Colorado.
Church, Stan E.; San Juan, Carma A.; Fey, David L.; Schmidt, Travis S.; Klein, Terry L.; DeWitt, Ed H.; Wanty, Richard B.; Verplanck, Philip L.; Mitchell, Katharine A.; Adams, Monique G.; Choate, LaDonna M.; Todorov, Todor I.; Rockwell, Barnaby W.; McEachron, Luke; Anthony, Michael W.
2012-01-01
In conjunction with the future planning needs of the U.S. Department of Agriculture, Forest Service, the U.S. Geological Survey conducted a detailed environmental assessment of the effects of historical mining on Forest Service lands in central Colorado. Stream sediment, macroinvertebrate, and various filtered and unfiltered water quality samples were collected during low-flow conditions over a four-year period, 2004–2007. This report summarizes the sampling strategy, data collection, and analyses performed on these samples. The data are presented in Geographic Information System, Microsoft Excel, and comma-delimited formats. Reports on data interpretation are being prepared separately.
Knowledge-Based Vision Techniques for the Autonomous Land Vehicle Program
1991-10-01
Knowledge System. The CKS is an object-oriented knowledge database that was originally designed to serve as the central information manager for a... "Representation Space: An Approach to the Integration of Visual Information," Proc. of DARPA Image Understanding Workshop, Palo Alto, CA, pp. 263-272, May 1989. Strat, "Information Management in a Sensor-Based Autonomous System," Proc. DARPA Image Understanding Workshop, University of Southern CA, Vol. 1, pp...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sondreal, E.A.; Mann, M.D.; Weber, G.W.
1995-12-01
On November 1-5, 1994, the Energy & Environmental Research Center (EERC) and the Power Research Institute of Prague cosponsored their second conference since 1991 in the Czech Republic, entitled "Energy and Environment: Transitions in East Central Europe." This conference was a continuation of the EERC's joint commitment, initiated in 1990, to facilitate solutions to short- and long-term energy and environmental problems in East Central Europe. Production of energy from coal in an environmentally acceptable manner is a critical issue facing East Central Europe, because the region continues to rely on coal as its primary energy source. The goal of the conference was to develop partnerships between industry, government, and the research community in East Central Europe and the United States to solve energy and environmental issues in a manner that fosters economic development. Among the topics addressed at the conference were: conventional and advanced energy generation systems; economic operation of energy systems; air pollution controls; power system retrofitting and repowering; financing options; regulatory issues; energy resource options; waste utilization and disposal; and long-range environmental issues. Selected papers in the proceedings have been processed separately for inclusion in the Energy Science and Technology database.
The Computerized Laboratory Notebook concept for genetic toxicology experimentation and testing.
Strauss, G H; Stanford, W L; Berkowitz, S J
1989-03-01
We describe a microcomputer system utilizing the Computerized Laboratory Notebook (CLN) concept developed in our laboratory for the purpose of automating the Battery of Leukocyte Tests (BLT). The BLT was designed to evaluate blood specimens for toxic, immunotoxic, and genotoxic effects after in vivo exposure to putative mutagens. A system was developed with the advantages of low cost, limited spatial requirements, ease of use for personnel inexperienced with computers, and applicability to specific testing yet flexibility for experimentation. This system eliminates cumbersome record keeping and repetitive analysis inherent in genetic toxicology bioassays. Statistical analysis of the vast quantity of data produced by the BLT would not be feasible without a central database. Our central database is maintained by an integrated package which we have adapted to develop the CLN. The clonal assay of lymphocyte mutagenesis (CALM) section of the CLN is demonstrated. PC-Slaves expand the microcomputer to multiple workstations so that our computerized notebook can be used next to a hood while other work is done in an office and instrument room simultaneously. Communication with peripheral instruments is an indispensable part of many laboratory operations, and we present a representative program, written to acquire and analyze CALM data, for communicating with both a liquid scintillation counter and an ELISA plate reader. In conclusion we discuss how our computer system could easily be adapted to the needs of other laboratories.
32 CFR 105.15 - Defense Sexual Assault Incident Database (DSAID).
Code of Federal Regulations, 2013 CFR
2013-07-01
§ 105.15 Defense Sexual Assault Incident Database (DSAID). (a) Purpose. (1) In accordance with section 563 of Public Law... activities. It shall serve as a centralized, case-level database for the collection and maintenance of...
40 CFR 1400.13 - Read-only database.
Code of Federal Regulations, 2012 CFR
2012-07-01
§ 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...
A spatial database for landslides in northern Bavaria: A methodological approach
NASA Astrophysics Data System (ADS)
Jäger, Daniel; Kreuzer, Thomas; Wilde, Martina; Bemm, Stefan; Terhorst, Birgit
2018-04-01
Landslide databases provide essential information for hazard modeling, damage to buildings and infrastructure, mitigation, and research needs. This study presents the development of a landslide database system named WISL (Würzburg Information System on Landslides), currently storing detailed landslide data for northern Bavaria, Germany, in order to enable scientific queries as well as comparisons with other regional landslide inventories. WISL is based on free open-source software (PostgreSQL, PostGIS), ensuring good interoperability among its components and enabling further extensions with specific adaptions of self-developed software. Beyond that, WISL was designed for easy communication with other databases. As a central prerequisite for standardized, homogeneous data acquisition in the field, a customized data sheet for landslide description was compiled. This sheet also serves as an input mask for all data registration procedures in WISL. A variety of "in-database" solutions for landslide analysis provides the necessary scalability for the database, enabling operations at the local server. In its current state, WISL already enables extensive analysis and queries. This paper presents an example analysis of landslides in Oxfordian Limestones in the northeastern Franconian Alb, northern Bavaria. The results reveal widely differing landslides in terms of geometry and size. Further queries related to landslide activity classify the majority of the landslides as currently inactive; however, many clearly possess a certain potential for remobilization. Along with some active mass movements, a significant percentage of landslides potentially endangers residential areas or infrastructure. Future enhancements of the WISL database will focus on data extensions to increase research possibilities, as well as on transferring the system to other regions and countries.
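The "in-database" analysis the abstract mentions can be pictured as SQL pushed down to PostGIS rather than GIS work done on the client. The sketch below assumes a hypothetical schema (tables landslides and buildings with geom, lithology, area_m2, activity columns) and a local PostgreSQL service; it is illustrative, not WISL's actual schema.

```python
# Hedged sketch of an "in-database" spatial query with PostGIS via psycopg2.
# Connection parameters and all table/column names are assumptions.
import psycopg2

conn = psycopg2.connect("dbname=wisl user=wisl")   # requires a live PostGIS DB
with conn.cursor() as cur:
    # Let the database do the spatial work: landslides in Oxfordian
    # Limestones within 500 m of any building footprint.
    cur.execute("""
        SELECT l.id, l.area_m2, l.activity
        FROM landslides AS l
        JOIN buildings AS b
          ON ST_DWithin(l.geom, b.geom, 500)
        WHERE l.lithology = %s
        ORDER BY l.area_m2 DESC;
    """, ("Oxfordian Limestone",))
    for slide_id, area, activity in cur.fetchall():
        print(slide_id, area, activity)
```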
Establishment and maintenance of a standardized glioma tissue bank: Huashan experience.
Aibaidula, Abudumijiti; Lu, Jun-feng; Wu, Jin-song; Zou, He-jian; Chen, Hong; Wang, Yu-qian; Qin, Zhi-yong; Yao, Yu; Gong, Ye; Che, Xiao-ming; Zhong, Ping; Li, Shi-qi; Bao, Wei-min; Mao, Ying; Zhou, Liang-fu
2015-06-01
Cerebral glioma is the most common brain tumor and one of the top ten malignant tumors in humans. In spite of great progress in chemotherapy, radiotherapy, and surgical strategies during the past decades, mortality and morbidity remain high. One of the major challenges is to explore the pathogenesis and invasion of glioma at various "omics" levels (such as proteomics or genomics) and the clinical implications of biomarkers for diagnosis, prognosis, or treatment of glioma patients. Establishment of a standardized tissue bank with high-quality biospecimens annotated with clinical information is pivotal to the solution of these questions as well as to the drug development process and translational research on glioma. Therefore, based on previous experience of tissue banks, standardized protocols for sample collection and storage were developed. We also developed two systems for glioma patient and sample management: a local database for medical records and a local image database for medical images. For the future set-up of a regional biobank network in Shanghai, we also founded a centralized database for medical records. We thus established a standardized glioma tissue bank with sufficient clinical data and medical images in Huashan Hospital. By September 2013, tissue samples from 1,326 cases had been collected. Histological diagnosis revealed that 73% were astrocytic tumors, 17% were oligodendroglial tumors, 2% were oligoastrocytic tumors, 4% were ependymal tumors and 4% were other central nervous system neoplasms.
2004-09-01
...or emotional abuse. These data are the central focus of a grant from the National Institute on Alcohol Abuse and Alcoholism to study the...
A Directory of Sources of Information and Data Bases on Education and Training.
1980-09-01
ACAD007 National Opinion Research Center (NORC)... ACAD008 U of California Union Catalog Supp. (1963-1967)... Records (RSR)... ARMY030 Union Central Registry System (UCRSYS)... ARMY032 Training Control Card Report... research. Your query directs a computer search of the Comprehensive Dissertation Database. The search produces a list of all titles matching your...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-23
...; Information Collection; Central Contractor Registration AGENCY: Department of Defense (DOD), General Services... requirement concerning the Central Contractor Registration database. Public comments are particularly invited... Information Collection 9000-0159, Central Contractor Registration, by any of the following methods...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-22
...; Information Collection; Central Contractor Registration AGENCIES: Department of Defense (DOD), General... collection requirement concerning the Central Contractor Registration database. A notice was published in the... Information Collection 9000-0159, Central Contractor Registration, by any of the following methods...
Hatfield, Amy J; Bangert, Michael P
2005-01-01
The Indiana University School of Medicine (IUSM) Office of Medical Education & Student Services directed the IUSM Educational Technology Unit to develop a Clinical Encounters Tracking system in response to the Liaison Committee on Medical Education's (LCME) updated accreditation standards. A personal digital assistant (PDA) and centralized database server solution was implemented. Third-year medical students are required to carry a PDA on which they record clinical encounter experiences during all clerkship clinical rotations. Clinical encounters data collected on the PDAs are routinely uploaded to the central server via the PDA HotSyncing process. Real-time clinical encounter summary reports are accessed in the school's online curriculum management system, ANGEL. The resulting IUSM Clinical Encounters Tracking program addresses the LCME accreditation standard that mandates the tracking of medical students' required clinical curriculum experiences.
Faulds, James E.
2013-12-31
Over the course of the entire project, field visits were made to 117 geothermal systems in the Great Basin region. Major field excursions, incorporating visits to large groups of systems, were conducted in western Nevada, central Nevada, northwestern Nevada, northeastern Nevada, east‐central Nevada, eastern California, southern Oregon, and western Utah. For example, field excursions to the following areas included visits to multiple geothermal systems: - Northwestern Nevada: Baltazor Hot Spring, Blue Mountain, Bog Hot Spring, Dyke Hot Springs, Howard Hot Spring, MacFarlane Hot Spring, McGee Mountain, and Pinto Hot Springs in northwest Nevada. - North‐central to northeastern Nevada: Beowawe, Crescent Valley (Hot Springs Point), Dann Ranch (Hand‐me‐Down Hot Springs), Golconda, and Pumpernickel Valley (Tipton Hot Springs) in north‐central to northeast Nevada. - Eastern Nevada: Ash Springs, Chimney Hot Spring, Duckwater, Hiko Hot Spring, Hot Creek Butte, Iverson Spring, Moon River Hot Spring, Moorman Spring, Railroad Valley, and Williams Hot Spring in eastern Nevada. - Southwestern Nevada‐eastern California: Walley’s Hot Spring, Antelope Valley, Fales Hot Springs, Buckeye Hot Springs, Travertine Hot Springs, Teels Marsh, Rhodes Marsh, Columbus Marsh, Alum‐Silver Peak, Fish Lake Valley, Gabbs Valley, Wild Rose, Rawhide‐Wedell Hot Springs, Alkali Hot Springs, and Baileys/Hicks/Burrell Hot Springs. - Southern Oregon: Alvord Hot Spring, Antelope Hot Spring‐Hart Mountain, Borax Lake, Crump Geyser, and Mickey Hot Spring in southern Oregon. - Western Utah: Newcastle, Veyo Hot Spring, Dixie Hot Spring, Thermo, Roosevelt, Cove Fort, Red Hill Hot Spring, Joseph Hot Spring, Hatton Hot Spring, and Abraham‐Baker Hot Springs. Structural controls of 426 geothermal systems were analyzed with literature research, air photos, Google Earth imagery, and/or field reviews (Figures 1 and 2). Of the systems analyzed, we were able to determine the structural settings of more than 240 sites. However, we found that many “systems” consisted of little more than a warm or hot well in the central part of a basin. Such “systems” were difficult to evaluate in terms of structural setting in areas lacking in geophysical data. We developed a database for the structural catalogue in a master spreadsheet. Data components include structural setting, primary fault orientation, presence or absence of Quaternary faulting, reservoir lithology, geothermometry, presence or absence of recent magmatism, and distinguishing blind systems from those that have surface expressions. We reviewed site locations for all 426 geothermal systems, confirming and/or relocating spring and geothermal sites based on imagery, maps, and other information for the master database. Many systems were mislocated in the original database. In addition, some systems that included several separate springs spread over large areas were divided into two or more distinct systems. Further, all hot wells were assigned names based on their location to facilitate subsequent analyses. We catalogued systems into the following eight major groups, based on the dominant pattern of faulting (Figure 1): - Major normal fault segments (i.e., near displacement maxima). - Fault bends. - Fault terminations or tips. - Step‐overs or relay ramps in normal faults. - Fault intersections. - Accommodation zones (i.e., belts of intermeshing oppositely dipping normal faults). - Displacement transfer zones whereby strike‐slip faults terminate in arrays of normal faults. - Transtensional pull‐aparts.
These settings form a hierarchical pattern with respect to fault complexity. - Major normal faults and fault bends are the simplest. - Fault terminations are typically more complex than mid‐segments, as faults commonly break up into multiple strands or horsetail near their ends. - A fault intersection is generally more complex, as it typically contains multiple fault strands and can include discrete di...
Developing New Rainfall Estimates to Identify the Likelihood of Agricultural Drought in Mesoamerica
NASA Astrophysics Data System (ADS)
Pedreros, D. H.; Funk, C. C.; Husak, G. J.; Michaelsen, J.; Peterson, P.; Lasndsfeld, M.; Rowland, J.; Aguilar, L.; Rodriguez, M.
2012-12-01
The population of Central America was estimated at ~40 million people in 2009, with 65% in rural areas directly relying on local agricultural production for subsistence, and additional urban populations relying on regional production. Mapping rainfall patterns and values in Central America is a complex task due to the rough topography and the influence of two oceans on either side of this narrow land mass. Characterization of precipitation amounts in both time and space is of great importance for monitoring agricultural food production for food security analysis. With the goal of developing reliable rainfall fields, the Famine Early Warning Systems Network (FEWS NET) has compiled a dense set of historical rainfall stations for Central America through cooperation with meteorological services and global databases. The station database covers the years 1900 to the present, with the highest density between 1970 and 2011. Interpolating the station data by itself does not provide a reliable result because it ignores the topographic influences that dominate the region. To account for this, climatological rainfall fields were used to support the interpolation of the station data using a modified Inverse Distance Weighting process. By blending the station data with the climatological fields, a historical rainfall database was compiled for 1970-2011 at a 5-km resolution for every five-day interval. This new database opens the door to analyses such as the impact of sea surface temperature on rainfall patterns, changes to the typical dry spell during the rainy season, and characterization of drought frequency and rainfall trends, among others. This study uses the historical database to identify the frequency of agricultural drought in the region and explores possible changes in precipitation patterns during the past 40 years. A threshold of 500 mm of rainfall during the growing season was used to define agricultural drought for maize. This threshold was selected based on assessments of crop conditions from previous seasons and was identified as an amount roughly corresponding to significant crop loss for maize, a major crop in most of the region. Results identify areas in central Honduras and Nicaragua as well as the Altiplano region in Guatemala that experienced 15 seasons of agricultural drought for the period May-July during the years 1970-2000. Preliminary results show no clear trend in rainfall, but further investigation is needed to confirm that agricultural drought is not becoming more frequent in this region.
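The climatology-aided interpolation described here can be sketched in a few lines: express stations as ratios to a climatological field, interpolate the ratios with inverse distance weighting (IDW), and rescale the climatology. This is a minimal sketch of that idea; the actual FEWS NET procedure differs in detail, and all values below are toy numbers.

```python
# Sketch: IDW of station/climatology ratios, used to rescale a climatology.
import numpy as np

def idw(xy_stations, values, xy_grid, power=2.0):
    """Plain inverse-distance-weighted interpolation onto grid points."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-6)                  # avoid division by zero at stations
    w = 1.0 / d**power
    return (w * values).sum(axis=1) / w.sum(axis=1)

def blend_with_climatology(xy_st, rain_st, clim_st, xy_grid, clim_grid):
    ratios = rain_st / np.maximum(clim_st, 1e-6)    # station anomaly ratios
    return clim_grid * idw(xy_st, ratios, xy_grid)  # rescaled climatology

# Toy example: three stations, two grid cells (all numbers invented).
xy_st = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
rain_st = np.array([120.0, 80.0, 100.0])   # observed pentad totals (mm)
clim_st = np.array([100.0, 100.0, 100.0])  # climatology at the stations (mm)
xy_grid = np.array([[0.5, 0.5], [1.0, 1.0]])
clim_grid = np.array([110.0, 90.0])
print(blend_with_climatology(xy_st, rain_st, clim_st, xy_grid, clim_grid))
```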
An architecture for a brain-image database
NASA Technical Reports Server (NTRS)
Herskovits, E. H.
2000-01-01
The widespread availability of methods for noninvasive assessment of brain structure has enabled researchers to investigate neuroimaging correlates of normal aging, cerebrovascular disease, and other processes; we designate such studies as image-based clinical trials (IBCTs). We propose an architecture for a brain-image database, which integrates image processing and statistical operators, and thus supports the implementation and analysis of IBCTs. The implementation of this architecture is described and results from the analysis of image and clinical data from two IBCTs are presented. We expect that systems such as this will play a central role in the management and analysis of complex research data sets.
MISSE in the Materials and Processes Technical Information System (MAPTIS )
NASA Technical Reports Server (NTRS)
Burns, DeWitt; Finckenor, Miria; Henrie, Ben
2013-01-01
Materials International Space Station Experiment (MISSE) data is now being collected and distributed through the Materials and Processes Technical Information System (MAPTIS) at Marshall Space Flight Center in Huntsville, Alabama. MISSE data has been instrumental in many programs and continues to be an important source of data for the space community. To facilitate greater access to the MISSE data, the International Space Station (ISS) program office and MAPTIS are working to gather this data into a central location. The MISSE database contains information about materials, samples, and flights, along with pictures, PDFs, Excel files, Word documents, and other file types. Major capabilities of the system are access control, browsing, searching, reports, and record comparison. The search capabilities search within any searchable file, so data can still be retrieved even if the desired metadata has not been associated. Other functionality will continue to be added to the MISSE database as the Athena Platform is expanded.
High-Performance Secure Database Access Technologies for HEP Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Vranicar; John Weicher
2006-04-17
The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture in which the secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems’ security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory’s (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project’s current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.
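The core idea, authorization enforced inside the database engine rather than in a separate layer, can be demonstrated in miniature with sqlite3's authorizer hook, which lets the engine itself veto or mask individual column reads. This is only a stand-in for the grid-security integration the abstract proposes; the table and policy are invented.

```python
# Sketch: engine-level authorization via sqlite3's authorizer callback.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conditions (run INTEGER, calib TEXT, secret TEXT)")
conn.execute("INSERT INTO conditions VALUES (1, 'v1', 'hidden')")

def authorizer(action, arg1, arg2, dbname, source):
    # Mask engine-level reads of the 'secret' column for this session.
    if action == sqlite3.SQLITE_READ and arg2 == "secret":
        return sqlite3.SQLITE_IGNORE      # the column comes back as NULL
    return sqlite3.SQLITE_OK

conn.set_authorizer(authorizer)
print(conn.execute("SELECT run, calib, secret FROM conditions").fetchall())
# -> [(1, 'v1', None)]: the engine itself masked the protected column
```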
NASA Astrophysics Data System (ADS)
Munkhbaatar, B.; Lee, J.
2015-10-01
The national land information system (NLIS) is an essential part of the Mongolian land reform. NLIS is a web-based, centralized system covering the administration of the cadastral database across the country's land departments. The current, ongoing NLIS implementation is vital to improving the cadastral system in Mongolia. This study is intended to define existing problems in the current Mongolian cadastral system and to propose administrative, institutional, and systematic implementation through NLIS. Once NLIS launches with the proposed model of a comprehensive cadastral system, it will not only support economic and sustainable development but also contribute to citizens' satisfaction, lessen the burden of bureaucracy, help prevent land conflicts (especially in metropolitan areas), and ease the collection of land taxes and fees. Furthermore, after the establishment of NLIS, it is advisable to connect NLIS to other relevant state administrative organizations or institutions that maintain related database systems. Such connections will facilitate a smooth and productive workflow and will offer more reliable and valuable information through systematic integration with NLIS.
Databases as policy instruments. About extending networks as evidence-based policy.
de Bont, Antoinette; Stoevelaar, Herman; Bal, Roland
2007-12-07
This article seeks to identify the role of databases in health policy. Access to information and communication technologies has changed traditional relationships between the state and professionals, creating new systems of surveillance and control. As a result, databases may have a profound effect on controlling clinical practice. We conducted three case studies to reconstruct the development and use of databases as policy instruments. Each database was intended to be employed to control the use of one particular pharmaceutical in the Netherlands (growth hormone, antiretroviral drugs for HIV and Taxol, respectively). We studied the archives of the Dutch Health Insurance Board, conducted in-depth interviews with key informants and organized two focus groups, all focused on the use of databases both in policy circles and in clinical practice. Our results demonstrate that policy makers hardly used the databases, neither for cost control nor for quality assurance. Further analysis revealed that these databases facilitated self-regulation and quality assurance by (national) bodies of professionals, resulting in restrictive prescription behavior amongst physicians. The databases fulfill control functions that were formerly located within the policy realm. The databases facilitate collaboration between policy makers and physicians, since they enable quality assurance by professionals. Delegating regulatory authority downwards into a network of physicians who control the use of pharmaceuticals seems to be a good alternative for centralized control on the basis of monitoring data.
Helsens, Kenny; Colaert, Niklaas; Barsnes, Harald; Muth, Thilo; Flikka, Kristian; Staes, An; Timmerman, Evy; Wortelkamp, Steffi; Sickmann, Albert; Vandekerckhove, Joël; Gevaert, Kris; Martens, Lennart
2010-03-01
MS-based proteomics produces large amounts of mass spectra that require processing, identification and possibly quantification before interpretation can be undertaken. High-throughput studies require automation of these various steps, and management of the data in association with the results obtained. We here present ms_lims (http://genesis.UGent.be/ms_lims), a freely available, open-source system based on a central database to automate data management and processing in MS-driven proteomics analyses.
NASA Technical Reports Server (NTRS)
Burns, Lee; Decker, Ryan
2004-01-01
Lightning strike location and peak current are monitored operationally in the Kennedy Space Center (KSC)/Cape Canaveral Air Force Station (CCAFS) area by the Cloud to Ground Lightning Surveillance System (CGLSS). The present study compiles ten years of CGLSS data into a climatological database of all strikes recorded within a 20-mile radius of space shuttle launch platform LP39A, which serves as a convenient central point. The period of record (POR) for the database runs from January 1, 1993 to December 31, 2002. Histograms and cumulative probability curves are produced to determine the distribution of occurrence rates for the spectrum of strike intensities (given in kA). Further analysis of the database provides a description of both seasonal and interannual variations in the lightning distribution.
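The histogram and cumulative probability products described here are straightforward to reproduce; the sketch below does so on synthetic peak-current values (the lognormal stand-ins are not CGLSS data, and the bin width is an arbitrary choice).

```python
# Sketch: occurrence histogram and empirical cumulative probability curve
# for strike peak currents (kA), computed on synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
peak_ka = rng.lognormal(mean=2.7, sigma=0.7, size=10_000)  # synthetic |kA| values

counts, edges = np.histogram(peak_ka, bins=np.arange(0, 105, 5))
cumprob = np.cumsum(counts) / counts.sum()   # among binned strikes <= 100 kA

for lo, hi, n, p in zip(edges[:-1], edges[1:], counts, cumprob):
    print(f"{lo:5.0f}-{hi:3.0f} kA  n={n:5d}  P(<= {hi:3.0f} kA) = {p:.3f}")
```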
Informatics and data quality at collaborative multicenter Breast and Colon Cancer Family Registries.
McGarvey, Peter B; Ladwa, Sweta; Oberti, Mauricio; Dragomir, Anca Dana; Hedlund, Erin K; Tanenbaum, David Michael; Suzek, Baris E; Madhavan, Subha
2012-06-01
Quality control and harmonization of data is a vital and challenging undertaking for any successful data coordination center and a responsibility shared between the multiple sites that produce, integrate, and utilize the data. Here we describe a coordinated effort between scientists and data managers in the Cancer Family Registries to implement a data governance infrastructure consisting of both organizational and technical solutions. The technical solution uses a rule-based validation system that facilitates error detection and correction for data centers submitting data to a central informatics database. Validation rules comprise both standard checks on allowable values and a crosscheck of related database elements for logical and scientific consistency. Evaluation over a 2-year timeframe showed a significant decrease in the number of errors in the database and a concurrent increase in data consistency and accuracy.
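The two rule types named in the abstract, allowable-value checks on single fields and crosschecks of related fields, are easy to picture in code. The sketch below uses invented field names and rules, not the registries' actual schema.

```python
# Sketch of a rule-based validator: allowable-value checks plus crosschecks.
ALLOWED = {"sex": {"M", "F", "U"}, "vital_status": {"alive", "dead"}}

def crosschecks(rec):
    """Logical/scientific consistency checks across related fields."""
    errors = []
    if rec.get("vital_status") == "alive" and rec.get("age_at_death"):
        errors.append("age_at_death set for living subject")
    if rec.get("age_at_diagnosis", 0) > rec.get("age_at_last_contact", 999):
        errors.append("diagnosis after last contact")
    return errors

def validate(rec):
    errors = [f"{k}={rec[k]!r} not in {sorted(v)}"
              for k, v in ALLOWED.items() if rec.get(k) not in v]
    return errors + crosschecks(rec)

print(validate({"sex": "M", "vital_status": "alive", "age_at_death": 60,
                "age_at_diagnosis": 55, "age_at_last_contact": 70}))
# -> ['age_at_death set for living subject']
```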
Software architecture of the Magdalena Ridge Observatory Interferometer
NASA Astrophysics Data System (ADS)
Farris, Allen; Klinglesmith, Dan; Seamons, John; Torres, Nicolas; Buscher, David; Young, John
2010-07-01
Merging software from 36 independent work packages into a coherent, unified software system with a lifespan of twenty years is the challenge faced by the Magdalena Ridge Observatory Interferometer (MROI). We solve this problem by using standardized interface software automatically generated from simple high-level descriptions of these systems, relying only on Linux, GNU, and POSIX without complex software such as CORBA. This approach, based on gigabit Ethernet with a TCP/IP protocol, provides the flexibility to integrate and manage diverse, independent systems using a centralized supervisory system that provides a database manager, data collectors, fault handling, and an operator interface.
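A toy version of "interfaces generated from high-level descriptions" is a command table that drives a line-oriented TCP/IP handler, so each subsystem is described by data rather than hand-written protocol code. The message format and commands below are invented for illustration; MROI's actual generated interfaces are more elaborate.

```python
# Sketch: a command table ("high-level description") drives a TCP handler.
import socketserver

DESCRIPTION = {              # hypothetical description of one subsystem
    "status": lambda: "OK",
    "position": lambda: "12.345",
}

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:                    # one command per line
            cmd = raw.decode().strip()
            fn = DESCRIPTION.get(cmd)
            reply = fn() if fn else f"ERROR unknown command {cmd}"
            self.wfile.write((reply + "\n").encode())

if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 9000), Handler) as srv:
        srv.serve_forever()   # e.g. test with: printf 'status\n' | nc 127.0.0.1 9000
```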
Construction of In-house Databases in a Corporation
NASA Astrophysics Data System (ADS)
Sano, Hikomaro
This report outlines “Repoir” (Report information retrieval) system of Toyota Central R & D Laboratories, Inc. as an example of in-house information retrieval system. The online system was designed to process in-house technical reports with the aid of a mainframe computer and has been in operation since 1979. Its features are multiple use of the information for technical and managerial purposes and simplicity in indexing and data input. The total number of descriptors, specially selected for the system, was minimized for ease of indexing. The report also describes the input items, processing flow and typical outputs in kanji letters.
Mathis, Alexander; Depaquit, Jérôme; Dvořák, Vit; Tuten, Holly; Bañuls, Anne-Laure; Halada, Petr; Zapata, Sonia; Lehrter, Véronique; Hlavačková, Kristýna; Prudhomme, Jorian; Volf, Petr; Sereno, Denis; Kaufmann, Christian; Pflüger, Valentin; Schaffner, Francis
2015-05-10
Rapid, accurate and high-throughput identification of vector arthropods is of paramount importance in surveillance programmes that are becoming more common due to the changing geographic occurrence and extent of many arthropod-borne diseases. Protein profiling by MALDI-TOF mass spectrometry fulfils these requirements for identification, and reference databases have recently been established for several vector taxa, mostly with specimens from laboratory colonies. We established and validated a reference database containing 20 phlebotomine sand fly (Diptera: Psychodidae, Phlebotominae) species by using specimens from colonies or field collections that had been stored for various periods of time. Identical biomarker mass patterns ('superspectra') were obtained with colony- or field-derived specimens of the same species. In the validation study, high-quality spectra (i.e. more than 30 evaluable masses) were obtained with all fresh insects from colonies, and with 55/59 insects deep-frozen (liquid nitrogen/-80 °C) for up to 25 years. In contrast, only 36/52 specimens stored in ethanol could be identified. This resulted in an overall sensitivity of 87% (140/161); specificity was 100%. Duration of storage impaired data counts in the high mass range, and thus cluster analyses of closely related specimens might reflect their storage conditions rather than phenotypic distinctness. A major drawback of MALDI-TOF MS is the restricted availability of in-house databases and the fact that mass spectrometers from two companies (Bruker, Shimadzu) are in widespread use. We have analysed fingerprints of phlebotomine sand flies obtained by an automated routine procedure on a Bruker instrument by using our database and the software established on a Shimadzu system. The sensitivity with 312 specimens from 8 sand fly species from laboratory colonies when evaluating only high-quality spectra was 98.3%; the specificity was 100%. The corresponding diagnostic values with 55 field-collected specimens from 4 species were 94.7% and 97.4%, respectively. A centralized high-quality database (created by expert taxonomists and experienced users of mass spectrometers) that is easily amenable to customer-oriented identification services is a highly desirable resource. As shown in the present work, spectra obtained from different specimens with different instruments can be analysed using a centralized database, which should be available in the near future via an online platform in a cost-efficient manner.
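One simple way to picture matching a query fingerprint against reference "superspectra" is cosine similarity on binned peak lists, as sketched below with toy peaks. Real MALDI-TOF identification pipelines use more elaborate scoring, so this is only an illustration of the matching concept.

```python
# Sketch: fingerprint matching against reference "superspectra" via
# cosine similarity on binned (m/z, intensity) peak lists. Toy data only.
import numpy as np

def binned(peaks, lo=2000, hi=20000, width=10):
    """Turn a list of (m/z, intensity) peaks into a fixed-length vector."""
    vec = np.zeros((hi - lo) // width)
    for mz, inten in peaks:
        if lo <= mz < hi:
            vec[int((mz - lo) // width)] += inten
    return vec

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

superspectra = {"P. papatasi": binned([(3100, 5.0), (4520, 8.0), (9930, 3.0)]),
                "P. perniciosus": binned([(2980, 6.0), (5110, 7.0)])}
query = binned([(3101, 4.5), (4521, 7.5), (9931, 2.5)])   # unknown specimen
best = max(superspectra, key=lambda sp: cosine(query, superspectra[sp]))
print(best)   # -> P. papatasi for this toy query
```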
Implementation of the CUAHSI information system for regional hydrological research and workflow
NASA Astrophysics Data System (ADS)
Bugaets, Andrey; Gartsman, Boris; Bugaets, Nadezhda; Krasnopeyev, Sergey; Krasnopeyeva, Tatyana; Sokolov, Oleg; Gonchukov, Leonid
2013-04-01
Environmental research and education have become increasingly data-intensive as a result of the proliferation of digital technologies, instrumentation, and pervasive networks through which data are collected, generated, shared, and analyzed. Over the next decade, it is likely that science and engineering research will produce more scientific data than has been created over the whole of human history (Cox et al., 2006). Successful use of these data to achieve new scientific breakthroughs depends on the ability to access, organize, integrate, and analyze these large datasets. The new project of PGI FEB RAS (http://tig.dvo.ru), FERHRI (www.ferhri.org) and Primgidromet (www.primgidromet.ru) is focused on the creation of an open, unified hydrological information system according to international standards to support hydrological investigation, water management, and forecast systems. Within the hydrologic science community, the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (http://his.cuahsi.org) has been developing a distributed network of data sources and functions that are integrated using web services and that provide access to data, tools, and models that enable synthesis, visualization, and evaluation of hydrologic system behavior. On top of the CUAHSI technologies, the first two template databases were developed for primary datasets of special observations on experimental basins in the Far East Region of Russia. The first database contains data from special observations performed at the former (1957-1994) Primorskaya Water-Balance Station (1500 km2). Measurements were carried out at 20 hydrological and 40 rain gauging stations and were published as special series, but only as hardcopy books. The database provides raw data from loggers with hourly and daily time support. The second database, called «FarEastHydro», provides published standard daily measurements performed at the Roshydromet observation network (200 hydrological and meteorological stations) for the period 1930 through 1990. Both of the data resources are maintained in a test mode at the project site http://gis.dvo.ru:81/, which is permanently updated. After this first success, the decision was made to use the CUAHSI technology as a basis for the development of a hydrological information system to support data publishing and the workflow of Primgidromet, the regional office of the Federal State Hydrometeorological Agency. At the moment, the Primgidromet observation network is equipped with 34 automatic SEBA hydrological pressure-sensor pneumatic gauges PS-Light-2 and 36 automatic SEBA weather stations. Large datasets generated by sensor networks are organized and stored within a central ODM database, which makes it possible to unambiguously interpret the data with sufficient metadata and provides a traceable heritage from raw measurements to usable information. Organization of the data within a central CUAHSI ODM database was the most critical step, with several important implications. This technology is widespread and well documented, and it ensures that all datasets are publicly available and readily used by other investigators and developers to support additional analyses and hydrological modeling. Implementation of ODM within a Relational Database Management System eliminates potential data manipulation errors and intermediate data processing steps. Wrapping the CUAHSI WaterOneFlow web service into an OpenMI 2.0 linkable component (www.openmi.org) allows seamless integration with well-known hydrological modeling systems.
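Consuming such a service typically means one SOAP call per series. The hedged sketch below uses the zeep SOAP client with a placeholder endpoint; the GetValues parameter names follow the common WaterOneFlow 1.1 convention and should be checked against the actual service WSDL, and the site/variable codes are invented. It requires a live endpoint to run.

```python
# Hedged sketch: pulling a time series from a CUAHSI WaterOneFlow service.
from zeep import Client

WSDL = "http://example.org/FarEastHydro/cuahsi_1_1.asmx?WSDL"  # placeholder URL

client = Client(WSDL)                       # fetches and parses the WSDL
response = client.service.GetValues(
    location="FarEastHydro:Station001",     # network:site code (assumed)
    variable="FarEastHydro:Discharge",      # network:variable code (assumed)
    startDate="1970-01-01",
    endDate="1970-12-31",
    authToken="",
)
print(response)   # WaterML response with site, variable, and time-series values
```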
Establishment of an international database for genetic variants in esophageal cancer.
Vihinen, Mauno
2016-10-01
The establishment of a database has been suggested in order to collect, organize, and distribute genetic information about esophageal cancer. The World Organization for Specialized Studies on Diseases of the Esophagus and the Human Variome Project will be in charge of a central database of information about esophageal cancer-related variations from publications, databases, and laboratories; in addition to genetic details, clinical parameters will also be included. The aim will be to get all the central players in research, clinical, and commercial laboratories to contribute. The database will follow established recommendations and guidelines. The database will require a team of dedicated curators with different backgrounds. Numerous layers of systematics will be applied to facilitate computational analyses. The data items will be extensively integrated with other information sources. The database will be distributed as open access to ensure exchange of the data with other databases. Variations will be reported in relation to reference sequences on three levels (DNA, RNA, and protein) whenever applicable. In the first phase, the database will concentrate on genetic variations, including both somatic and germline variations for susceptibility genes. Additional types of information can be integrated at a later stage. © 2016 New York Academy of Sciences.
Remote monitoring of patients with implanted devices: data exchange and integration.
Van der Velde, Enno T; Atsma, Douwe E; Foeken, Hylke; Witteman, Tom A; Hoekstra, Wybo H G J
2013-06-01
Remote follow-up of implantable cardioverter-defibrillators (ICDs) may offer a solution to the problem of overcrowded outpatient clinics, and may also be effective in detecting clinical events early. Data obtained from remote follow-up systems, as developed by all major device companies, are stored in a central database system operated and owned by the device company. A problem now arises that the patient's clinical information is partly stored in the local electronic health record (EHR) system in the hospital, and partly in the remote monitoring database, which may potentially result in patient safety issues. To address the requirement of integrating remote monitoring data in the local EHR, the Integrating the Healthcare Enterprise (IHE) Implantable Device Cardiac Observation (IDCO) profile has been developed. This IHE IDCO profile has been adopted by all major device companies. In our hospital, we have implemented the IHE IDCO profile to import data from the remote databases of two device vendors into the departmental Cardiology Information System (EPD-Vision). Data is exchanged via an HL7/XML communication protocol, as defined in the IHE IDCO profile. By implementing the IHE IDCO profile, we have been able to integrate the data from the remote monitoring databases into our local EHRs. It can be expected that remote monitoring systems will develop into dedicated monitoring and therapy platforms. Data retrieved from these systems should form an integral part of the electronic patient record as more and more outpatient clinic care shifts to personalized care provided at a distance, in other words at the patient's home.
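At its core, such an import unpacks an XML observation message into rows for the local system. The sketch below is heavily hedged: the element and attribute names are invented stand-ins, not the actual IHE IDCO message structure, which should be taken from the profile itself.

```python
# Hedged sketch: unpack an HL7/XML-style device observation payload into
# rows for a local cardiology system. Element names are hypothetical.
import xml.etree.ElementTree as ET

payload = """<observationSet patient="12345" device="ICD-9876">
  <observation code="BATTERY_VOLTAGE" value="2.79" unit="V"/>
  <observation code="LEAD_IMPEDANCE_RV" value="510" unit="Ohm"/>
</observationSet>"""

root = ET.fromstring(payload)
rows = [(root.get("patient"), root.get("device"),
         ob.get("code"), float(ob.get("value")), ob.get("unit"))
        for ob in root.findall("observation")]
for row in rows:
    print(row)   # tuples ready to insert into the local EHR tables
```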
"Mr. Database" : Jim Gray and the History of Database Technologies.
Hanwahr, Nils C
2017-12-01
Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Preheim, Larry E.
1990-01-01
Data systems requirements in the Earth Observing System (EOS) Space Station Freedom (SSF) eras indicate increasing data volume, increased discipline interplay, higher complexity and broader data integration and interpretation. A response to the needs of the interdisciplinary investigator is proposed, considering the increasing complexity and rising costs of scientific investigation. The EOS Data Information System, conceived to be a widely distributed system with reliable communication links between central processing and the science user community, is described. Details are provided on information architecture, system models, intelligent data management of large complex databases, and standards for archiving ancillary data, using a research library, a laboratory and collaboration services.
Integrated technologies for solid waste bin monitoring system.
Arebey, Maher; Hannan, M A; Basri, Hassan; Begum, R A; Abdullah, Huda
2011-06-01
Communication technologies such as radio frequency identification (RFID), the global positioning system (GPS), the general packet radio system (GPRS), and a geographic information system (GIS) are integrated with a camera to construct a solid waste monitoring system. The aim is to improve the response to customer inquiries and emergency cases and to estimate the amount of solid waste without any involvement of the truck driver. The proposed system consists of an RFID tag mounted on the bin, an RFID reader in the truck, GPRS/GSM as the web server, and GIS as the map server, database server, and control server. The tracking devices mounted in the trucks collect location information in real time via GPS. This information is transferred continuously through GPRS to a central database. The users are able to view the current location of each truck in the collection stage via a web-based application and thereby manage the fleet. The truck positions and trash bin information are displayed on a digital map, which is made available by a map server. Thus, the solid waste of the bin and the truck are monitored using the developed system.
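The ingest path, truck unit posts RFID bin reads plus GPS fixes over GPRS, central service stores them for the map and control servers, can be sketched as a small HTTP endpoint. The endpoint, payload fields, and schema below are all illustrative assumptions, not the authors' implementation.

```python
# Sketch: a central ingest service for RFID bin reads with GPS fixes.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

db = sqlite3.connect("waste.db", check_same_thread=False)
db.execute("""CREATE TABLE IF NOT EXISTS reads
              (bin_tag TEXT, truck_id TEXT, lat REAL, lon REAL, ts TEXT)""")

class Ingest(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        r = json.loads(body)     # e.g. {"bin_tag": ..., "truck_id": ..., ...}
        db.execute("INSERT INTO reads VALUES (?, ?, ?, ?, ?)",
                   (r["bin_tag"], r["truck_id"], r["lat"], r["lon"], r["ts"]))
        db.commit()
        self.send_response(204)  # stored; nothing to return
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Ingest).serve_forever()
```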
Evaluating avian-habitat relationships models in mixed-conifer forests of the Sierra Nevada
Kathryn L. Purcell; Sallie J. Hejl; Terry A. Larson
1992-01-01
Using data from two studies in the southern and central Sierra Nevada, we compared the presence and abundance of bird species breeding in mixed-conifer forests during 1978-79 and 1983-85 to predictions from the California Wildlife Habitat Relationships (WHR) System. Twelve percent of the species observed in either study were not predicted by the WHR database to occur...
A Test Methodology for Evaluating Cognitive Radio Systems
2014-03-27
assumes that there are base stations which facilitate spectrum coordination by acting as spectrum brokers. Individual sensing nodes may feed local... spectrum information to the base stations [17]. The base station spectrum broker has a geolocation database of known licensed transmitters, but supplements... Sensing nodes are required to feed spectrum knowledge back to the central base station, though this act does not require cognition. Instead, all
The EBI SRS server-new features.
Zdobnov, Evgeny M; Lopez, Rodrigo; Apweiler, Rolf; Etzold, Thure
2002-08-01
Here we report on recent developments at the EBI SRS server (http://srs.ebi.ac.uk). SRS has become an integration system for both data retrieval and sequence analysis applications. The EBI SRS server is a primary gateway to major databases in the field of molecular biology produced and supported at the EBI, as well as the European public access point to the MEDLINE database provided by the US National Library of Medicine (NLM). It is a reference server for the latest developments in data and application integration. The new additions include: the concept of virtual databases; integration of XML databases such as the Integrated Resource of Protein Domains and Functional Sites (InterPro), Gene Ontology (GO), MEDLINE, and metabolic pathways; user-friendly data representation in 'Nice views'; and SRSQuickSearch bookmarklets. SRS6 is a licensed product of LION Bioscience AG, freely available for academics. The EBI SRS server (http://srs.ebi.ac.uk) is a free central resource for molecular biology data as well as a reference server for the latest developments in data integration.
Chen, Po-Hao; Loehfelm, Thomas W; Kamer, Aaron P; Lemmon, Andrew B; Cook, Tessa S; Kohli, Marc D
2016-12-01
The residency review committee of the Accreditation Council of Graduate Medical Education (ACGME) collects data on resident exam volume and sets minimum requirements. However, this data is not made readily available, and the ACGME does not share their tools or methodology. It is therefore difficult to assess the integrity of the data and determine if it truly reflects relevant aspects of the resident experience. This manuscript describes our experience creating a multi-institutional case log, incorporating data from three American diagnostic radiology residency programs. Each of the three sites independently established automated query pipelines from the various radiology information systems in their respective hospital groups, thereby creating a resident-specific database. Then, the three institutional resident case log databases were aggregated into a single centralized database schema. Three hundred thirty residents and 2,905,923 radiologic examinations over a 4-year span were catalogued using 11 ACGME categories. Our experience highlights big data challenges including internal data heterogeneity and external data discrepancies faced by informatics researchers.
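One site's automated query pipeline can be pictured as a small extract-transform-load step: pull exam rows from the local RIS, map each to an ACGME category, and load the result into the shared central schema. The table, column, and exam-code names below are invented, since each institution's RIS differs.

```python
# Sketch: per-site pipeline from a local RIS extract to the central case log.
import sqlite3

CATEGORY_BY_CODE = {"CT CHEST": "CT", "XR CHEST": "Radiography",
                    "US ABD": "Ultrasound"}   # illustrative mapping only

local = sqlite3.connect(":memory:")           # stands in for the local RIS
local.execute("CREATE TABLE exams (resident_id TEXT, exam_date TEXT, exam_code TEXT)")
local.executemany("INSERT INTO exams VALUES (?, ?, ?)",
                  [("R123", "2016-01-05", "CT CHEST"),
                   ("R123", "2016-01-05", "MR BRAIN")])

central = sqlite3.connect("central_caselog.db")
central.execute("""CREATE TABLE IF NOT EXISTS case_log
                   (site TEXT, resident_id TEXT, exam_date TEXT,
                    acgme_category TEXT)""")

rows = local.execute("SELECT resident_id, exam_date, exam_code FROM exams")
central.executemany(
    "INSERT INTO case_log VALUES ('site_A', ?, ?, ?)",
    [(rid, date, CATEGORY_BY_CODE.get(code, "Other")) for rid, date, code in rows])
central.commit()
```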
Müller, H-M; Van Auken, K M; Li, Y; Sternberg, P W
2018-03-09
The biomedical literature continues to grow at a rapid pace, making the challenge of knowledge retrieval and extraction ever greater. Tools that provide a means to search and mine the full text of literature thus represent an important way by which the efficiency of these processes can be improved. We describe the next generation of the Textpresso information retrieval system, Textpresso Central (TPC). TPC builds on the strengths of the original system by expanding the full text corpus to include the PubMed Central Open Access Subset (PMC OA), as well as the WormBase C. elegans bibliography. In addition, TPC allows users to create a customized corpus by uploading and processing documents of their choosing. TPC is UIMA compliant, to facilitate compatibility with external processing modules, and takes advantage of Lucene indexing and search technology for efficient handling of millions of full text documents. Like Textpresso, TPC searches can be performed using keywords and/or categories (semantically related groups of terms), but to provide better context for interpreting and validating queries, search results may now be viewed as highlighted passages in the context of full text. To facilitate biocuration efforts, TPC also allows users to select text spans from the full text and annotate them, create customized curation forms for any data type, and send resulting annotations to external curation databases. As an example of such a curation form, we describe integration of TPC with the Noctua curation tool developed by the Gene Ontology (GO) Consortium. Textpresso Central is an online literature search and curation platform that enables biocurators and biomedical researchers to search and mine the full text of literature by integrating keyword and category searches with viewing search results in the context of the full text. It also allows users to create customized curation interfaces, use those interfaces to make annotations linked to supporting evidence statements, and then send those annotations to any database in the world. Textpresso Central URL: http://www.textpresso.org/tpc.
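The keyword-plus-category search idea, where a category is a semantically related group of terms expanded into the query, can be demonstrated on any Lucene-style index. The sketch below uses the pure-Python Whoosh library as a stand-in for TPC's Lucene backend; the documents and category term list are invented.

```python
# Sketch: keyword + category search on a Lucene-style index (Whoosh).
import os
from whoosh.fields import Schema, TEXT, ID
from whoosh.index import create_in
from whoosh.qparser import QueryParser

os.makedirs("tpc_index", exist_ok=True)
schema = Schema(doc_id=ID(stored=True), body=TEXT(stored=True))
ix = create_in("tpc_index", schema)

writer = ix.writer()
writer.add_document(doc_id="1", body="insulin signaling regulates dauer formation")
writer.add_document(doc_id="2", body="insulin was measured in fasting subjects")
writer.commit()

# Expand a "category" (related terms) into an OR group joined to the keyword.
CATEGORY_REGULATION = ["regulates", "represses", "activates"]
query = QueryParser("body", ix.schema).parse(
    "insulin AND (" + " OR ".join(CATEGORY_REGULATION) + ")")

with ix.searcher() as s:
    for hit in s.search(query):
        print(hit["doc_id"], hit["body"])   # -> only document 1 matches
```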
Preparing College Students To Search Full-Text Databases: Is Instruction Necessary?
ERIC Educational Resources Information Center
Riley, Cheryl; Wales, Barbara
Full-text databases allow Central Missouri State University's clients to access some of the serials that libraries have had to cancel due to escalating subscription costs; EbscoHost, the subject of this study, is one such database. The database is available free to all Missouri residents. A survey was designed consisting of 21 questions intended…
An Integrated Korean Biodiversity and Genetic Information Retrieval System
Lim, Jeongheui; Bhak, Jong; Oh, Hee-Mock; Kim, Chang-Bae; Park, Yong-Ha; Paek, Woon Kee
2008-01-01
Background: On-line biodiversity information databases are growing quickly and being integrated into general bioinformatics systems due to the advances of fast gene sequencing technologies and the Internet. These can reduce the cost and effort of performing biodiversity surveys and genetic searches, which allows scientists to spend more time researching and less time collecting and maintaining data. This will increase the rate of knowledge build-up and improve conservation. The biodiversity databases in Korea have been scattered among several institutes and local natural history museums with incompatible data types. Therefore, a comprehensive database and a nationwide web portal for biodiversity information are necessary in order to integrate diverse information resources, including molecular and genomic databases. Results: The Korean Natural History Research Information System (NARIS) was built and is provided as the central biodiversity information system to collect and integrate the biodiversity data of various institutes and natural history museums in Korea. This database aims to be an integrated resource that contains additional biological information, such as genome sequences and molecular-level diversity. Currently, twelve institutes and museums in Korea are integrated via the DiGIR (Distributed Generic Information Retrieval) protocol, with Darwin Core 2.0 format as its metadata standard for data exchange. Data quality control and statistical analysis functions have been implemented. In particular, integrating molecular and genetic information from the National Center for Biotechnology Information (NCBI) databases with NARIS was recently accomplished. NARIS can also be extended to accommodate other institutes abroad, and the whole system can be exported to establish local biodiversity management servers. Conclusion: A Korean data portal, NARIS, has been developed to efficiently manage and utilize biodiversity data, which includes genetic resources. NARIS aims to be integral in maximizing bio-resource utilization for conservation, management, research, education, industrial applications, and integration with other bioinformation data resources. It can be found at . PMID:19091024
WiFi RFID demonstration for resource tracking in a statewide disaster drill.
Cole, Stacey L; Siddiqui, Javeed; Harry, David J; Sandrock, Christian E
2011-01-01
To investigate the capabilities of Radio Frequency Identification (RFID) tracking of patients and medical equipment during a simulated disaster response scenario. RFID infrastructure was deployed at two small rural hospitals, in one large academic medical center and in two vehicles. Several item types from the mutual aid equipment list were selected for tracking during the demonstration. A central database server was installed at the UC Davis Medical Center (UCDMC) that collected RFID information from all constituent sites. The system was tested during a statewide disaster drill. During the drill, volunteers at UCDMC were selected to locate assets using the traditional method of locating resources and then using the RFID system. This study demonstrated the effectiveness of RFID infrastructure in real-time resource identification and tracking. Volunteers at UCDMC were able to locate assets substantially faster using RFID, demonstrating that real-time geolocation can be substantially more efficient and accurate than traditional manual methods. A mobile, Global Positioning System (GPS)-enabled RFID system was installed in a pediatric ambulance and connected to the central RFID database via secure cellular communication. This system is unique in that it provides for seamless region-wide tracking that adaptively uses and seamlessly integrates both outdoor cellular-based mobile tracking and indoor WiFi-based tracking. RFID tracking can provide a real-time picture of the medical situation across medical facilities and other critical locations, leading to a more coordinated deployment of resources. The RFID system deployed during this study demonstrated the potential to improve the ability to locate and track victims, healthcare professionals, and medical equipment during a region-wide disaster.
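A minimal sketch of the central-tracking pattern the study describes: readers at each site report tag sightings to one database, and a query recovers each tag's last known location. The schema, tag IDs and reader names are hypothetical, not the deployed system's:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE sighting (
    tag_id TEXT, site TEXT, reader TEXT, seen_at TEXT)""")

# Sightings streamed in from constituent sites (invented examples).
rows = [
    ("VENT-07", "UCDMC", "ED-dock", "2011-05-18T09:02:11"),
    ("VENT-07", "UCDMC", "Ward-3B", "2011-05-18T09:40:56"),
]
con.executemany("INSERT INTO sighting VALUES (?,?,?,?)", rows)

# Last known location of each tag = its most recent sighting.
# (SQLite returns the row matching MAX() for the bare columns.)
for row in con.execute("""
    SELECT tag_id, site, reader, MAX(seen_at)
    FROM sighting GROUP BY tag_id"""):
    print(row)
```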
Web-based flood database for Colorado, water years 1867 through 2011
Kohn, Michael S.; Jarrett, Robert D.; Krammes, Gary S.; Mommandi, Amanullah
2013-01-01
In order to provide a centralized repository of flood information for the State of Colorado, the U.S. Geological Survey, in cooperation with the Colorado Department of Transportation, created a Web-based geodatabase for flood information from water years 1867 through 2011 and data for paleofloods occurring in the past 5,000 to 10,000 years. The geodatabase was created using the Environmental Systems Research Institute ArcGIS JavaScript Application Programming Interface 3.2. The database can be accessed at http://cwscpublic2.cr.usgs.gov/projects/coflood/COFloodMap.html. Data on 6,767 flood events at 1,597 individual sites throughout Colorado were compiled to generate the flood database. The data sources of flood information are indirect discharge measurements that were stored in U.S. Geological Survey offices (water years 1867–2011), flood data from indirect discharge measurements referenced in U.S. Geological Survey reports (water years 1884–2011), paleoflood studies from six peer-reviewed journal articles (data on events occurring in the past 5,000 to 10,000 years), and the U.S. Geological Survey National Water Information System peak-discharge database (water years 1883–2010). A number of tests were performed on the flood database to ensure the quality of the data. The Web interface was programmed using the Environmental Systems Research Institute ArcGIS JavaScript Application Programming Interface 3.2, which allows for display, query, georeference, and export of the data in the flood database. The data fields in the flood database used to search and filter the database include hydrologic unit code, U.S. Geological Survey station number, site name, county, drainage area, elevation, data source, date of flood, peak discharge, and field method used to determine discharge. Additional data fields can be viewed and exported, but the data fields described above are the only ones that can be used for queries.
Dennis M. May
1998-01-01
Discusses a regional composite approach to managing timber product output data in a relational database. Describes the development and structure of the regional composite database and demonstrates its use in addressing everyday timber product output information needs.
Descending pain modulation in irritable bowel syndrome (IBS): a systematic review and meta-analysis.
Chakiath, Rosemary J; Siddall, Philip J; Kellow, John E; Hush, Julia M; Jones, Mike P; Marcuzzi, Anna; Wrigley, Paul J
2015-12-10
Irritable bowel syndrome (IBS) is a common functional gastrointestinal disorder. While abdominal pain is a dominant symptom of IBS, many sufferers also report widespread hypersensitivity and present with other chronic pain conditions. The presence of widespread hypersensitivity and extra-intestinal pain conditions suggests central nervous system dysfunction. While central nervous system dysfunction may involve the spinal cord (central sensitisation) and brain, this review will focus on one brain mechanism, descending pain modulation. We will conduct a comprehensive search for the articles indexed in the databases Ovid MEDLINE, Ovid Embase, Ovid PsycINFO and Cochrane Central Register of Controlled Trials (CENTRAL) from their inception to August 2015, that report on any aspect of descending pain modulation in irritable bowel syndrome. Two independent reviewers will screen studies for eligibility, assess risk of bias and extract relevant data. Results will be tabulated and, if possible, a meta-analysis will be carried out. The systematic review outlined in this protocol aims to summarise current knowledge regarding descending pain modulation in IBS. PROSPERO CRD42015024284.
Geochronology Database for Central Colorado
Klein, T.L.; Evans, K.V.; deWitt, E.H.
2010-01-01
This database is a compilation of published and some unpublished isotopic and fission track age determinations in central Colorado. The compiled area extends from the southern Wyoming border to the northern New Mexico border and from approximately the longitude of Denver on the east to Gunnison on the west. Data for the tephrochronology of Pleistocene volcanic ash, carbon-14, Pb-alpha, common-lead, and U-Pb determinations on uranium ore minerals have been excluded.
Virtual Queue in a Centralized Database Environment
NASA Astrophysics Data System (ADS)
Kar, Amitava; Pal, Dibyendu Kumar
2010-10-01
Today is the era of the Internet: gathering knowledge, planning a holiday, booking a ticket and much else can be done online. This paper calculates queuing measures for bookings or purchases made through the Internet, subject to limits on the number of tickets or seats; such transactions involve many database activities, chiefly reads and writes. Treating each request for service as an arrival, and the provision of the required information as a service, the paper estimates the distributions of arrivals and services and derives the various queuing measures (a worked example follows below). For simplicity, the database is treated as a centralized database, since the alternative concept of a distributed database would considerably complicate the calculation.
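For concreteness, here is a short sketch of the standard queuing measures under an M/M/1 model, i.e. Poisson arrivals and exponential service; this is an assumption made for illustration, as the abstract does not commit to particular distributions:

```python
def mm1_measures(lam, mu):
    """Standard M/M/1 results for arrival rate lam and service rate mu."""
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu                # server (database) utilization
    L = rho / (1 - rho)           # mean number of requests in the system
    Lq = rho ** 2 / (1 - rho)     # mean number waiting in the queue
    W = 1 / (mu - lam)            # mean time in system per request
    Wq = rho / (mu - lam)         # mean waiting time before service
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

# e.g. 40 booking requests/s arriving, the database serving 50 requests/s
print(mm1_measures(40.0, 50.0))
```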
NASA Astrophysics Data System (ADS)
Chalmers, Alex
2004-09-01
To increase the security and throughput of ISO container traffic through international terminals, more technology must be applied to the problem. A transnational central archive of inspection records is discussed that can be accessed by national agencies as ISO containers approach their borders. The intent is to improve the throughput and security of the cargo inspection process. A review of currently available digital media archiving technologies is presented, along with their possible application to the tracking of international ISO container shipments. Specific image formats employed by current x-ray inspection systems are discussed. Sample x-ray data from systems in use today that could be entered into such a system are shown. Data from other inspection technologies are shown to be easily integrated, as is the creation of database records suitable for interfacing with other computer systems. Overall system performance requirements are discussed in terms of security, response time and capacity. Suggestions for pilot projects based on existing border inspection processes are also made.
Angelow, Aniela; Schmidt, Matthias; Weitmann, Kerstin; Schwedler, Susanne; Vogt, Hannes; Havemann, Christoph; Hoffmann, Wolfgang
2008-07-01
In our report we describe the concept, strategies and implementation of a central biosample and data management (CSDM) system in the three-centre clinical study of the Transregional Collaborative Research Centre "Inflammatory Cardiomyopathy - Molecular Pathogenesis and Therapy" SFB/TR 19, Germany. Following the requirements of high system resource availability, data security, privacy protection and quality assurance, a web-based CSDM was developed based on Java 2 Enterprise Edition using an Oracle database. An efficient and reliable sample documentation system using bar code labelling, a partitioning storage algorithm and online documentation software was implemented. An online electronic case report form is used to acquire patient-related data. Strict rules for access to the online applications and secure connections are used to account for privacy protection and data security. Challenges for the implementation of the CSDM arose at the project, technical, organisational and staff levels.
Baird, Aaron; Furukawa, Michael F; Rahman, Bushra; Schneller, Eugene S
2014-01-01
Although several previous studies have found "system affiliation" to be a significant and positive predictor of health information technology (IT) adoption, little is known about the association between corporate governance practices and adoption of IT within U.S. integrated delivery systems (IDSs). Rooted in agency theory and corporate governance research, this study examines the association between corporate governance practices (centralization of IT decision rights and strategic alignment between business and IT strategy) and IT adoption, standardization, and innovation within IDSs. Cross-sectional, retrospective analyses using data from the 2011 Health Information and Management Systems Society Analytics Database on adoption within IDSs (N = 485) are used to analyze the correlation between two corporate governance constructs (centralization of IT decision rights and strategic alignment) and three IT constructs (adoption, standardization, and innovation) for clinical and supply chain IT. Multivariate fractional logit, probit, and negative binomial regressions are applied. Multivariate regressions controlling for IDS and market characteristics find that measures of IT adoption, IT standardization, and innovative IT adoption are significantly associated with centralization of IT decision rights and strategic alignment. Specifically, centralization of IT decision rights is associated with 22% higher adoption of Bar Coding for Materials Management and 30%-35% fewer IT vendors for Clinical Data Repositories and Materials Management Information Systems. A combination of centralization and clinical IT strategic alignment is associated with 50% higher Computerized Physician Order Entry adoption, and centralization along with supply chain IT strategic alignment is significantly negatively correlated with Radio Frequency Identification adoption. Although IT adoption and standardization are likely to benefit from corporate governance practices within IDSs, innovation is likely to be delayed. In addition, corporate governance is not one-size-fits-all, and contingencies are important considerations.
Annane, Djillali; Pastores, Stephen M; Arlt, Wiebke; Balk, Robert A; Beishuizen, Albertus; Briegel, Josef; Carcillo, Joseph; Christ-Crain, Mirjam; Cooper, Mark S; Marik, Paul E; Meduri, Gianfranco Umberto; Olsen, Keith M; Rochwerg, Bram; Rodgers, Sophia C; Russell, James A; Van den Berghe, Greet
2017-12-01
To provide a narrative review of the latest concepts and understanding of the pathophysiology of critical illness-related corticosteroid insufficiency (CIRCI). A multispecialty task force of international experts in critical care medicine and endocrinology and members of the Society of Critical Care Medicine (SCCM) and the European Society of Intensive Care Medicine (ESICM). Medline, Database of Abstracts of Reviews of Effects (DARE), Cochrane Central Register of Controlled Trials (CENTRAL) and the Cochrane Database of Systematic Reviews. Three major pathophysiologic events were considered to constitute CIRCI: dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis, altered cortisol metabolism, and tissue resistance to glucocorticoids. The dysregulation of the HPA axis is complex, involving multidirectional crosstalk between the CRH/ACTH pathways, autonomic nervous system, vasopressinergic system, and immune system. Recent studies have demonstrated that plasma clearance of cortisol is markedly reduced during critical illness, explained by suppressed expression and activity of the primary cortisol-metabolizing enzymes in the liver and kidney. Despite the elevated cortisol levels during critical illness, tissue resistance to glucocorticoids is believed to occur due to insufficient glucocorticoid receptor alpha-mediated anti-inflammatory activity. Novel insights into the pathophysiology of CIRCI add to the limitations of the current diagnostic tools to identify at-risk patients and may also impact how corticosteroids are used in patients with CIRCI.
The LANL hemorrhagic fever virus database, a new platform for analyzing biothreat viruses.
Kuiken, Carla; Thurmond, Jim; Dimitrijevic, Mira; Yoon, Hyejin
2012-01-01
Hemorrhagic fever viruses (HFVs) are a diverse set of over 80 viral species, found in 10 different genera comprising five different families: arena-, bunya-, flavi-, filo- and togaviridae. All these viruses are highly variable and evolve rapidly, making them elusive targets for the immune system and for vaccine and drug design. About 55,000 HFV sequences exist in the public domain today. A central website that provides annotated sequences and analysis tools will be helpful to HFV researchers worldwide. The HFV sequence database collects and stores sequence data and provides a user-friendly search interface and a large number of sequence analysis tools, following the model of the highly regarded and widely used Los Alamos HIV database [Kuiken, C., B. Korber, and R.W. Shafer, HIV sequence databases. AIDS Rev, 2003. 5: p. 52-61]. The database uses an algorithm that aligns each sequence to a species-wide reference sequence. The NCBI RefSeq database [Sayers et al. (2011) Database resources of the National Center for Biotechnology Information. Nucleic Acids Res., 39, D38-D51.] is used for this; if a reference sequence is not available, a BLAST search finds the best candidate. Using this method, sequences in each genus can be retrieved pre-aligned. The HFV website can be accessed via http://hfv.lanl.gov.
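A hedged sketch of the reference-selection step described above: use a species-wide RefSeq reference when one is known, otherwise fall back to the best BLAST hit. The lookup table and file names are hypothetical, the real pipeline is more involved, and this assumes NCBI BLAST+ (blastn) is installed and a database was built with makeblastdb:

```python
import subprocess

REFSEQ_BY_SPECIES = {"Lassa virus": "NC_004296.1"}  # illustrative entries only

def pick_reference(species, query_fasta, blast_db):
    """Return a reference accession: RefSeq if known, else best BLAST hit."""
    ref = REFSEQ_BY_SPECIES.get(species)
    if ref:
        return ref
    # No RefSeq entry: take the subject ID of the top BLAST hit as candidate.
    out = subprocess.run(
        ["blastn", "-query", query_fasta, "-db", blast_db,
         "-outfmt", "6 sseqid", "-max_target_seqs", "1"],
        capture_output=True, text=True, check=True)
    return out.stdout.split()[0] if out.stdout else ""
```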
Geologic Map and Map Database of Eastern Sonoma and Western Napa Counties, California
Graymer, R.W.; Brabb, E.E.; Jones, D.L.; Barnes, J.; Nicholson, R.S.; Stamski, R.E.
2007-01-01
Introduction This report contains a new 1:100,000-scale geologic map, derived from a set of geologic map databases (Arc-Info coverages) containing information at 1:62,500-scale resolution, and a new description of the geologic map units and structural relations in the map area. Prepared as part of the San Francisco Bay Region Mapping Project, the study area includes the north-central part of the San Francisco Bay region, and forms the final piece of the effort to generate new, digital geologic maps and map databases for an area which includes Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Santa Cruz, Solano, and Sonoma Counties. Geologic mapping in Lake County in the north-central part of the map extent was not within the scope of the Project. The map and map database integrate both previously published reports and new geologic mapping and field checking by the authors (see Sources of Data index map on the map sheet or the Arc-Info coverage eswn-so and the textfile eswn-so.txt). This report contains new ideas about the geologic structures in the map area, including the active San Andreas Fault system, as well as the geologic units and their relations. Together, the map (or map database) and the unit descriptions in this report describe the composition, distribution, and orientation of geologic materials and structures within the study area at regional scale. Regional geologic information is important for analysis of earthquake shaking, liquefaction susceptibility, landslide susceptibility, engineering materials properties, mineral resources and hazards, as well as groundwater resources and hazards. These data also assist in answering questions about the geologic history and development of the California Coast Ranges.
Nosql for Storage and Retrieval of Large LIDAR Data Collections
NASA Astrophysics Data System (ADS)
Boehm, J.; Liu, K.
2015-08-01
Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information. This in turn has led to an enormous growth in data volume. The central idea for a file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than single files of terabyte size. This split of the dataset, commonly referred to as tiling, was usually done to accommodate a specific processing pipeline, so it makes sense to preserve it. A document-oriented NoSQL database can easily emulate this data partitioning by representing each tile (file) in a separate document. The document stores the metadata of the tile; the actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. MongoDB, like any NoSQL database, allows for queries on the attributes of the document. Notably, MongoDB also supports spatial queries, so we can perform spatial queries on the bounding boxes of the LiDAR tiles. Transfer speeds for inserting and retrieving files in the cloud-based database are compared with those of a native file system and cloud storage.
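A minimal sketch of the file-centric pattern described, using pymongo: one document per LiDAR tile holding metadata and a GeoJSON bounding box, the file body in GridFS as the emulated distributed file store, and a spatial query on the tile footprints. Field names and coordinates are illustrative, and a running MongoDB server is assumed:

```python
from pymongo import MongoClient, GEOSPHERE
import gridfs

db = MongoClient()["lidar"]
fs = gridfs.GridFS(db)                      # emulated distributed file store

# Store the tile file itself in GridFS and its metadata as a document.
with open("tile_001.las", "rb") as f:
    file_id = fs.put(f, filename="tile_001.las")

db.tiles.insert_one({
    "file_id": file_id,
    "points": 12_500_000,
    "bbox": {"type": "Polygon", "coordinates": [[   # tile footprint (lon, lat)
        [-0.14, 51.50], [-0.12, 51.50], [-0.12, 51.52],
        [-0.14, 51.52], [-0.14, 51.50]]]},
})
db.tiles.create_index([("bbox", GEOSPHERE)])

# Spatial query: which tiles intersect a given point?
hit = db.tiles.find_one({"bbox": {"$geoIntersects": {"$geometry": {
    "type": "Point", "coordinates": [-0.13, 51.51]}}}})
print(hit["file_id"])
```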
A multi-user real time inventorying system for radioactive materials: a networking approach.
Mehta, S; Bandyopadhyay, D; Hoory, S
1998-01-01
A computerized system for radioisotope management and real time inventory coordinated across a large organization is reported. It handles hundreds of individual users and their separate inventory records. Use of highly efficient computer network and database technologies makes it possible to accept, maintain, and furnish all records related to receipt, usage, and disposal of the radioactive materials for the users separately and collectively. The system's central processor is an HP-9000/800 G60 RISC server, and users from across the organization use their personal computers to log in to this server using the TCP/IP networking protocol, which makes distributed use of the system possible. Radioisotope decay is automatically calculated by the program, so that it can make the up-to-date radioisotope inventory data of an entire institution available immediately. The system is specifically designed to allow use by large numbers of users (about 300) and accommodates high volumes of data input and retrieval without compromising simplicity and accuracy. Overall, it is an example of a true multi-user, on-line, relational database information system that makes the functioning of a radiation safety department efficient.
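The automatic decay correction mentioned above reduces to the standard exponential decay law A(t) = A0 · 2^(−t/T½); a minimal sketch, with illustrative half-lives for common laboratory isotopes:

```python
import math

HALF_LIFE_DAYS = {"P-32": 14.29, "I-125": 59.4, "S-35": 87.5}  # illustrative

def current_activity(isotope, a0_mci, days_elapsed):
    """Activity now, given initial activity a0 (mCi) and elapsed days."""
    t_half = HALF_LIFE_DAYS[isotope]
    return a0_mci * math.exp(-math.log(2) * days_elapsed / t_half)

# One half-life later, 1.0 mCi of P-32 has decayed to ~0.5 mCi.
print(round(current_activity("P-32", 1.0, 14.29), 3))
```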
Nuclear plants gain integrated information systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villavicencio-Ramirez, A.; Rodriquez-Alvarez, J.M.
1994-10-01
With the objective of simplifying the complex mesh of computing devices employed within nuclear power plants, modern technology and integration techniques are being used to form centralized (but backed up) databases and distributed processing and display networks. Benefits are immediate as a result of the integration and the use of standards. The use of a unique data acquisition and database subsystem optimizes the high costs of engineering, as this task is done only once for the life span of the system. This also contributes towards a uniform user interface and allows for graceful expansion and maintenance. This article features an integrated information system, Sistema Integral de Informacion de Proceso (SIIP). The development of this system enabled the Laguna Verde Nuclear Power plant to fully use the already existing universe of signals and its related engineering during all plant conditions, namely, start up, normal operation, transient analysis, and emergency operation. Integrated systems offer many advantages over segregated systems, and this experience should benefit similar development efforts in other electric power utilities, not only for nuclear but also for other types of generating plants.
Gupta, Amarnath; Bug, William; Marenco, Luis; Qian, Xufei; Condit, Christopher; Rangarajan, Arun; Müller, Hans Michael; Miller, Perry L.; Sanders, Brian; Grethe, Jeffrey S.; Astakhov, Vadim; Shepherd, Gordon; Sternberg, Paul W.; Martone, Maryann E.
2009-01-01
The overarching goal of the NIF (Neuroscience Information Framework) project is to be a one-stop-shop for Neuroscience. This paper provides a technical overview of how the system is designed. The technical goal of the first version of the NIF system was to develop an information system that a neuroscientist can use to locate relevant information from a wide variety of information sources by simple keyword queries. Although the user would provide only keywords to retrieve information, the NIF system is designed to treat them as concepts whose meanings are interpreted by the system. Thus, a search for a term should find a record containing synonyms of the term. The system is targeted to find information from web pages, publications, databases, web sites built upon databases, XML documents and any other modality in which such information may be published. We have designed a system to achieve this functionality. A central element in the system is an ontology called NIFSTD (for NIF Standard) constructed by amalgamating a number of known and newly developed ontologies. NIFSTD is used by our ontology management module, called OntoQuest, to perform ontology-based search over data sources. The NIF architecture currently provides three different mechanisms for searching heterogeneous data sources including relational databases, web sites, XML documents and full text of publications. Version 1.0 of the NIF system is currently in beta test and may be accessed through http://nif.nih.gov. PMID:18958629
77 FR 39687 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-05
..., Form, and OMB Number: Defense Sexual Assault Incident Database (DSAID); OMB Control Number 0704-0482... sexual assault data collected by the Military Services. This database shall be a centralized, case-level database for the uniform collection of data regarding incidence of sexual assaults involving persons...
Code of Federal Regulations, 2010 CFR
2010-10-01
... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...
Code of Federal Regulations, 2014 CFR
2014-10-01
... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...
Code of Federal Regulations, 2012 CFR
2012-10-01
... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...
Code of Federal Regulations, 2013 CFR
2013-10-01
... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...
48 CFR 52.204-7 - Central Contractor Registration.
Code of Federal Regulations, 2012 CFR
2012-10-01
... (CCR) database means the primary Government repository for Contractor information required for the...) for the same concern. Registered in the CCR database means that— (1) The Contractor has entered all... Federal Funding Accountability and Transparency Act of 2006 (see subpart 4.14), into the CCR database; and...
Code of Federal Regulations, 2011 CFR
2011-10-01
... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...
The Chemical Aquatic Fate and Effects (CAFE) database, developed by NOAA’s Emergency Response Division (ERD), is a centralized data repository that allows for unrestricted access to fate and effects data. While this database was originally designed to help support decisions...
Rouhani, R; Cronenberger, H; Stein, L; Hannum, W; Reed, A M; Wilhelm, C; Hsiao, H
1995-01-01
This paper describes the design, authoring, and development of interactive, computerized, multimedia clinical simulations in pediatric rheumatology/immunology and related musculoskeletal diseases, the development and implementation of a high speed information management system for their centralized storage and distribution, and analytical methods for evaluating the total system's educational impact on medical students and pediatric residents. An FDDI fiber optic network with client/server/host architecture is the core. The server houses digitized audio, still-image video clips and text files. A host station houses the DB2/2 database containing case-associated labels and information. Cases can be accessed from any workstation via a customized interface in AVA/2 written specifically for this application. OS/2 Presentation Manager controls, written in C, are incorporated into the interface. This interface allows SQL searches and retrievals of cases and case materials. In addition to providing user-directed clinical experiences, this centralized information management system provides designated faculty with the ability to add audio notes and visual pointers to image files. Users may browse through case materials, mark selected ones and download them for utilization in lectures or for editing and converting into 35mm slides.
2012-01-01
Background In the Nordic countries Denmark, Finland, Norway and Sweden, the majority of dairy herds are covered by disease recording systems, in general based on veterinary registration of diagnoses and treatments. Disease data are submitted to the national cattle databases where they are combined with, e.g., production data at cow level, and used for breeding programmes, advisory work and herd health management. Previous studies have raised questions about the quality of the disease data. The main aim of this study was to examine the country-specific completeness of the disease data, regarding clinical mastitis (CM) diagnosis, in each of the national cattle databases. A second aim was to estimate country-specific CM incidence rates (IRs). Results Over 4 months in 2008, farmers in the four Nordic countries recorded clinical diseases in their dairy cows. Their registrations were matched to registrations in the central cattle databases. The country-specific completeness of disease registrations was calculated as the proportion of farmer-recorded cases that could be found in the central database. The completeness (95% confidence interval) for veterinary-supervised cases of CM was 0.94 (0.92, 0.97), 0.56 (0.48, 0.64), 0.82 (0.75, 0.90) and 0.78 (0.70, 0.85) in Denmark, Finland, Norway and Sweden, respectively. The completeness of registration of all CM cases, which includes all cases noted by farmers, regardless of whether the cows were seen or treated by a veterinarian or not, was 0.90 (0.87, 0.93), 0.51 (0.43, 0.59), 0.75 (0.67, 0.83) and 0.67 (0.60, 0.75), respectively, in the same countries. The IRs, estimated by Poisson regression in cases per 100 cow-years, based on the farmers’ recordings, were 46.9 (41.7, 52.7), 38.6 (34.2, 43.5), 31.3 (27.2, 35.9) and 26.2 (23.2, 26.9), respectively, which was between 20% (DK) and 100% (FI) higher than the IRs based on recordings in the central cattle databases. Conclusions The completeness for veterinary-supervised cases of CM was considerably less than 100% in all four Nordic countries and differed between countries. Hence, the number of CM cases in dairy cows is underestimated. This has an impact on all areas where the disease data are used. PMID:22866606
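A small sketch of the two quantities estimated in the study: completeness as the proportion of farmer-recorded cases found in the central database, and an incidence rate per 100 cow-years. The numbers below are invented for illustration, not taken from the study:

```python
def completeness(farmer_cases, matched_in_db):
    """Proportion of farmer-recorded cases found in the central database."""
    return matched_in_db / farmer_cases

def ir_per_100_cow_years(cases, cow_years):
    """Incidence rate expressed as cases per 100 cow-years at risk."""
    return 100.0 * cases / cow_years

print(completeness(200, 188))             # e.g. 0.94
print(ir_per_100_cow_years(469, 1000.0))  # e.g. 46.9 cases per 100 cow-years
```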
NASA Astrophysics Data System (ADS)
Patel, M. N.; Young, K.; Halling-Brown, M. D.
2018-03-01
The demand for medical images for research is ever increasing owing to the rapid rise in novel machine learning approaches for early detection and diagnosis. The OPTIMAM Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, annotations and expert-determined ground truths. Since the inception of the database in early 2011, the volume of images and associated data collected has dramatically increased owing to automation of the collection pipeline and inclusion of new sites. Currently, these data are stored at each respective collection site and synced periodically to a central store. This leads to a large data footprint at each site, requiring large physical onsite storage, which is expensive. Here, we propose an update to the OMI-DB collection system, whereby all the data are automatically transferred to the cloud on collection. This change in the data collection paradigm reduces the reliance on physical servers at each site, allows greater scope for future expansion, removes the need for dedicated backups, and improves security. Moreover, as the number of applications requesting access to the data increases rapidly with the maturity of the dataset, cloud technology facilitates faster sharing of data and better auditing of data access. Such updates, although they may sound trivial, require substantial modification to the existing pipeline to ensure data integrity and security compliance. Here, we describe the extensions to the OMI-DB collection pipeline and discuss the relative merits of the new system.
Leon, Antonette E; Fabricio, Aline S C; Benvegnù, Fabio; Michilin, Silvia; Secco, Annamaria; Spangaro, Omar; Meo, Sabrina; Gion, Massimo
2011-01-01
The Nanosized Cancer Polymarker Biochip Project (RBLA03S4SP) funded by an Italian MIUR-FIRB grant (Italian Ministry of University and Research - Investment Funds for Basic Research) has led to the creation of a free-access dynamic website, available at the web address https://serviziweb.ulss12.ve.it/firbabo, and of a centralized database with password-restricted access. The project network is composed of 9 research units (RUs) and has been active since 2005. The aim of the FIRB project was the design, production and validation of optoelectronic and chemoelectronic biosensors for the simultaneous detection of a novel class of cancer biomarkers associated with immunoglobulins of the M class (IgM) for early diagnosis of cancer. Biomarker immune complexes (BM-ICs) were assessed on samples of clinical cases and matched controls for breast, colorectal, liver, ovarian and prostate malignancies. This article describes in detail the architecture of the project website, the central database application, and the biobank developed for the FIRB Nanosized Cancer Polymarker Biochip Project. The article also illustrates many unique aspects that should be considered when developing a database within a multidisciplinary scenario. The main deliverables of the project were numerous, including the development of an online database which archived 1400 case report forms (700 cases and 700 matched controls) and more than 2700 experimental results relative to the BM-ICs assayed. The database also allowed for the traceability and retrieval of 21,000 aliquots archived in the centralized bank and stored as backup in the RUs, and for the development of a centralized biological bank in the coordinating unit with 6300 aliquots of serum. The constitution of the website and biobank database enabled optimal coordination of the RUs involved, highlighting the importance of sharing samples and scientific data in a multicenter setting for the achievement of the project goals.
Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario
2004-01-01
This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients over an intranet and transformed via eXtensible Stylesheet Language (XSL) for uniform visualization in commodity browsers. The core server operation software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and useful for crafting a dynamic Web environment.
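A minimal analogue of the XML-plus-XSL pattern described above. The prototype's server side was PHP; this sketch uses Python and lxml instead, with an invented referral record and stylesheet:

```python
from lxml import etree

# A toy referral record, standing in for a BIRD entry.
record = etree.XML(b"""<referral><patient>Jane Doe</patient>
<image href="ct_scan_01.jpg"/></referral>""")

# An XSL stylesheet that renders the record uniformly as HTML.
xslt = etree.XML(b"""<xsl:stylesheet version="1.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/referral">
    <html><body>
      <h1><xsl:value-of select="patient"/></h1>
      <img src="{image/@href}"/>
    </body></html>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt)
print(etree.tostring(transform(record), pretty_print=True).decode())
```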
Design considerations, architecture, and use of the Mini-Sentinel distributed data system.
Curtis, Lesley H; Weiner, Mark G; Boudreau, Denise M; Cooper, William O; Daniel, Gregory W; Nair, Vinit P; Raebel, Marsha A; Beaulieu, Nicolas U; Rosofsky, Robert; Woodworth, Tiffany S; Brown, Jeffrey S
2012-01-01
We describe the design, implementation, and use of a large, multiorganizational distributed database developed to support the Mini-Sentinel Pilot Program of the US Food and Drug Administration (FDA). As envisioned by the US FDA, this implementation will inform and facilitate the development of an active surveillance system for monitoring the safety of medical products (drugs, biologics, and devices) in the USA. A common data model was designed to address the priorities of the Mini-Sentinel Pilot and to leverage the experience and data of participating organizations and data partners. A review of existing common data models informed the process. Each participating organization designed a process to extract, transform, and load its source data, applying the common data model to create the Mini-Sentinel Distributed Database. Transformed data were characterized and evaluated using a series of programs developed centrally and executed locally by participating organizations. A secure communications portal was designed to facilitate queries of the Mini-Sentinel Distributed Database and transfer of confidential data, analytic tools were developed to facilitate rapid response to common questions, and distributed querying software was implemented to facilitate rapid querying of summary data. As of July 2011, information on 99,260,976 health plan members was included in the Mini-Sentinel Distributed Database. The database includes 316,009,067 person-years of observation time, with members contributing, on average, 27.0 months of observation time. All data partners have successfully executed distributed code and returned findings to the Mini-Sentinel Operations Center. This work demonstrates the feasibility of building a large, multiorganizational distributed data system in which organizations retain possession of their data that are used in an active surveillance system. Copyright © 2012 John Wiley & Sons, Ltd.
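A conceptual sketch of the distributed-querying idea described above: each data partner executes the same query against its local copy of the common data model and returns only summary counts, which the coordinating center aggregates, so organizations retain possession of their data. Everything here is illustrative, not Mini-Sentinel code:

```python
from collections import Counter

def run_local_query(local_rows, drug):
    """Executed at a data partner; only aggregate counts leave the site."""
    counts = Counter()
    for row in local_rows:
        if row["drug"] == drug:
            counts[row["outcome"]] += 1
    return counts

# Central aggregation over the partners' summary results (data stay local).
partners = [
    [{"drug": "X", "outcome": "event"}, {"drug": "X", "outcome": "no event"}],
    [{"drug": "X", "outcome": "event"}],
]
total = sum((run_local_query(p, "X") for p in partners), Counter())
print(total)   # Counter({'event': 2, 'no event': 1})
```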
Distributed cyberinfrastructure tools for automated data processing of structural monitoring data
NASA Astrophysics Data System (ADS)
Zhang, Yilan; Kurata, Masahiro; Lynch, Jerome P.; van der Linden, Gwendolyn; Sederat, Hassan; Prakash, Atul
2012-04-01
The emergence of cost-effective sensing technologies has now enabled the use of dense arrays of sensors to monitor the behavior and condition of large-scale bridges. The continuous operation of dense networks of sensors presents a number of new challenges, including how to manage the massive amounts of data created by the system. This paper reports on the progress of the creation of cyberinfrastructure tools which hierarchically control networks of wireless sensors deployed in a long-span bridge. The internet-enabled cyberinfrastructure is centrally managed by a powerful database which controls the flow of data in the entire monitoring system architecture. A client-server model built upon the database provides both data providers and system end-users with secured access to various levels of information of a bridge. In the system, information on bridge behavior (e.g., acceleration, strain, displacement) and environmental condition (e.g., wind speed, wind direction, temperature, humidity) are uploaded to the database from sensor networks installed in the bridge. Then, data interrogation services interface with the database via client APIs to autonomously process data. The current research effort focuses on an assessment of the scalability and long-term robustness of the proposed cyberinfrastructure framework that has been implemented along with a permanent wireless monitoring system on the New Carquinez (Alfred Zampa Memorial) Suspension Bridge in Vallejo, CA. Many data interrogation tools are under development using sensor data and bridge metadata (e.g., geometric details, material properties). Sample data interrogation clients include those for the detection of faulty sensors and automated modal parameter extraction.
Component Database for the APS Upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veseli, S.; Arnold, N. D.; Jarosz, D. P.
The Advanced Photon Source Upgrade (APS-U) project will replace the existing APS storage ring with a multi-bend achromat (MBA) lattice to provide extreme transverse coherence and extreme brightness x-rays to its users. As the time to replace the existing storage ring accelerator is of critical concern, an aggressive one-year removal/installation/testing period is being planned. To aid in the management of the thousands of components to be installed in such a short time, the Component Database (CDB) application is being developed to identify, document, track, locate, and organize components in a central database. Three major domains are being addressed: Component definitions (which together make up an exhaustive "Component Catalog"), Designs (groupings of components to create subsystems), and Component Instances ("Inventory"). Relationships between the major domains offer additional "system knowledge" to be captured that will be leveraged with future tools and applications. It is imperative to provide sub-system engineers with a functional application early in the machine design cycle. Topics discussed in this paper include the initial design and deployment of CDB, as well as future development plans.
HormoneBase, a population-level database of steroid hormone levels across vertebrates
Vitousek, Maren N.; Johnson, Michele A.; Donald, Jeremy W.; Francis, Clinton D.; Fuxjager, Matthew J.; Goymann, Wolfgang; Hau, Michaela; Husak, Jerry F.; Kircher, Bonnie K.; Knapp, Rosemary; Martin, Lynn B.; Miller, Eliot T.; Schoenle, Laura A.; Uehling, Jennifer J.; Williams, Tony D.
2018-01-01
Hormones are central regulators of organismal function and flexibility that mediate a diversity of phenotypic traits from early development through senescence. Yet despite these important roles, basic questions about how and why hormone systems vary within and across species remain unanswered. Here we describe HormoneBase, a database of circulating steroid hormone levels and their variation across vertebrates. This database aims to provide all available data on the mean, variation, and range of plasma glucocorticoids (both baseline and stress-induced) and androgens in free-living and un-manipulated adult vertebrates. HormoneBase (www.HormoneBase.org) currently includes >6,580 entries from 476 species, reported in 648 publications from 1967 to 2015, and unpublished datasets. Entries are associated with data on the species and population, sex, year and month of study, geographic coordinates, life history stage, method and latency of hormone sampling, and analysis technique. This novel resource could be used for analyses of the function and evolution of hormone systems, and the relationships between hormonal variation and a variety of processes including phenotypic variation, fitness, and species distributions. PMID:29786693
Database resources of the National Center for Biotechnology Information.
2016-01-04
The National Center for Biotechnology Information (NCBI) provides a large suite of online resources for biological information and data, including the GenBank(®) nucleic acid sequence database and the PubMed database of citations and abstracts for published life science journals. Additional NCBI resources focus on literature (PubMed Central (PMC), Bookshelf and PubReader), health (ClinVar, dbGaP, dbMHC, the Genetic Testing Registry, HIV-1/Human Protein Interaction Database and MedGen), genomes (BioProject, Assembly, Genome, BioSample, dbSNP, dbVar, Epigenomics, the Map Viewer, Nucleotide, Probe, RefSeq, Sequence Read Archive, the Taxonomy Browser and the Trace Archive), genes (Gene, Gene Expression Omnibus (GEO), HomoloGene, PopSet and UniGene), proteins (Protein, the Conserved Domain Database (CDD), COBALT, Conserved Domain Architecture Retrieval Tool (CDART), the Molecular Modeling Database (MMDB) and Protein Clusters) and chemicals (Biosystems and the PubChem suite of small molecule databases). The Entrez system provides search and retrieval operations for most of these databases. Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized datasets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. Published by Oxford University Press on behalf of Nucleic Acids Research 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Database resources of the National Center for Biotechnology Information.
2015-01-01
The National Center for Biotechnology Information (NCBI) provides a large suite of online resources for biological information and data, including the GenBank(®) nucleic acid sequence database and the PubMed database of citations and abstracts for published life science journals. Additional NCBI resources focus on literature (Bookshelf, PubMed Central (PMC) and PubReader); medical genetics (ClinVar, dbMHC, the Genetic Testing Registry, HIV-1/Human Protein Interaction Database and MedGen); genes and genomics (BioProject, BioSample, dbSNP, dbVar, Epigenomics, Gene, Gene Expression Omnibus (GEO), Genome, HomoloGene, the Map Viewer, Nucleotide, PopSet, Probe, RefSeq, Sequence Read Archive, the Taxonomy Browser, Trace Archive and UniGene); and proteins and chemicals (Biosystems, COBALT, the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), the Molecular Modeling Database (MMDB), Protein Clusters, Protein and the PubChem suite of small molecule databases). The Entrez system provides search and retrieval operations for many of these databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at http://www.ncbi.nlm.nih.gov. Published by Oxford University Press on behalf of Nucleic Acids Research 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Zhang, Yinsheng; Zhang, Guoming; Shang, Qian
2017-01-01
Reusing the data from healthcare information systems can effectively facilitate clinical trials (CTs). How to select candidate patients eligible for CT recruitment criteria is a central task. Related work either depends on DBA (database administrator) to convert the recruitment criteria to native SQL queries or involves the data mapping between a standard ontology/information model and individual data source schema. This paper proposes an alternative computer-aided CT recruitment paradigm, based on syntax translation between different DSLs (domain-specific languages). In this paradigm, the CT recruitment criteria are first formally represented as production rules. The referenced rule variables are all from the underlying database schema. Then the production rule is translated to an intermediate query-oriented DSL (e.g., LINQ). Finally, the intermediate DSL is directly mapped to native database queries (e.g., SQL) automated by ORM (object-relational mapping).
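A hedged analogue of the paradigm described above, with SQLAlchemy standing in for the LINQ step: a recruitment criterion over schema columns becomes an ORM expression, and the ORM renders the native SQL for the target database. The Patient table and criterion are invented for illustration:

```python
from sqlalchemy import create_engine, Column, Integer, Float, select
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Patient(Base):
    __tablename__ = "patient"
    id = Column(Integer, primary_key=True)
    age = Column(Integer)
    hba1c = Column(Float)

# Recruitment rule: age >= 18 AND HbA1c > 7.0  ->  ORM expression  ->  SQL
criteria = select(Patient).where(Patient.age >= 18, Patient.hba1c > 7.0)
print(criteria)   # the ORM emits the native SQL for the configured backend

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
with Session(engine) as session:
    eligible = session.scalars(criteria).all()   # candidate patients
```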
48 CFR 52.204-7 - Central Contractor Registration.
Code of Federal Regulations, 2011 CFR
2011-10-01
... (CCR) database means the primary Government repository for Contractor information required for the...) for the same concern. Registered in the CCR database means that— (1) The Contractor has entered all mandatory information, including the DUNS number or the DUNS+4 number, into the CCR database; and (2) The...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dykstra, D.; Blomer, J.
Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
NASA Technical Reports Server (NTRS)
1992-01-01
Although the need for orthopaedic shoes is increasing, the number of skilled shoemakers has declined. This has led to the development of a CAD/CAM system to design and fabricate orthopaedic footwear. The NASA-developed RIM database management system is the central repository for CUSTOMLAST's information storage. Several other modules also comprise the system. The project was initiated by Langley Research Center and Research Triangle Institute in cooperation with the Veterans Administration and the National Institute for Disability and Rehabilitation Research. Later development was done by North Carolina State University and the University of Missouri-Columbia. The software is licensed by both universities.
Forest cover of Champaign County, Illinois in 1993
Jesus Danilo Chinea; Louis R. Iverson
1997-01-01
The forest cover of Champaign County, in east-central Illinois, was mapped from 1993 aerial photography and entered in a geographical information system database. One hundred and six forest patches cover 3,380 ha. These patches have a mean area of 32 ha, a mean perimeter of 4,851 m, a mean perimeter to area ratio of 237, a fractal dimension of 1.59, and a mean nearest...
NASA Astrophysics Data System (ADS)
Mumladze, Tea; Wang, Haijun; Graham, Gerhard
2017-04-01
The seismic network that forms the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) will ultimately consist of 170 seismic stations (50 primary and 120 auxiliary) in 76 countries around the world. The network is still under development, but more than 80% of it is currently in operation. The objective of seismic monitoring is to detect and locate underground nuclear explosions. However, the data from the IMS can also be widely used for scientific and civil purposes. In this study we present the results of data analysis of the 2016 seismic sequence in Central Italy. Several hundred earthquakes in this sequence were recorded by the seismic stations of the IMS. All events were accurately located by the analysts of the International Data Centre (IDC) of the CTBTO. We will present the epicentral and magnitude distribution, station recordings and teleseismic phases as obtained from the Reviewed Event Bulletin (REB). We will also present a comparison of the IDC database with the databases of the European-Mediterranean Seismological Centre (EMSC) and the U.S. Geological Survey (USGS). The present work shows that IMS data can be used for earthquake sequence analyses and can play an important role in seismological research.
PubDNA Finder: a web database linking full-text articles to sequences of nucleic acids.
García-Remesal, Miguel; Cuevas, Alejandro; Pérez-Rey, David; Martín, Luis; Anguita, Alberto; de la Iglesia, Diana; de la Calle, Guillermo; Crespo, José; Maojo, Víctor
2010-11-01
PubDNA Finder is an online repository that we have created to link PubMed Central manuscripts to the sequences of nucleic acids appearing in them. It extends the search capabilities provided by PubMed Central by enabling researchers to perform advanced searches involving sequences of nucleic acids. This includes, among other features (i) searching for papers mentioning one or more specific sequences of nucleic acids and (ii) retrieving the genetic sequences appearing in different articles. These additional query capabilities are provided by a searchable index that we created by using the full text of the 176 672 papers available at PubMed Central at the time of writing and the sequences of nucleic acids appearing in them. To automatically extract the genetic sequences occurring in each paper, we used an original method we have developed. The database is updated monthly by automatically connecting to the PubMed Central FTP site to retrieve and index new manuscripts. Users can query the database via the web interface provided. PubDNA Finder can be freely accessed at http://servet.dia.fi.upm.es:8080/pubdnafinder
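The paper's extraction method is its own original one; as a naive stand-in, this sketch pulls candidate nucleotide sequences out of full text with a regular expression (uninterrupted runs of A/C/G/T/U above a minimum length):

```python
import re

# Candidate nucleotide sequences: runs of A/C/G/T/U of 12 bases or more.
SEQ_RE = re.compile(r"\b[ACGTU]{12,}\b")

def find_sequences(full_text):
    """Return candidate nucleic acid sequences mentioned in a manuscript."""
    return SEQ_RE.findall(full_text.upper())

text = "The forward primer ACGTGCTAGCTAGGCT was used for amplification."
print(find_sequences(text))   # ['ACGTGCTAGCTAGGCT']
```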
Quercia, Kelly; Tran, Phuong Lien; Jinoro, Jéromine; Herniainasolo, Joséa Lea; Viviano, Manuela; Vassilakos, Pierre; Benski, Caroline; Petignat, Patrick
2018-04-01
Barriers to efficient cervical cancer screening in low- and medium-income countries include the lack of systematic monitoring of the participants' data. The aim of this study was to assess the feasibility of a mobile health (m-Health) data collection system to facilitate monitoring of women participating in a cervical cancer screening campaign. Women aged 30-65 years, participating in a cervical cancer screening campaign in Ambanja, Madagascar, were invited to participate in the study. Cervical Cancer Prevention System, an m-Health application, allows the registration of clinical data while women are undergoing cervical cancer screening. All data registered in the smartphone were transmitted onto a secure, Web-based platform through the use of an Internet connection. Healthcare providers had access to the central database and could use it for the follow-up visits. Quality of data was assessed by computing the percentage of key data missing. A total of 151 women were recruited in the study. Mean age of participants was 41.8 years. The percentage of missing data for the key variables was less than 0.02%, corresponding to one woman's medical history data, which was not sent to the central database. Technical problems, including transmission of photos, human papillomavirus test results, and pelvic examination data, have subsequently been solved through a system update. The quality of the data was satisfactory and allowed monitoring of cervical cancer screening data of participants. Larger studies evaluating the efficacy of the system for women's follow-up are needed to confirm its long-term effectiveness.
U.S. Army Research Laboratory (ARL) multimodal signatures database
NASA Astrophysics Data System (ADS)
Bennett, Kelly
2008-04-01
The U.S. Army Research Laboratory (ARL) Multimodal Signatures Database (MMSDB) is a centralized collection of sensor data of various modalities that are co-located and co-registered. The signatures include ground and air vehicles, personnel, mortar, artillery, small arms gunfire from potential sniper weapons, explosives, and many other high-value targets. These data are made available to Department of Defense (DoD) and DoD contractors, intelligence agencies, other government agencies (OGA), and academia for use in developing target detection, tracking, and classification algorithms and systems to protect our Soldiers. A platform-independent Web interface disseminates the signatures to researchers and engineers within the scientific community. Hierarchical Data Format 5 (HDF5) signature models provide an excellent solution for the sharing of complex multimodal signature data for algorithmic development and database requirements. Many open source tools for viewing and plotting HDF5 signatures are available over the Web. Seamless integration of HDF5 signatures is possible in both proprietary computational environments, such as MATLAB, and Free and Open Source Software (FOSS) computational environments, such as Octave and Python, for performing signal processing, analysis, and algorithm development. Future developments include extending the Web interface into a portal system for accessing ARL algorithms and signatures, High Performance Computing (HPC) resources, and integrating existing database and signature architectures into sensor networking environments.
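A minimal sketch of reading a multimodal signature stored in HDF5, the dissemination format named above; the group/dataset layout and attribute names here are hypothetical, not ARL's actual schema:

```python
import h5py
import numpy as np

# Write a toy signature file so the read below is self-contained.
with h5py.File("signature.h5", "w") as f:
    grp = f.create_group("acoustic")
    dset = grp.create_dataset("waveform", data=np.random.randn(48000))
    dset.attrs["sample_rate_hz"] = 48000
    dset.attrs["target"] = "small arms gunfire"

# Read the signature back, as a consumer of the database might.
with h5py.File("signature.h5", "r") as f:
    wav = f["acoustic/waveform"][...]
    print(f["acoustic/waveform"].attrs["sample_rate_hz"], wav.shape)
```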
Baek, Hyunjung; Kim, Jae-Hyo; Lee, Beom-Joon
2018-01-01
Background Radiation pneumonitis is a common and serious complication of radiotherapy. Many published randomized controlled trials (RCTs) reveal a growing trend of using herbal medicines as adjuvant therapy to prevent radiation pneumonitis; however, their efficacy and safety remain unexplored. Objective The aim of this systematic review is to evaluate the efficacy and safety of herbal medicines as adjunctive therapy for the prevention of radiation pneumonitis in patients with lung cancer who undergo radiotherapy. Methods We searched the following 11 databases: three English medical databases [MEDLINE (PubMed), EMBASE, The Cochrane Central Register of Controlled Trials (CENTRAL)], five Korean medical databases (Korean Studies Information, Research Information Service System, KoreaMed, DBPIA, National Digital Science Library), and three Chinese medical databases [the China National Knowledge Infrastructure (CNKI), Journal Integration Platform (VIP), and WanFang Database]. The primary outcome was the incidence of radiation pneumonitis. The risk of bias was assessed using the Cochrane risk-of-bias tool. Results Twenty-two RCTs involving 1819 participants were included. The methodological quality was poor for most of the studies. Meta-analysis showed that herbal medicines combined with radiotherapy significantly reduced the incidence of radiation pneumonitis (n = 1819; RR 0.53, 95% CI 0.45–0.63, I² = 8%) and the incidence of severe radiation pneumonitis (n = 903; RR 0.22, 95% CI 0.11–0.41, I² = 0%). Combined therapy also improved the Karnofsky performance score (n = 420; WMD 4.62, 95% CI 1.05–8.18, I² = 82%). Conclusion There is some encouraging evidence that oral administration of herbal medicines combined with radiotherapy may benefit patients with lung cancer by preventing or minimizing radiation pneumonitis. However, due to the poor methodological quality of the identified studies, a definitive conclusion could not be drawn. To confirm the merits of this approach, further rigorously designed large-scale trials are warranted. PMID:29847598
NASA Astrophysics Data System (ADS)
Liu, Kelly H.; Elsheikh, Ahmed; Lemnifi, Awad; Purevsuren, Uranbaigal; Ray, Melissa; Refayee, Hesham; Yang, Bin B.; Yu, Youqiang; Gao, Stephen S.
2014-05-01
We present a shear wave splitting (SWS) database for the western and central United States as part of an ongoing effort to build a uniform SWS database for all of North America. The SWS measurements were obtained by minimizing the energy on the transverse component of the PKS, SKKS, and SKS phases. Each individual measurement was visually checked to ensure quality. This version of the database contains 16,105 pairs of splitting parameters. The data used to generate the parameters were recorded by 1774 digital broadband seismic stations over the period 1989-2012, and represent all the available data from both permanent and portable seismic networks archived at the Incorporated Research Institutions for Seismology Data Management Center in the area of 26.00°N to 50.00°N and 125.00°W to 90.00°W. About 10,000 pairs of the measurements came from the 1092 USArray Transportable Array stations. The results show that approximately two-thirds of the fast orientations are within 30° of the absolute plate motion (APM) direction of the North American plate, and most of the largest departures from the APM are located along the eastern boundary of the western US orogenic zone and in the central Great Basins. The splitting times observed in the western US are larger than, and those in the central US comparable with, the global average of 1.0 s. The uniform database has an unprecedented spatial coverage and can be used for various investigations of the structure and dynamics of the Earth.
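The measurement technique named above, minimizing transverse-component energy, is commonly implemented as a grid search over candidate fast orientations and delay times. Below is a small illustrative NumPy sketch of that idea, not the authors' code; the rotation conventions, sample rate handling (delays in samples), and inputs are assumptions.

```python
# Illustrative grid search for splitting parameters (fast orientation phi,
# delay time dt) by minimizing transverse-component energy.
import numpy as np

def transverse_energy(north, east, phi_deg, shift, baz_deg):
    """Energy on the transverse component after removing a trial splitting."""
    phi = np.radians(phi_deg)
    # Rotate N/E into the trial fast/slow frame.
    fast = north * np.cos(phi) + east * np.sin(phi)
    slow = -north * np.sin(phi) + east * np.cos(phi)
    # Advance the slow component by the trial delay (in samples).
    slow = np.roll(slow, -shift)
    # Rotate back to N/E, then into the transverse direction using the
    # back azimuth (one common sign convention; others exist).
    n = fast * np.cos(phi) - slow * np.sin(phi)
    e = fast * np.sin(phi) + slow * np.cos(phi)
    baz = np.radians(baz_deg)
    transverse = -n * np.sin(baz) + e * np.cos(baz)
    return float(np.sum(transverse ** 2))

def grid_search(north, east, baz_deg, dt_max_samples=40):
    best = (np.inf, None, None)
    for phi in range(-90, 90, 2):                # trial fast orientations
        for shift in range(dt_max_samples):      # trial delay times
            energy = transverse_energy(north, east, phi, shift, baz_deg)
            if energy < best[0]:
                best = (energy, phi, shift)
    return best  # (minimum energy, phi in degrees, dt in samples)
```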
Clinical results of HIS, RIS, PACS integration using data integration CASE tools
NASA Astrophysics Data System (ADS)
Taira, Ricky K.; Chan, Hing-Ming; Breant, Claudine M.; Huang, Lu J.; Valentino, Daniel J.
1995-05-01
Current infrastructure research in PACS is dominated by the development of communication networks (local area networks, teleradiology, ATM networks, etc.), multimedia display workstations, and hierarchical image storage architectures. However, limited work has been performed on developing flexible, extensible, and intelligent information processing architectures for the vast decentralized image and text data repositories prevalent in healthcare environments. Patient information is often distributed among multiple data management systems. Current large-scale efforts to integrate medical information and knowledge sources have been costly and have delivered limited retrieval functionality. Commercial software integration strategies to unify distributed data and knowledge sources are still lacking. Systems heterogeneity (i.e., differences in hardware platforms, communication protocols, database management software, nomenclature, etc.) is at the heart of the problem and is unlikely to be standardized in the near future. In this paper, we demonstrate the use of newly available CASE (computer-aided software engineering) tools to rapidly integrate HIS, RIS, and PACS information systems. The advantages of these tools include fast development time (low-level code is generated from graphical specifications) and easy system maintenance (excellent documentation, easy modification, and a centralized code repository in an object-oriented database). The CASE tools are used to develop and manage the 'middleware' in our client-mediator-server architecture for systems integration. Our architecture is scalable and can accommodate heterogeneous databases and communication protocols.
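To illustrate the client-mediator-server idea in miniature, here is a toy Python sketch in which a mediator presents one unified patient summary over two mock systems with different record layouts; all class and field names are invented, not the paper's middleware.

```python
# Toy mediator sketch: one query interface over heterogeneous sources.
class HISAdapter:
    def __init__(self):
        self._rows = {"123": {"pt_name": "DOE^JOHN", "dob": "1960-01-01"}}
    def fetch(self, patient_id):
        return self._rows.get(patient_id)

class RISAdapter:
    def __init__(self):
        self._rows = {"123": {"name": "John Doe", "last_exam": "CHEST CT"}}
    def fetch(self, patient_id):
        return self._rows.get(patient_id)

class Mediator:
    """Unifies naming and merges results so clients see one schema."""
    def __init__(self, his, ris):
        self.his, self.ris = his, ris
    def patient_summary(self, patient_id):
        his = self.his.fetch(patient_id) or {}
        ris = self.ris.fetch(patient_id) or {}
        return {
            "name": his.get("pt_name") or ris.get("name"),
            "dob": his.get("dob"),
            "last_exam": ris.get("last_exam"),
        }

print(Mediator(HISAdapter(), RISAdapter()).patient_summary("123"))
```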
NASA Enterprise Architecture and Its Use in Transition of Research Results to Operations
NASA Astrophysics Data System (ADS)
Frisbie, T. E.; Hall, C. M.
2006-12-01
Enterprise architecture describes the design of the components of an enterprise, their relationships and how they support the objectives of that enterprise. NASA Stennis Space Center leads several projects involving enterprise architecture tools used to gather information on research assets within NASA's Earth Science Division. In the near future, enterprise architecture tools will link and display the relevant requirements, parameters, observatories, models, decision systems, and benefit/impact information relationships, and map them to the Federal Enterprise Architecture Reference Models. Components configured within the enterprise architecture serving the NASA Applied Sciences Program include the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool. The Earth Science Components Knowledge Base systematically catalogues NASA missions, sensors, models, data products, model products, and network partners appropriate for consideration in NASA Earth Science applications projects. The Systems Components database is a centralized information warehouse of NASA's Earth Science research assets and a critical first link in the implementation of enterprise architecture. The Earth Science Architecture Tool is used to analyze potential NASA candidate systems that may be beneficial to the decision-making capabilities of other Federal agencies. Use of the current configuration of NASA enterprise architecture (the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool) has far exceeded its original intent and has tremendous potential for the transition of research results to operational entities.
NASA Technical Reports Server (NTRS)
Strahler, A. H.; Woodcock, C. E.; Logan, T. L.
1983-01-01
A timber inventory of the Eldorado National Forest, located in east-central California, provides an example of the use of a Geographic Information System (GIS) to stratify large areas of land for sampling and the collection of statistical data. The raster-based GIS format of the VICAR/IBIS software system allows simple and rapid tabulation of areas, and facilitates the selection of random locations for ground sampling. Algorithms that simplify the complex spatial pattern of raster-based information, and convert raster format data to strings of coordinate vectors, provide a link to conventional vector-based geographic information systems.
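As an illustration of how a raster GIS eases area tabulation and the random selection of ground-sample locations, a short sketch with a synthetic raster follows; it is not the VICAR/IBIS implementation, and the class values are invented.

```python
# Sketch of stratified ground sampling from a classified raster:
# tabulate cell counts per class, then draw random cells within a stratum.
import numpy as np

rng = np.random.default_rng(42)
raster = rng.integers(1, 5, size=(100, 100))   # timber classes 1..4 (synthetic)

# Area tabulation: count of cells per stratum.
classes, counts = np.unique(raster, return_counts=True)
print(dict(zip(classes.tolist(), counts.tolist())))

# Draw 10 random ground-sample locations from stratum 2.
rows, cols = np.nonzero(raster == 2)
picks = rng.choice(len(rows), size=10, replace=False)
print(list(zip(rows[picks].tolist(), cols[picks].tolist())))
```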
"New Space Explosion" and Earth Observing System Capabilities
NASA Astrophysics Data System (ADS)
Stensaas, G. L.; Casey, K.; Snyder, G. I.; Christopherson, J.
2017-12-01
This presentation will describe recent developments in spaceborne remote sensing, including an introduction to some of the increasing number of new firms entering the market, new systems and successes from established players, industry consolidation, and reactions to these developments from communities of users. The presentation will draw on results of the Joint Agency Commercial Imagery Evaluation (JACIE) 2017 Civil Commercial Imagery Evaluation Workshop and on the US Geological Survey's Requirements, Capabilities and Analysis for Earth Observation (RCA-EO) centralized Earth observing systems database, including how system performance parameters are related to users' science application requirements.
Design storm prediction and hydrologic modeling using a web-GIS approach on a free-software platform
NASA Astrophysics Data System (ADS)
Castrogiovanni, E. M.; La Loggia, G.; Noto, L. V.
2005-09-01
The aim of this work has been to implement a set of procedures to automate design storm prediction and the evaluation of the flood discharge associated with a selected risk level. For this purpose a Geographic Information System has been implemented using GRASS 5.0. One of the main components of the system is a georeferenced database of the highest-intensity rainfalls, for assigned durations, recorded in Sicily. This database contains the main characteristics of more than 250 raingauges, as well as the values of intense rainfall events recorded by these raingauges. These data are managed through the combined use of PostgreSQL and GRASS-GIS 5.0 databases. Some of the best-known probability distributions have been implemented within the Geographic Information System in order to determine point and/or areal rainfall values once duration and return period have been defined. The system also includes a hydrological module that computes the probable flow, for a selected risk level, at points chosen by the user. A peculiarity of the system is the possibility of querying the model through a web interface. The assumption is that the rising need for geographic information, together with the growing importance of public participation in the decision process, requires new forms of diffusion of territorial data. Furthermore, technicians as well as public administrators need customized, specialist data to support planning, particularly in emergencies. In this perspective a Web interface has been developed for the hydrologic system, allowing remote users to access a centralized database and processing power to serve their needs without complex hardware/software infrastructures.
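As a concrete example of the kind of point design-storm estimate such a system produces, here is a hedged sketch using a Gumbel distribution; the parameter values are invented, and the actual system implements several distributions inside GRASS/PostgreSQL rather than this standalone function.

```python
# Design-storm depth for a chosen return period under a Gumbel law
# fitted to annual-maximum rainfall at one raingauge and duration.
import math

def gumbel_quantile(mode_u, scale_alpha, return_period_years):
    """Depth x whose exceedance probability is 1/T under a Gumbel law."""
    p_non_exceed = 1.0 - 1.0 / return_period_years
    return mode_u - scale_alpha * math.log(-math.log(p_non_exceed))

# Example: hypothetical parameters for 1-hour annual maxima (mm).
print(round(gumbel_quantile(30.0, 8.0, 100), 1))  # 100-year design depth
```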
SISSY: An example of a multi-threaded, networked, object-oriented databased application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scipioni, B.; Liu, D.; Song, T.
1993-05-01
The Systems Integration Support SYstem (SISSY) is presented and its capabilities and techniques are discussed. It is a fully automated data collection and analysis system supporting the SSCL's systems analysis activities as they relate to the Physics Detector and Simulation Facility (PDSF). SISSY itself is a paradigm of effective computing on the PDSF. It uses home-grown code (C++), network programming (RPC, SNMP), relational (SYBASE) and object-oriented (ObjectStore) DBMSs, UNIX operating system services (IRIX threads, cron, system utilities, shell scripts, etc.), and third-party software applications (NetCentral Station, Wingz, DataLink), all of which act together as a single application to monitor and analyze the PDSF.
NASA Technical Reports Server (NTRS)
Mohr, Karen I.; Slayback, Daniel; Yager, Karina
2014-01-01
The central Andes extend from 7° to 21°S, with their eastern boundary defined by elevation (1000 m and greater) and their western boundary by the coastline. The authors used a combination of surface observations, reanalysis, and the University of Utah Tropical Rainfall Measuring Mission (TRMM) precipitation features (PF) database to understand the characteristics of convective systems and associated rainfall in the central Andes during the TRMM era, 1998-2012. Compared to other dry (West Africa), mountainous (Himalayas), and dynamically linked (Amazon) regions in the tropics, the central Andes PF population was distinct, with small and weak PFs dominating its cumulative distribution functions and annual rainfall totals. No more than 10% of PFs in the central Andes met any of the thresholds used to identify and define deep convection (minimum IR cloud-top temperature, minimum 85-GHz brightness temperature, maximum height of the 40-dBZ echo). For most of the PFs, available moisture was limited (less than 35 mm) and instability low (less than 500 J kg⁻¹). The central Andes represents a largely stable, dry to arid environment, limiting system development and organization. Hence, primarily short-duration events (less than 60 min) characterized by shallow convection and light to light-moderate rainfall rates (0.5-4.0 mm h⁻¹) were found.
Badisco, Liesbeth; Huybrechts, Jurgen; Simonet, Gert; Verlinden, Heleen; Marchal, Elisabeth; Huybrechts, Roger; Schoofs, Liliane; De Loof, Arnold; Vanden Broeck, Jozef
2011-03-21
The desert locust (Schistocerca gregaria) displays a fascinating type of phenotypic plasticity, designated 'phase polyphenism'. Depending on environmental conditions, one genome can be translated into two highly divergent phenotypes, termed the solitarious and gregarious (swarming) phases. Although many of the underlying molecular events remain elusive, the central nervous system (CNS) is expected to play a crucial role in the phase transition process. Locusts have also proven to be interesting model organisms in physiological and neurobiological research. However, molecular studies in locusts are hampered by the fact that genome/transcriptome sequence information available for this branch of insects is still limited. We have generated 34,672 raw expressed sequence tags (ESTs) from the CNS of desert locusts in both phases. These ESTs were assembled into 12,709 unique transcript sequences, and nearly 4,000 sequences were functionally annotated. Moreover, the obtained S. gregaria EST information is highly complementary to the existing orthopteran transcriptomic data. Since many novel transcripts encode neuronal signaling and signal transduction components, this paper includes an overview of these sequences. Furthermore, several transcripts differentially represented in solitarious and gregarious locusts were retrieved from this EST database. The findings highlight the involvement of the CNS in the phase transition process and indicate that this novel annotated database may also add to the emerging knowledge of concomitant neuronal signaling and neuroplasticity events. In summary, we met the need for novel sequence data from the desert locust CNS. To our knowledge, we hereby also present the first insect EST database derived from the complete CNS. The obtained S. gregaria EST data constitute an important new source of information that will be instrumental in further unraveling the molecular principles of phase polyphenism, in further establishing locusts as valuable research model organisms, and in molecular evolutionary and comparative entomology.
Private and Efficient Query Processing on Outsourced Genomic Databases.
Ghasemi, Reza; Al Aziz, Md Momin; Mohammed, Noman; Dehkordi, Massoud Hadian; Jiang, Xiaoqian
2017-09-01
Applications of genomic studies are spreading rapidly in many domains of science and technology, such as healthcare, biomedical research, direct-to-consumer services, and legal and forensic applications. However, a number of obstacles make it hard to access and process a big genomic database for these applications. First, sequencing a genome is a time-consuming and expensive process. Second, it requires large-scale computation and storage systems to process genomic sequences. Third, genomic databases are often owned by different organizations, and thus not available for public usage. The cloud computing paradigm can be leveraged to facilitate the creation and sharing of big genomic databases for these applications. Genomic data owners can outsource their databases to a centralized cloud server to ease access to their databases. However, data owners are reluctant to adopt this model, as it requires outsourcing the data to an untrusted cloud service provider that may cause data breaches. In this paper, we propose a privacy-preserving model for outsourcing genomic data to a cloud. The proposed model enables query processing while providing privacy protection of genomic databases. Privacy of the individuals is guaranteed by permuting and adding fake genomic records in the database. These techniques allow the cloud to evaluate count and top-k queries securely and efficiently. Experimental results demonstrate that a count and a top-k query over 40 Single Nucleotide Polymorphisms (SNPs) in a database of 20,000 records take around 100 and 150 s, respectively.
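The two privacy ideas named above, permutation and fake records, can be illustrated conceptually as follows; this sketch is not the paper's cryptographic protocol, only the decoding arithmetic for a count query, with invented records.

```python
# Conceptual sketch: the server counts over real and fake rows alike;
# the owner, who knows the fakes, subtracts their contribution.
import random

real = [{"snp1": "AA", "snp2": "AG"}, {"snp1": "AG", "snp2": "GG"}]
fake = [{"snp1": "GG", "snp2": "AA"}]           # dummy records

outsourced = real + fake
random.shuffle(outsourced)                      # permutation hides ordering

def count_query(db, predicate):
    """Server-side count over all rows, real and fake alike."""
    return sum(1 for row in db if predicate(row))

noisy = count_query(outsourced, lambda r: r["snp1"] == "AG")
fake_matches = count_query(fake, lambda r: r["snp1"] == "AG")
print(noisy - fake_matches)                     # true count = 1
```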
Development of a SNOMED CT based national medication decision support system.
Greibe, Kell
2013-01-01
Physicians often lack the time to familiarize themselves with the details of particular allergies or other drug restrictions. Clinical Decision Support (CDS) based on a structured terminology such as SNOMED CT (SCT) can help physicians get an overview by automatically alerting them to allergies, interactions and other important information. The centralized CDS platform based on SCT checks allergy, interaction, risk-situation-drug and maximum-dose restrictions with the help of databases developed for these specific purposes. The CDS responds to automatic web service requests from the hospital or GP electronic medication system (EMS) during prescription, and returns alerts and information. The CDS also contains a Physicians' Preference Database in which physicians can individually set which kinds of alerts they want to see. The result is clinically useful information that physicians can use as a basis for more effective and safer treatment, without developing alert fatigue.
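A hedged sketch of the request-checking pattern described: the EMS submits a new prescription together with the patient's coded allergies, and the CDS returns alerts. The SCT code and the interaction table below are invented for illustration, not drawn from the Danish system.

```python
# Toy CDS check: allergy and interaction lookup at prescription time.
INTERACTIONS = {("aspirin", "warfarin"): "increased bleeding risk"}
ALLERGY_MAP = {"294505008": "warfarin"}   # hypothetical code -> substance

def check_prescription(new_drug, current_drugs, allergy_codes):
    alerts = []
    for code in allergy_codes:
        if ALLERGY_MAP.get(code) == new_drug:
            alerts.append(f"ALLERGY: patient is coded allergic to {new_drug}")
    for drug in current_drugs:
        pair = tuple(sorted((new_drug, drug)))
        if pair in INTERACTIONS:
            alerts.append(f"INTERACTION with {drug}: {INTERACTIONS[pair]}")
    return alerts

print(check_prescription("warfarin", ["aspirin"], ["294505008"]))
```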
EPA Facility Registry System (FRS): NCES
This web feature service contains location and facility identification information from EPA's Facility Registry System (FRS) for the subset of facilities that link to the National Center for Education Statistics (NCES). The primary federal database for collecting and analyzing data related to education in the United States and other nations, NCES is located in the U.S. Department of Education, within the Institute of Education Sciences. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and state and tribal master facility records, and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to NCES school facilities once the NCES data has been integrated into the FRS database. Additional information on FRS is available at the EPA website http://www.epa.gov/enviro/html/fii/index.html.
Zouaoui, S; Rigau, V; Mathieu-Daudé, H; Darlix, A; Bessaoud, F; Fabbro-Peray, P; Bauchet, F; Kerr, C; Fabbro, M; Figarella-Branger, D; Taillandier, L; Duffau, H; Trétarre, B; Bauchet, L
2012-02-01
This work aimed to prospectively record all primary central nervous system tumor (PCNST) cases in France for which histological diagnosis was available. The objectives were to (i) create a national database and network to perform epidemiological studies, (ii) implement clinical and basic research protocols, and (iii) harmonize the health care of patients affected by PCNST. The methodology is based on a multidisciplinary national network already established by the French Brain Tumor DataBase (FBTDB) (Recensement national histologique des tumeurs primitives du système nerveux central [RnhTPSNC]) and the active participation of the scientific societies involved in neuro-oncology in France. From 2004 to 2009, 43,929 cases of newly diagnosed and histologically confirmed PCNST were recorded. Histological diagnoses included gliomas (42.4%), all other neuroepithelial tumors (4.4%), tumors of the meninges (32.3%), nerve sheath tumors (9.2%), lymphomas (3.4%) and others (8.3%). Cryopreservation was reported for 9603 PCNST specimens. Tumor resections were performed in 78% of cases, while biopsies accounted for 22%. Median age at diagnosis, sex, percentage of resections and number of cryopreserved tumors are detailed for each histology, according to the WHO classification. Many current applications and perspectives for the FBTDB are illustrated in the discussion. To our knowledge, this is the first database in Europe dedicated to PCNST that includes clinical, surgical and histological data (along with cryopreservation of specimens), and it may have major epidemiological, clinical and research implications. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Blending Education and Polymer Science: Semiautomated Creation of a Thermodynamic Property Database
ERIC Educational Resources Information Center
Tchoua, Roselyne B.; Qin, Jian; Audus, Debra J.; Chard, Kyle; Foster, Ian T.; de Pablo, Juan
2016-01-01
Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The…
Vlassov, Vasiliy V; Danishevskiy, Kirill D
2008-01-01
In the 20th century, Russian biomedical science experienced a decline from the blossoming of its early years to a dire state. Through the first decades of the USSR, it was transformed to suit the ideological requirements of a totalitarian state and the biased directives of communist leaders. Later, depressing economic conditions and isolation from the international research community further impeded its development. Contemporary Russia has inherited a system of medical education quite different from the West, as well as counterproductive regulations for the allocation of research funding. The methodology of medical and epidemiological research in Russia is largely outdated. Epidemiology continues to focus on infectious disease, and results of the best studies tend to be published in international periodicals. MEDLINE continues to be the best database to search for Russian biomedical publications, despite only a small proportion being indexed. The database of the Moscow Central Medical Library is the largest national database of medical periodicals, but does not provide abstracts and full subject heading codes, and it does not cover even the entire collection of the Library. New databases and catalogs (e.g. Panteleimon) that have appeared recently are incomplete and do not enable effective searching. PMID:18826569
García-Sancho, Miguel
2011-01-01
This paper explores the introduction of professional systems engineers and information management practices into the first centralized DNA sequence database, developed at the European Molecular Biology Laboratory (EMBL) during the 1980s. In so doing, it complements the literature on the emergence of an information discourse after World War II and its subsequent influence in biological research. As the careers of the database creators and the computer algorithms they designed show, from the mid-1960s onwards information in biology gradually shifted from a pervasive metaphor to something embodied in practices and professionals such as those incorporated at the EMBL. I then investigate the reception of these database professionals by the EMBL biological staff, which evolved from initial disregard to necessary collaboration as the relationship between DNA, genes, and proteins turned out to be more complex than expected. The trajectories of the database professionals at the EMBL suggest that the initial subject matter of the historiography of genomics should be the long-standing practices that emerged after World War II and to a large extent originated outside biomedicine and academia. Only after addressing these practices may historians turn to their further disciplinary assemblage in fields such as bioinformatics or biotechnology.
Simulation and management games for training command and control in emergencies.
Levi, Leon; Bregman, David
2003-01-01
The aim of our project was to introduce and implement simulation techniques in a problematic field: increasing health care system preparedness for disasters. This field was chosen because the relevant knowledge is held by a few experienced staff members, who need to disperse it to others during the busy routine work of the system's personnel. Knowledge management techniques, ranging from classifying the current data to centralized organizational knowledge storage, using it for decision making, and dispersing it through the organization, were used in this project. In the first stage we analyzed the current system for building a preparedness protocol (set of orders). We identified the pitfalls of changing personnel and of losing knowledge gained through lessons from local and national experience. For this stage we developed a database of resources and objects (casualties) to be used in the simulation under different scenarios. One distinction made was between drills run with a trainer and those run in front of computers that allow participants to work out the needed solution. The model rules for different scenarios of multi-casualty incidents, from conventional warfare trauma to combined chemical/toxicological events, as well as levels of care before and inside hospitals, were incorporated into the database management system (we used Microsoft Access's DBMS). The hardware for the management game comprised several networked computers with the possibility of projecting scenes. For the prehospital phase, portable PCs connected to a central server were used to assess bidirectional flow of information. Simulation software (ARENA) and a graphical interface (Visual Basic GUI) were used. We conclude that our system provides solutions that are in use at different levels of the healthcare system to assess and improve management command and control for different scenarios of multi-casualty incidents.
Acupuncture for treating sciatica: a systematic review protocol
Qin, Zongshi; Liu, Xiaoxu; Yao, Qin; Zhai, Yanbing; Liu, Zhishun
2015-01-01
Introduction This systematic review aims to assess the effectiveness and safety of acupuncture for treating sciatica. Methods The following nine databases will be searched from their inception to 30 October 2014: MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials (CENTRAL), the Chinese Biomedical Literature Database (CBM), the Chinese Medical Current Content (CMCC), the Chinese Scientific Journal Database (VIP database), the Wan-Fang Database, the China National Knowledge Infrastructure (CNKI) and Citation Information by National Institute of Informatics (CiNii). Randomised controlled trials (RCTs) of acupuncture for sciatica in English, Chinese or Japanese without restriction of publication status will be included. Two researchers will independently undertake study selection, extraction of data and assessment of study quality. Meta-analysis will be conducted after screening of studies. Data will be analysed using risk ratio for dichotomous data, and standardised mean difference or weighted mean difference for continuous data. Dissemination This systematic review will be disseminated electronically through a peer-reviewed publication or conference presentations. Trial registration number PROSPERO CRD42014015001. PMID:25922105
Meta-All: a system for managing metabolic pathway information.
Weise, Stephan; Grosse, Ivo; Klukas, Christian; Koschützki, Dirk; Scholz, Uwe; Schreiber, Falk; Junker, Björn H
2006-10-23
Many attempts are being made to understand biological subjects at a systems level. A major resource for these approaches are biological databases, storing manifold information about DNA, RNA and protein sequences including their functional and structural motifs, molecular markers, mRNA expression levels, metabolite concentrations, protein-protein interactions, phenotypic traits or taxonomic relationships. The use of these databases is often hampered by the fact that they are designed for special application areas and thus lack universality. Databases on metabolic pathways, which provide an increasingly important foundation for many analyses of biochemical processes at a systems level, are no exception to the rule. Data stored in central databases such as KEGG, BRENDA or SABIO-RK is often limited to read-only access. If experimentalists want to store their own data, possibly still under investigation, there are two possibilities: they can either develop their own information system for managing that data, which is very time-consuming and costly, or they can try to store their data in existing systems, which is often restricted. Hence, an out-of-the-box information system for managing metabolic pathway data is needed. We have designed META-ALL, an information system that allows the management of metabolic pathways, including reaction kinetics, detailed locations, environmental factors and taxonomic information. Data can be stored together with quality tags and in different parallel versions. META-ALL uses Oracle DBMS and Oracle Application Express. We provide the META-ALL information system for download and use. In this paper, we describe the database structure and give information about the tools for submitting and accessing the data. As a first application of META-ALL, we show how the information contained in a detailed kinetic model can be stored and accessed. META-ALL is a system for managing information about metabolic pathways. It facilitates the handling of pathway-related data and is designed to help biochemists and molecular biologists in their daily research. It is available on the Web at http://bic-gh.de/meta-all and can be downloaded free of charge and installed locally.
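To make the "quality tags and parallel versions" idea concrete, here is a minimal relational sketch; META-ALL itself uses Oracle DBMS, so the SQLite schema below is purely illustrative and the table layout is invented.

```python
# Invented mini-schema: pathway entries carry a quality tag and a version,
# so the same reaction can exist in several parallel versions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE reaction (
    id          INTEGER PRIMARY KEY,
    pathway     TEXT NOT NULL,
    equation    TEXT NOT NULL,
    km_value    REAL,              -- reaction kinetics
    location    TEXT,              -- compartment / detailed location
    quality_tag TEXT CHECK (quality_tag IN ('measured','estimated','literature')),
    version     INTEGER NOT NULL   -- parallel versions of the same entry
);
""")
con.execute("INSERT INTO reaction VALUES (1,'glycolysis','GLC + ATP -> G6P + ADP',0.10,'cytosol','measured',1)")
con.execute("INSERT INTO reaction VALUES (2,'glycolysis','GLC + ATP -> G6P + ADP',0.15,'cytosol','estimated',2)")

# Fetch the latest version of each reaction in a pathway.
for row in con.execute("""
    SELECT equation, km_value, quality_tag FROM reaction r
    WHERE version = (SELECT MAX(version) FROM reaction
                     WHERE equation = r.equation)"""):
    print(row)
```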
Vasileiou, Eleftheria; Sheikh, Aziz; Butler, Chris; von Wissmann, Beatrix; McMenamin, Jim; Ritchie, Lewis; Tian, Lilly; Simpson, Colin
2016-01-01
Introduction Influenza vaccination is administered annually as a preventive measure against influenza infection and influenza-related complications in high-risk individuals, such as those with asthma. However, the effectiveness of influenza vaccination against influenza-related complications in people with asthma is still not well established. Methods and analysis We will search the following databases: MEDLINE (Ovid), EMBASE (Ovid), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Cochrane Central Register of Controlled Trials (CENTRAL), Scopus, Cochrane Database of Systematic Reviews (CDSR), Web of Science Core Collection, ScienceDirect, WHO Library Information System (WHOLIS), Global Health Library and Chinese databases (CNKI, Wanfang and ChongQing VIP) from January 1970 to January 2016 for observational and experimental studies on the effectiveness of the influenza vaccine in people with asthma. The identification of studies will be complemented by searching reference lists and citations, and by contacting influenza vaccine manufacturers to identify unpublished or ongoing studies. Two reviewers will extract data and appraise the quality of each study independently. Separate meta-analyses will be undertaken for observational and experimental evidence using fixed-effect or random-effects models, as appropriate. Ethics and dissemination Formal ethical approval is not required, as primary data will not be collected. The review will be disseminated in peer-reviewed publications and conference presentations. PMID:27026658
The LANL hemorrhagic fever virus database, a new platform for analyzing biothreat viruses
Kuiken, Carla; Thurmond, Jim; Dimitrijevic, Mira; Yoon, Hyejin
2012-01-01
Hemorrhagic fever viruses (HFVs) are a diverse set of over 80 viral species, found in 10 different genera across five families: arena-, bunya-, flavi-, filo- and togaviridae. All these viruses are highly variable and evolve rapidly, making them elusive targets for the immune system and for vaccine and drug design. About 55,000 HFV sequences exist in the public domain today. A central website that provides annotated sequences and analysis tools will be helpful to HFV researchers worldwide. The HFV sequence database collects and stores sequence data and provides a user-friendly search interface and a large number of sequence analysis tools, following the model of the highly regarded and widely used Los Alamos HIV database [Kuiken, C., Korber, B. and Shafer, R.W. HIV sequence databases. AIDS Rev, 2003, 5, 52–61]. The database uses an algorithm that aligns each sequence to a species-wide reference sequence. The NCBI RefSeq database [Sayers et al. (2011) Database resources of the National Center for Biotechnology Information. Nucleic Acids Res., 39, D38–D51] is used for this; if a reference sequence is not available, a Blast search finds the best candidate. Using this method, sequences in each genus can be retrieved pre-aligned. The HFV website can be accessed via http://hfv.lanl.gov. PMID:22064861
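The alignment strategy described, direct reference lookup with a Blast fallback, can be sketched as follows. The shared k-mer score is only a stand-in for Blast, and the reference sequences are fake; the real pipeline queries NCBI RefSeq.

```python
# Toy version of "use the species reference if it exists, else pick the
# closest candidate" (k-mer overlap standing in for a Blast search).
REFERENCES = {
    "Ebola virus": "ATGGACAAACGGTTAG",
    "Lassa virus": "ATGAGTGCGTCCAAGG",
}

def kmer_set(seq, k=4):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def pick_reference(species, new_seq):
    if species in REFERENCES:             # RefSeq-style direct lookup
        return species, REFERENCES[species]
    # Fallback: best candidate by shared k-mer count (Blast stand-in).
    scored = [(len(kmer_set(new_seq) & kmer_set(ref)), name, ref)
              for name, ref in REFERENCES.items()]
    _, name, ref = max(scored)
    return name, ref

print(pick_reference("Marburg virus", "ATGGACAAACGGAT")[0])
```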
The MycoBrowser portal: a comprehensive and manually annotated resource for mycobacterial genomes.
Kapopoulou, Adamandia; Lew, Jocelyne M; Cole, Stewart T
2011-01-01
In this paper, we present the MycoBrowser portal (http://mycobrowser.epfl.ch/), a resource that provides both in silico generated and manually reviewed information within databases dedicated to the complete genomes of Mycobacterium tuberculosis, Mycobacterium leprae, Mycobacterium marinum and Mycobacterium smegmatis. A central component of MycoBrowser is TubercuList (http://tuberculist.epfl.ch), which has recently benefited from a new data management system and web interface. These improvements were extended to all MycoBrowser databases. We provide an overview of the functionalities available and the different ways of interrogating the data, and then discuss how both the new information and the latest features are helping the mycobacterial research communities. Copyright © 2010 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuszak, M; Anderson, C; Lee, C
Purpose: With electronic medical records, patient information for the treatment planning process has become disseminated across multiple applications with limited quality control and many associated failure modes. We present the development of a single application with a centralized database to manage the planning process. Methods: The system was designed to replace the current functionalities of (i) static directives representing the physician's intent for the prescription and planning goals, localization information for delivery, and other information, (ii) planning objective reports, (iii) localization and image guidance documents and (iv) the official radiation therapy prescription in the medical record. Using the Eclipse Scripting Application Programming Interface, a plug-in script with an associated domain-specific SQL Server database was created to manage the information in (i)-(iv). The system's user interface and database were designed by a team of physicians, clinical physicists, database experts, and software engineers to ensure usability and robustness for clinical use. Results: The resulting system has been fully integrated within the TPS via a custom script and database. Planning scenario templates, version control, approvals, and logic-based quality control allow this system to fully track and document the planning process as well as physician approval of tradeoffs, while improving the consistency of the data. Multiple plans and prescriptions are supported, along with non-traditional dose objectives and evaluation such as biologically corrected models, composite dose limits, and management of localization goals. User-specific custom views were developed for attending physician review, physicist plan checks, treating therapists, and peer review in chart rounds. Conclusion: A method was developed to maintain cohesive information throughout the planning process within one integrated system by using a custom treatment planning management application that interfaces directly with the TPS. Future work includes quantifying the improvements in quality, safety and efficiency that are possible with the routine clinical use of this system. Supported in part by NIH-P01-CA-059827.
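A tiny sketch of the kind of logic-based quality control described, comparing a plan against the approved directive; the field names are invented, and the real checks run inside the Eclipse plug-in rather than as standalone code.

```python
# Invented logic-based QC rule: flag mismatches between the physician's
# approved directive and what the treatment plan actually contains.
def check_plan(directive, plan):
    issues = []
    if plan["total_dose_gy"] != directive["prescribed_dose_gy"]:
        issues.append("Plan dose differs from physician prescription")
    if plan["fractions"] != directive["fractions"]:
        issues.append("Fractionation differs from directive")
    return issues or ["OK"]

print(check_plan({"prescribed_dose_gy": 60, "fractions": 30},
                 {"total_dose_gy": 60, "fractions": 28}))
```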
Medication-use evaluation with a Web application.
Burk, Muriel; Moore, Von; Glassman, Peter; Good, Chester B; Emmendorfer, Thomas; Leadholm, Thomas C; Cunningham, Francesca
2013-12-15
A Web-based application for coordinating medication-use evaluation (MUE) initiatives within the Veterans Affairs (VA) health care system is described. The MUE Tracker (MUET) software program was created to improve VA's ability to conduct national medication-related interventions throughout its network of 147 medical centers. MUET initiatives are centrally coordinated by the VA Center for Medication Safety (VAMedSAFE), which monitors the agency's integrated databases for indications of suboptimal prescribing or drug therapy monitoring and adverse treatment outcomes. When a pharmacovigilance signal is detected, VAMedSAFE identifies "trigger groups" of at-risk veterans and uploads patient lists to the secure MUET application, where locally designated personnel (typically pharmacists) can access and use the data to target risk-reduction efforts. Local data on patient-specific interventions are stored in a centralized database and regularly updated to enable tracking and reporting for surveillance and quality-improvement purposes; aggregated data can be further analyzed for provider education and benchmarking. In a three-year pilot project, the MUET program was found effective in promoting improved prescribing of erythropoiesis-stimulating agents (ESAs) and enhanced laboratory monitoring of ESA-treated patients in all specified trigger groups. The MUET initiative has since been expanded to target other high-risk drugs, and efforts are underway to refine the tool for broader utility. The MUET application has enabled the increased standardization of medication safety initiatives across the VA system and may serve as a useful model for the development of pharmacovigilance tools by other large integrated health care systems.
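The "trigger group" idea can be sketched as a scan for patients on a monitored drug who lack recent laboratory follow-up; the thresholds and field names below are illustrative, not VAMedSAFE's actual criteria.

```python
# Invented trigger-group scan: flag ESA-treated patients whose last
# hemoglobin check is older than a monitoring window.
from datetime import date, timedelta

patients = [
    {"id": "A", "drug": "ESA", "last_hgb_check": date(2013, 1, 5)},
    {"id": "B", "drug": "ESA", "last_hgb_check": date(2013, 11, 30)},
]

def trigger_group(rows, drug, max_gap_days, today):
    cutoff = today - timedelta(days=max_gap_days)
    return [r["id"] for r in rows
            if r["drug"] == drug and r["last_hgb_check"] < cutoff]

print(trigger_group(patients, "ESA", 90, date(2013, 12, 15)))  # -> ['A']
```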
NASA Astrophysics Data System (ADS)
Deshpande, Ruchi; Thuptimdang, Wanwara; DeMarco, John; Liu, Brent J.
2014-03-01
We have built a decision support system that provides recommendations for customizing radiation therapy treatment plans, based on patient models generated from a database of retrospective planning data. This database consists of relevant metadata and information derived from the following DICOM objects - CT images, RT Structure Set, RT Dose and RT Plan. The usefulness and accuracy of such patient models partly depends on the sample size of the learning data set. Our current goal is to increase this sample size by expanding our decision support system into a collaborative framework to include contributions from multiple collaborators. Potential collaborators are often reluctant to upload even anonymized patient files to repositories outside their local organizational network in order to avoid any conflicts with HIPAA Privacy and Security Rules. We have circumvented this problem by developing a tool that can parse DICOM files on the client's side and extract de-identified numeric and text data from DICOM RT headers for uploading to a centralized system. As a result, the DICOM files containing PHI remain local to the client side. This is a novel workflow that results in adding only relevant yet valuable data from DICOM files to the centralized decision support knowledge base in such a way that the DICOM files never leave the contributor's local workstation in a cloud-based environment. Such a workflow serves to encourage clinicians to contribute data for research endeavors by ensuring protection of electronic patient data.
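The client-side extraction step can be sketched with pydicom: read the file locally, keep only an allow-list of non-identifying header fields, and upload just those values. The allow-list below is a guess at useful fields, not the authors' actual selection.

```python
# Sketch: parse a DICOM RT file locally and keep only allow-listed,
# non-PHI header values; the file itself never leaves the workstation.
import pydicom

ALLOWED = ["Modality", "RTPlanLabel", "SOPClassUID"]  # illustrative list

def extract_safe_fields(path):
    ds = pydicom.dcmread(path)
    record = {}
    for keyword in ALLOWED:
        value = getattr(ds, keyword, None)   # None if the tag is absent
        if value is not None:
            record[keyword] = str(value)
    return record

# print(extract_safe_fields("rtplan.dcm"))  # requires a local DICOM file
```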
Alavi, Seyed Mohammad; Alavi, Leila
2016-01-01
Human toxoplasmosis is an important zoonotic infection worldwide, caused by the intracellular parasite Toxoplasma gondii (T. gondii). The aim of this study was to briefly review the general aspects of toxoplasma infection in the Iranian health system network. We searched toxoplasmosis-related articles published in English and Persian databases, including Science Direct, PubMed, Scopus, Google Scholar, Magiran, IranMedex, IranDoc and the Scientific Information Database (SID). Out of 1267 articles retrieved from the English and Persian database searches, 40 articles matched our research objectives and were selected for the study. It is estimated that at least a third of the world's human population is infected with T. gondii, making it one of the most common parasitic infections throughout the world. Maternal infection during pregnancy may cause dangerous outcomes for the fetus, or even intrauterine death. Reactivation of a previous infection in an immunocompromised patient, such as in drug-induced immunosuppression, AIDS or organ transplantation, can cause life-threatening central nervous system infection. Ocular toxoplasmosis is one of the most important causes of blindness, especially in individuals with a deficient immune system. Given the increasing burden of toxoplasmosis on human health, the findings of this study highlight appropriate preventive measures, diagnosis, and management of this disease.
Kinslow, Connor J; Rajpara, Raj S; Wu, Cheng-Chia; Bruce, Samuel S; Canoll, Peter D; Wang, Shih-Hsiu; Sonabend, Adam M; Sheth, Sameer A; McKhann, Guy M; Sisti, Michael B; Bruce, Jeffrey N; Wang, Tony J C
2017-06-01
Meningeal hemangiopericytoma (m-HPC) is a rare tumor of the central nervous system (CNS), which is distinguished clinically from meningioma by its tendency to recur and metastasize. The histological classification and grading scheme for m-HPC is still evolving and few studies have identified tumor features that are associated with metastasis. All patients at our institution with m-HPC were assessed for patient, tumor, and treatment characteristics associated with survival, recurrence, and metastasis. New findings were validated using the SEER database. Twenty-seven patients were identified in our institutional records with m-HPC with a median follow-up time of 85 months. Invasiveness was the strongest predictor of decreased overall survival (OS) and decreased metastasis-free survival (MFS) (p = 0.004 and 0.001). On subgroup analysis, bone invasion trended towards decreased OS (p = 0.056). Bone invasion and soft tissue invasion were significantly associated with decreased MFS (p = 0.001 and 0.012). An additional 315 patients with m-HPC were identified in the SEER database that had information on tumor invasion and 263 with information on distant metastasis. Invasion was significantly associated with decreased survival (HR = 5.769, p = 0.007) and metastasis (OR 134, p = 0.000) in the SEER data. In this study, the authors identified a previously unreported tumor characteristic, invasiveness, as the strongest factor associated with decreased survival and metastasis. The association of invasion with decreased survival and metastasis was confirmed in a separate, larger, publicly available database. Invasion may be a useful parameter in the histological grading and clinical management of hemangiopericytoma of the CNS.
Interfacing with the nervous system: a review of current bioelectric technologies.
Sahyouni, Ronald; Mahmoodi, Amin; Chen, Jefferson W; Chang, David T; Moshtaghi, Omid; Djalilian, Hamid R; Lin, Harrison W
2017-10-23
The aim of this study is to discuss the state of the art with regard to established or promising bioelectric therapies meant to alter or control neurologic function. We present recent reports on bioelectric technologies that interface with the nervous system at three potential sites: (1) the end organ, (2) the peripheral nervous system, and (3) the central nervous system, while exploring practical and clinical considerations. A literature search was executed on the PubMed, IEEE, and Web of Science databases. A review of the current literature was conducted to examine functional and histomorphological effects of neuroprosthetic interfaces with a focus on end-organ, peripheral, and central nervous system interfaces. Innovations in bioelectric technologies are providing increasing selectivity in stimulating distinct nerve fiber populations in order to activate discrete muscles. Significant advances in electrode array design focus on increasing the selectivity, stability, and functionality of implantable neuroprosthetics. The application of neuroprosthetics to paretic nerves, or even directly stimulating or recording from the central nervous system, holds great potential in advancing the field of nerve and tissue bioelectric engineering and contributing to clinical care. Although current physiotherapeutic and surgical treatments seek to restore function, structure, or comfort, they bear significant limitations in enabling cosmetic or functional recovery. Instead, the introduction of bioelectric technology may play a role in the restoration of function in patients with neurologic deficits.
75 FR 55671 - Financial Assistance Use of Universal Identifier and Central Contractor Registration
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-14
... of Universal Identifier and Central Contractor Registration AGENCY: Office of Federal Financial...) numbers and maintain current registrations in the Central Contractor Registration (CCR) database. An... CONTRACTOR REGISTRATION Sec. Subpart A--General 25.100 Purposes of this part. 25.105 Types of awards to which...
Data mining and visualization of the Alabama accident database
DOT National Transportation Integrated Search
2000-08-01
The Alabama Department of Public Safety has developed and maintains a centralized database that contains traffic accident data collected from crash reports completed by local police officers and state troopers. The Critical Analysis Reporting Environme...
Introducing GFWED: The Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.;
2015-01-01
The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA), along with two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Agreement between gridded and station-based calculations tended to differ most at low latitudes for strictly MERRA-based calculations. Strong biases could be seen in either direction: the MERRA DC over the Mato Grosso in Brazil reached unrealistically high values exceeding DC = 1500 during the dry season, but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models.
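For orientation, here is a sketch of one day's Drought Code update following the published FWI System equations (Van Wagner 1987) as commonly stated; the coefficients are reproduced from memory, so verify against the official source before any real use. Lf is the month-dependent day-length factor, supplied by the caller.

```python
# Hedged sketch of a single-day Drought Code (DC) update.
import math

def dc_update(dc_prev, temp_c, precip_mm, Lf):
    dc = dc_prev
    if precip_mm > 2.8:                      # rain phase
        rw = 0.83 * precip_mm - 1.27         # effective rainfall
        q = 800.0 * math.exp(-dc / 400.0)    # moisture equivalent
        q += 3.937 * rw
        dc = max(400.0 * math.log(800.0 / q), 0.0)
    v = 0.36 * (temp_c + 2.8) + Lf           # potential evapotranspiration
    return dc + 0.5 * max(v, 0.0)            # drying phase

print(round(dc_update(200.0, 25.0, 0.0, 6.8), 1))   # dry midsummer day
```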
Health information and communication system for emergency management in a developing country, Iran.
Seyedin, Seyed Hesam; Jamali, Hamid R
2011-08-01
Disasters are fortunately rare occurrences. However, accurate and timely information and communication are vital to adequately prepare individual health organizations for such events. The current article investigates the health-related communication and information systems for emergency management in Iran. A mixed qualitative and quantitative methodology was used in this study. A sample of 230 health service managers was surveyed using a questionnaire, and 65 semi-structured interviews were also conducted with public health and therapeutic affairs managers who were responsible for emergency management. A range of problems were identified, including fragmentation of information, lack of local databases, lack of a clear information strategy and lack of a formal system for logging disaster-related information at the regional or local level. Recommendations were made for improving the national emergency management information and communication system. The findings have implications for health organizations in developing and developed countries, especially in the Middle East. Creating disaster-related information databases, creating protocols and standards, setting an information strategy, training staff and hosting a center for the information system in the Ministry of Health to centrally manage and share the data could improve the current information system.
Evolution of Query Optimization Methods
NASA Astrophysics Data System (ADS)
Hameurlain, Abdelkader; Morvan, Franck
Query optimization is the most critical phase in query processing. In this paper, we provide a synthetic overview of the evolution of query optimization methods, from uniprocessor relational database systems to data Grid systems, by way of parallel, distributed and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) type of modification of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).
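The six comparison parameters lend themselves to a simple data structure for cataloguing methods uniformly; the profile below is an invented illustration, not a classification taken from the paper.

```python
# Encoding the paper's comparison grid as a data structure, so optimization
# methods can be catalogued and filtered along the same six dimensions.
from dataclasses import dataclass

@dataclass
class OptimizerProfile:
    search_space_size: str     # (i)   e.g. "left-deep only", "bushy plans"
    method_type: str           # (ii)  "static" or "dynamic"
    plan_modification: str     # (iii) "re-optimization" or "re-scheduling"
    modification_level: str    # (iv)  "intra-operator", "inter-operator", "both"
    triggering_event: str      # (v)   "estimation errors", "delay", "user preferences"
    decision_making: str       # (vi)  "centralized" or "decentralized"

example = OptimizerProfile("left-deep only", "static", "re-optimization",
                           "inter-operator", "estimation errors", "centralized")
print(example)
```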
Burisch, Johan; Cukovic-Cavka, Silvija; Kaimakliotis, Ioannis; Shonová, Olga; Andersen, Vibeke; Dahlerup, Jens F; Elkjaer, Margarita; Langholz, Ebbe; Pedersen, Natalia; Salupere, Riina; Kolho, Kaija-Leena; Manninen, Pia; Lakatos, Peter Laszlo; Shuhaibar, Mary; Odes, Selwyn; Martinato, Matteo; Mihu, Ion; Magro, Fernando; Belousova, Elena; Fernandez, Alberto; Almer, Sven; Halfvarson, Jonas; Hart, Ailsa; Munkholm, Pia
2011-08-01
The EpiCom study investigates a possible East-West gradient in the incidence of IBD in Europe and its association with environmental factors. A secured web-based database is used to facilitate and centralize data registration. The aim was to construct and validate a web-based inception cohort database available in both English and Russian. The EpiCom database has been constructed in collaboration with all 34 participating centers. The database was translated into Russian using forward translation; patient questionnaires were translated by simplified forward-backward translation. Data entry covers fulfillment of international diagnostic criteria, disease activity, medical therapy, quality of life, work productivity and activity impairment, outcome of pregnancy, surgery, cancer and death. Data are secured by the WinLog3 System, developed in cooperation with the Danish Data Protection Agency. Validation of the database has been performed in two consecutive rounds, each followed by corrections in accordance with comments. The EpiCom database fulfills the requirements of the participating countries' local data security agencies by being stored at a single location. The database was found overall to be "good" or "very good" by 81% of the participants after the second validation round, and the general applicability of the database was evaluated as "good" or "very good" by 77%. During the inclusion period (January 1-December 31, 2010), 1336 IBD patients were included in the database. A user-friendly, tailor-made and secure web-based inception cohort database has been successfully constructed, facilitating remote data input. The incidence of IBD in 23 European countries can be found at www.epicom-ecco.eu. Copyright © 2011 European Crohn's and Colitis Organisation. All rights reserved.
Valstad, Mathias; Alvares, Gail A; Egknud, Maiken; Matziorinis, Anna Maria; Andreassen, Ole A; Westlye, Lars T; Quintana, Daniel S
2017-07-01
There is growing interest in the role of the oxytocin system in social cognition and behavior. Peripheral oxytocin concentrations are regularly used to approximate central concentrations in psychiatric research; however, the validity of this approach is unclear. Here we conducted a pre-registered systematic search and meta-analysis of correlations between central and peripheral oxytocin concentrations. A search of databases yielded 17 eligible studies, resulting in a total sample size of 516 participants and subjects. Overall, a positive association between central and peripheral oxytocin concentrations was revealed [r=0.29, 95% CI (0.14, 0.42), p<0.0001]. This association was moderated by experimental context [Qb(4), p=0.003]. While no association was observed under basal conditions (r=0.08, p=0.31), significant associations were observed after intranasal oxytocin administration (r=0.66, p<0.0001) and after experimentally induced stress (r=0.49, p=0.001). These results indicate a coordination of central and peripheral oxytocin release after stress and after intranasal administration. Although popular, the approach of using peripheral oxytocin levels to approximate central levels under basal conditions is not supported by the present results. Copyright © 2017 Elsevier Ltd. All rights reserved.
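For context, meta-analytic pooling of correlations is conventionally done on Fisher z-transformed values weighted by sample size. A minimal fixed-effect illustration (the three input studies below are invented, not drawn from the review, and random-effects weighting is omitted for brevity):

```python
import math

def pool_correlations(results: list[tuple[float, int]]) -> tuple[float, tuple[float, float]]:
    """Fixed-effect pooling of (r, n) pairs via Fisher's z transform."""
    num, wsum = 0.0, 0.0
    for r, n in results:
        z, w = math.atanh(r), n - 3          # sampling variance of z is 1/(n-3)
        num += w * z
        wsum += w
    z_bar = num / wsum
    se = math.sqrt(1.0 / wsum)
    ci = (math.tanh(z_bar - 1.96 * se), math.tanh(z_bar + 1.96 * se))
    return math.tanh(z_bar), ci

# Three hypothetical studies: (correlation, sample size)
print(pool_correlations([(0.10, 40), (0.55, 25), (0.30, 60)]))
```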
The COMPTEL Processing and Analysis Software system (COMPASS)
NASA Astrophysics Data System (ADS)
de Vries, C. P.; COMPTEL Collaboration
The data analysis system of the gamma-ray Compton Telescope (COMPTEL) onboard the Compton-GRO spacecraft is described. A continuous stream of data on the order of 1 kbyte per second is generated by the instrument. The data processing and analysis software is built around a relational database management system (RDBMS) in order to be able to trace the heritage and processing status of all data in the processing pipeline. Four institutes cooperate in this effort, requiring procedures to keep local RDBMS contents identical between the sites and to exchange data swiftly using network facilities. Lately, there has been a gradual move of the system from central processing facilities towards clusters of workstations.
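Tracing heritage and processing status in a relational catalogue reduces to dataset records that reference their parent datasets. A toy illustration of the idea (the schema is invented for illustration, not the actual COMPASS tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dataset (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    status    TEXT NOT NULL CHECK (status IN ('raw','processing','done')),
    parent_id INTEGER REFERENCES dataset(id)   -- processing heritage
);
INSERT INTO dataset VALUES
 (1, 'telemetry_1991_123', 'done', NULL),
 (2, 'events_1991_123',    'done', 1),
 (3, 'skymap_1991_123',    'processing', 2);
""")

# Walk the heritage chain of the sky map back to the raw telemetry.
row = con.execute("SELECT id, name, parent_id FROM dataset WHERE id = 3").fetchone()
while row:
    print(row[1])
    row = con.execute(
        "SELECT id, name, parent_id FROM dataset WHERE id = ?", (row[2],)
    ).fetchone() if row[2] else None
```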
NASA Astrophysics Data System (ADS)
Alloy, A.; Gonzalez Dominguez, F.; Nila Fonseca, A. L.; Ruangsirikulchai, A.; Gentle, J. N., Jr.; Cabral, E.; Pierce, S. A.
2016-12-01
Land subsidence caused by groundwater extraction in central Mexico's larger urban centers began in the 1980s as a consequence of population and economic growth. The city of Celaya has undergone subsidence for a few decades, and one consequence is the development of an active normal fault system that affects its urban infrastructure and residential areas. To facilitate analysis and land use decision-making, we created an online interactive map enabling users to easily obtain information associated with land subsidence. Geological and socioeconomic data for the city were collected, including fault locations and population data; other important infrastructure and structural data were obtained from fieldwork as part of a study-abroad undergraduate exchange course. The subsidence and associated faulting hazard map was created using an InSAR-derived subsidence velocity map and population data from INEGI, identifying hazard zones through a spatial analysis based on a subsidence-gradient and population risk matrix. This interactive map provides a simple perspective on different vulnerable urban elements. As an accessible visualization tool, it will enhance communication between scientific and socio-economic disciplines. Our project also lays the groundwork for a future expert analysis system: an open source, easily accessible, Python-coded, SQLite-driven website that archives fault and subsidence data along with visual documentation of damage to civil structures. This database takes field notes and provides an entry form for uniform datasets, which are used to generate JSON. Such a database is useful because it allows geoscientists to have a centralized repository and access to their observations over time. Because of the widespread presence of the subsidence phenomenon throughout cities in central Mexico, the spatial analysis has been automated using the open source software R; the raster, rgeos, shapefiles, and rgdal libraries have been used to develop the script, which produces raster maps of horizontal gradient and population density. An advantage is that this analysis can be automated for periodic updates or repurposed for similar analyses in other cities, providing an easily accessible tool for land subsidence hazard assessments.
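The hazard classification described here amounts to combining a horizontal-gradient raster of subsidence velocity with a population raster through a risk matrix. A compact numpy sketch of that idea (grids, thresholds, and cell size are invented placeholders; the project itself scripted this in R):

```python
import numpy as np

cell_size = 100.0                            # raster resolution in metres (assumed)
rng = np.random.default_rng(0)
velocity = rng.uniform(0, 8, (50, 50))       # subsidence velocity, cm/yr (placeholder)
population = rng.uniform(0, 500, (50, 50))   # inhabitants per cell (placeholder)

# Horizontal gradient of the subsidence field: steep gradients flag
# differential settlement, where faulting damage concentrates.
gy, gx = np.gradient(velocity, cell_size)
gradient = np.hypot(gx, gy)

# Simple 3x3 risk matrix: tercile of gradient crossed with tercile of population.
g_cls = np.digitize(gradient, np.quantile(gradient, [1 / 3, 2 / 3]))
p_cls = np.digitize(population, np.quantile(population, [1 / 3, 2 / 3]))
hazard = g_cls * 3 + p_cls                   # 0 (low/low) .. 8 (high/high)
print(np.bincount(hazard.ravel(), minlength=9))
```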
Evolution of grid-wide access to database resident information in ATLAS using Frontier
NASA Astrophysics Data System (ADS)
Barberis, D.; Bujor, F.; de Stefano, J.; Dewhurst, A. L.; Dykstra, D.; Front, D.; Gallas, E.; Gamboa, C. F.; Luehring, F.; Walker, R.
2012-12-01
The ATLAS experiment deployed Frontier technology worldwide during the initial year of LHC collision data taking to enable user analysis jobs running on the Worldwide LHC Computing Grid to access database-resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this presentation we focus on the specific deployment changes and improvements undertaken, such as the optimization of cache and launchpad location, the use of RPMs for more uniform deployment of underlying Frontier-related components, improvements in monitoring, optimization of fail-over, and an increasing use of a centrally managed database containing site-specific information (for configuration of services and monitoring). In addition, analysis of Frontier logs has given us a deeper understanding of problematic queries and of use cases. Use of the system has grown beyond user analysis and subsystem-specific tasks such as calibration and alignment, extending into production processing areas such as initial reconstruction and trigger reprocessing. With a more robust and tuned system, we are better equipped to satisfy the still-growing number of diverse clients and the demands of increasingly sophisticated processing and analysis.
Yager, Douglas B.; Hofstra, Albert H.; Granitto, Matthew
2012-01-01
This report emphasizes geographic information system analysis and the display of data stored in the legacy U.S. Geological Survey National Geochemical Database for use in mineral resource investigations. Geochemical analyses of soils, stream sediments, and rocks that are archived in the National Geochemical Database provide an extensive data source for investigating geochemical anomalies. A study area in the Egan Range of east-central Nevada was used to develop a geographic information system analysis methodology for two different geochemical datasets involving detailed (Bureau of Land Management Wilderness) and reconnaissance-scale (National Uranium Resource Evaluation) investigations. ArcGIS was used to analyze and thematically map geochemical information at point locations. Watershed-boundary datasets served as a geographic reference to relate potentially anomalous sample sites with hydrologic unit codes at varying scales. The National Hydrography Dataset was analyzed with Hydrography Event Management and ArcGIS Utility Network Analyst tools to delineate potential sediment-sample provenance along a stream network. These tools can be used to track potential upstream-sediment-contributing areas to a sample site. This methodology identifies geochemically anomalous sample sites, watersheds, and streams that could help focus mineral resource investigations in the field.
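Delineating upstream contributing reaches is, at bottom, a reachability query on a directed stream network, which any graph library can express. A hedged sketch with networkx (the toy flowline network is invented; the report itself used ArcGIS Utility Network Analyst, not this code):

```python
import networkx as nx

# Directed toy stream network: edges point downstream (NHD-style flowlines).
G = nx.DiGraph()
G.add_edges_from([
    ("headwater_A", "reach_1"), ("headwater_B", "reach_1"),
    ("reach_1", "reach_2"), ("tributary_C", "reach_2"),
    ("reach_2", "outlet"),
])

def upstream_provenance(graph: nx.DiGraph, sample_site: str) -> set[str]:
    """All reaches that can contribute sediment to a sample site."""
    return nx.ancestors(graph, sample_site)

print(sorted(upstream_provenance(G, "reach_2")))
# ['headwater_A', 'headwater_B', 'reach_1', 'tributary_C']
```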
Drozda, Joseph P; Roach, James; Forsyth, Thomas; Helmering, Paul; Dummitt, Benjamin; Tcheng, James E
2018-02-01
The US Food and Drug Administration (FDA) has recognized the need to improve the tracking of medical device safety and performance, with implementation of Unique Device Identifiers (UDIs) in electronic health information as a key strategy. The FDA funded a demonstration by Mercy Health wherein prototype UDIs were incorporated into its electronic information systems. This report describes the demonstration's informatics architecture. Prototype UDIs for coronary stents were created and implemented across a series of information systems, resulting in UDI-associated data flow from manufacture through point of use to long-term follow-up, with barcode scanning linking clinical data with UDI-associated device attributes. A reference database containing device attributes and the UDI Research and Surveillance Database (UDIR) containing the linked clinical and device information were created, enabling longitudinal assessment of device performance. The demonstration included many stakeholders: multiple Mercy departments, manufacturers, health system partners, the FDA, professional societies, the National Cardiovascular Data Registry, and information system vendors. The resulting system of systems is described in detail, including entities, functions, linkage between the UDIR and proprietary systems using UDIs as the index key, data flow, roles and responsibilities of actors, and the UDIR data model. The demonstration provided proof of concept that UDIs can be incorporated into provider and enterprise electronic information systems and used as the index key to combine device and clinical data in a database useful for device evaluation. Keys to success and challenges to achieving this goal were identified. Fundamental informatics principles were central to accomplishing the system of systems model. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
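The core informatics move is using the device identifier portion of the UDI as the join key between clinical records captured at the point of use and a device-attribute reference table. A schematic sketch of the index-key linkage (tables, identifiers, and field names are fabricated for illustration and are not the UDIR schema):

```python
# Device-attribute reference table keyed by the device identifier (DI) portion
# of the UDI (hypothetical values; real DIs follow issuing-agency formats such as GS1).
device_attributes = {
    "00810000000001": {"model": "Stent-X 3.0x18mm", "drug_eluting": True},
    "00810000000002": {"model": "Stent-Y 2.5x12mm", "drug_eluting": False},
}

# Clinical events as captured at the point of use by barcode scan.
clinical_events = [
    {"patient_id": "P001", "udi_di": "00810000000001", "lot": "A123"},
    {"patient_id": "P002", "udi_di": "00810000000002", "lot": "B456"},
]

# Link clinical and device data on the DI, in the spirit of the UDIR design.
linked = [{**event, **device_attributes[event["udi_di"]]} for event in clinical_events]
for row in linked:
    print(row["patient_id"], row["model"], "drug-eluting:", row["drug_eluting"])
```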
Lee, Seung Hoon; Jung, Kyu-Won; Ha, Johyun; Oh, Chang-Mo; Kim, Hyeseon; Park, Hyeon Jin; Yoo, Heon; Won, Young-Joo
2017-04-01
Malignant central nervous system (CNS) germ cell tumors (GCTs), although rare, are thought to occur more frequently among Asians. However, a recent population-based study revealed no differences in GCT incidence between Asians and Caucasians. Therefore, this study was conducted to determine the incidence and survival rates of CNS GCTs using the national cancer incidence database, and to compare these rates to those in the United States and Japan. We extracted CNS GCT patients diagnosed between 2005 and 2012 from the Korea Central Cancer Registry database. Age-standardized rates (ASRs), annual percentage change, and the male-female incidence rate ratios (IRRs) were calculated. To estimate the survival rate, we used data for patients diagnosed between 2005 and 2010 and followed their cases until December 31, 2013. The ASR for CNS GCT between 2005 and 2012 was 0.179 per 100,000 (95% confidence interval, 0.166 to 0.193), with an overall male-to-female (M:F) IRR of 2.95:1. However, when stratified by site, the M:F IRR was 13.62:1 for tumors of the pineal region and 1.87:1 for those located in nonpineal regions. The most frequent histologic type was germinoma (76.0%), and the most frequent location was the suprasellar region (48.5%). The 5-year survival rate of germinoma patients was 95.3%. The incidence rate of CNS GCTs in Korea during 2005-2012 was 0.179 per 100,000, which was similar to that of the Asian/Pacific Islander subpopulation in the United States. Moreover, the CNS GCT survival rate in Korea was similar to rates in Japan and the United States.
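For readers unfamiliar with the metric, an age-standardized rate is simply a weighted average of age-specific rates using standard-population weights. A toy computation (age bands, counts, and weights are invented; registries typically standardize to the Segi or WHO world population):

```python
# (cases, person-years) per age band, plus standard-population weights (sum to 1).
age_bands = [
    {"cases": 12, "person_years": 3_500_000, "std_weight": 0.30},  # 0-14
    {"cases": 25, "person_years": 4_000_000, "std_weight": 0.25},  # 15-29
    {"cases": 6,  "person_years": 6_500_000, "std_weight": 0.45},  # 30+
]

asr_per_100k = sum(
    (band["cases"] / band["person_years"]) * band["std_weight"]
    for band in age_bands
) * 100_000
print(f"ASR = {asr_per_100k:.3f} per 100,000")
```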
Hynes, Conor F; Ramakrishnan, Karthik; Alfares, Fahad A; Endicott, Kendal M; Hammond-Jack, Katrina; Zurakowski, David; Jonas, Richard A; Nath, Dilip S
2017-04-01
We analyzed the UNOS database to better define the risk of transmission of central nervous system (CNS) tumors from donors to adult recipients of thoracic organs. Data were procured from the Standard Transplant Analysis and Research dataset files. Donors with CNS tumors were identified, and recipients from these donors comprised the study group (Group I). The remaining recipients of organs from donors who did not have CNS tumors formed the control group (Group II). Incidence of recipient CNS tumors, donor-related malignancies, and overall survival were calculated and compared, in addition to multivariable logistic regression. A cohort of 58 314 adult thoracic organ recipients was included, of which 337 received organs from donors who had documented CNS tumors (Group I). None of these recipients developed CNS tumors at a median follow-up of 72 months (IQR: 30-130 months). Although overall mortality was higher in Group I than Group II (163/320 = 51% vs 22 123/52 691 = 42%), Kaplan-Meier curves indicate no significant difference in the time to death between the two groups (P=.92). There is little risk of transmission of the common nonaggressive CNS tumors to recipients of thoracic organs. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Factors Associated With Mortality of Thyroid Storm
Ono, Yosuke; Ono, Sachiko; Yasunaga, Hideo; Matsui, Hiroki; Fushimi, Kiyohide; Tanaka, Yuji
2016-01-01
Thyroid storm is a life-threatening and emergent manifestation of thyrotoxicosis. However, predictive features associated with fatal outcomes in this crisis have not been clearly defined because of its rarity. The objective of this study was to investigate the associations of patient characteristics, treatments, and comorbidities with in-hospital mortality. We conducted a retrospective observational study of patients diagnosed with thyroid storm using a national inpatient database in Japan from April 1, 2011 to March 31, 2014. Of approximately 21 million inpatients in the database, we identified 1324 patients diagnosed with thyroid storm. The mean (standard deviation) age was 47 (18) years, and 943 (71.3%) patients were female. The overall in-hospital mortality was 10.1%. The number of patients was highest in the summer season. The most common comorbidity at admission was cardiovascular diseases (46.6%). Multivariable logistic regression analyses showed that higher mortality was significantly associated with older age (≥60 years), central nervous system dysfunction at admission, nonuse of antithyroid drugs and β-blockade, and requirement for mechanical ventilation and therapeutic plasma exchange combined with hemodialysis. The present study identified clinical features associated with mortality of thyroid storm using large-scale data. Physicians should pay special attention to older patients with thyrotoxicosis and coexisting central nervous system dysfunction. Future prospective studies are needed to clarify treatment options that could improve the survival outcomes of thyroid storm. PMID:26886648
Geologic Map of the Wenatchee 1:100,000 Quadrangle, Central Washington: A Digital Database
Tabor, R.W.; Waitt, R.B.; Frizzell, V.A.; Swanson, D.A.; Byerly, G.R.; Bentley, R.D.
2005-01-01
This digital map database has been prepared by R.W. Tabor from the published Geologic map of the Wenatchee 1:100,000 Quadrangle, Central Washington. Together with the accompanying text files as PDF, it provides information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The authors mapped most of the bedrock geology at 1:100,000 scale, but compiled Quaternary units at 1:24,000 scale. The Quaternary contacts and structural data have been much simplified for the 1:100,000-scale map and database. The spatial resolution (scale) of the database is 1:100,000 or smaller. This database depicts the distribution of geologic materials and structures at a regional (1:100,000) scale. The report is intended to provide geologic information for the regional study of materials properties, earthquake shaking, landslide potential, mineral hazards, seismic velocity, and earthquake faults. In addition, the report contains information and interpretations about the regional geologic history and framework. However, the regional scale of this report does not provide sufficient detail for site development purposes.
Bertollo, David N; Alexander, Mary Jane; Shinn, Marybeth; Aybar, Jalila B
2007-06-01
This column describes the nonproprietary software Talker, used to adapt screening instruments to audio computer-assisted self-interviewing (ACASI) systems for low-literacy and other populations. Talker supports ease of programming, multiple languages, on-site scoring, and the ability to update a central research database. Key features include highly readable text display, audio presentation of questions and audio prompting of answers, and optional touch screen input. The scripting language for adapting instruments is briefly described, as are two studies in which respondents provided positive feedback on its use.
NASA Astrophysics Data System (ADS)
Barette, Florian; Poppe, Sam; Smets, Benoît; Benbakkar, Mhammed; Kervyn, Matthieu
2017-10-01
We present an integrated, spatially-explicit database of existing geochemical major-element analyses available from (post-)colonial scientific reports, PhD theses and international publications for the Virunga Volcanic Province, located in the western branch of the East African Rift System. This volcanic province is characterised by alkaline volcanism, including silica-undersaturated, alkaline and potassic lavas. The database contains a total of 908 geochemical analyses of eruptive rocks for the entire volcanic province, with location information for most samples. A preliminary analysis of the overall consistency of the database, using statistical techniques on sets of geochemical analyses with contrasting analytical methods or dates, demonstrates that the database is consistent. We applied principal component analysis and cluster analysis to the whole-rock major-element compositions included in the database to study the spatial variation of the chemical composition of eruptive products in the Virunga Volcanic Province. These statistical analyses identify spatially distributed clusters of eruptive products. The known geochemical contrasts are highlighted by the spatial analysis, such as the unique geochemical signature of Nyiragongo lavas compared to other Virunga lavas, the geochemical heterogeneity of the Bulengo area, and the trachyte flows of Karisimbi volcano. Most importantly, we identified separate clusters of eruptive products that originate from primitive magmatic sources. These lavas of primitive composition are preferentially located along NE-SW inherited rift structures, often at a distance from the central Virunga volcanoes. Our results illustrate the relevance of spatial analysis of integrated geochemical data for a volcanic province, as a complement to classical petrological investigations. This approach indeed helps to characterise geochemical variations within a complex of magmatic systems and to identify specific petrologic and geochemical investigations that should be tackled within a study area.
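The clustering workflow can be reproduced on any table of whole-rock major-element analyses. A minimal sketch with scikit-learn (the four-row array stands in for the 908-analysis database; the column set, invented values, and cluster count are illustrative only):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Columns: SiO2, TiO2, Al2O3, FeOt, MgO, CaO, Na2O, K2O (wt%, invented values)
compositions = np.array([
    [43.1, 2.9, 11.8, 12.1, 10.5, 11.9, 3.1, 2.4],
    [44.0, 3.1, 12.2, 11.8,  9.8, 11.2, 3.4, 2.9],
    [36.9, 2.6,  9.9, 13.0,  7.2, 13.8, 5.0, 4.6],   # Nyiragongo-like foidite
    [60.2, 0.8, 17.5,  6.1,  1.2,  2.9, 5.6, 5.3],   # trachyte-like
])

X = StandardScaler().fit_transform(compositions)      # standardize each oxide
scores = PCA(n_components=2).fit_transform(X)         # reduce to 2 components
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(labels)
```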
Li, Wan; Chen, Lina; Li, Xia; Jia, Xu; Feng, Chenchen; Zhang, Liangcai; He, Weiming; Lv, Junjie; He, Yuehan; Li, Weiguo; Qu, Xiaoli; Zhou, Yanyan; Shi, Yuchen
2013-12-01
Network motifs in central positions are considered not only to have more incoming and outgoing connections but also to be localized in areas reached by more paths through the network. These central motifs have been extensively investigated to determine their consistent functions or associations with specific function categories. However, their functional potential in maintaining cross-talk between different functional communities is unclear. In this paper, we constructed an integrated human signaling network from the Pathway Interaction Database. We identified 39 essential cancer-related motifs in central roles, which we called cancer-related marketing centrality motifs, using combined centrality indices at the system level. Our results demonstrated that these cancer-related marketing centrality motifs were pivotal units in the signaling network and could mediate cross-talk between 61 biological pathways (25 could be mediated by one motif on average), most of which were cancer-related pathways. Further analysis showed that the molecules of most marketing centrality motifs were in the same or adjacent subcellular localizations, such as the motif containing PI3K, PDK1 and AKT1 in the plasma membrane, which mediates signal transduction between 32 cancer-related pathways. Finally, we analyzed the pivotal roles of cancer genes in these marketing centrality motifs in the pathogenesis of cancers, and found that non-cancer genes in these motifs were potential cancer-related genes.
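The notion of combining centrality indices can be illustrated at the network level with networkx; the plain average used below is our simplification for illustration, not the authors' combined index, and the built-in example graph stands in for the integrated signaling network:

```python
import networkx as nx

G = nx.karate_club_graph()   # stand-in for the integrated signaling network

deg = nx.degree_centrality(G)
bet = nx.betweenness_centrality(G)
clo = nx.closeness_centrality(G)

# Naive combined centrality: mean of the three normalized indices per node.
combined = {n: (deg[n] + bet[n] + clo[n]) / 3 for n in G}
top = sorted(combined, key=combined.get, reverse=True)[:5]
print("most central nodes:", top)
```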
Trend of earlier spring in central Europe continued
NASA Astrophysics Data System (ADS)
Ungersböck, Markus; Jurkovic, Anita; Koch, Elisabeth; Lipa, Wolfgang; Scheifinger, Helfried; Zach-Hermann, Susanne
2013-04-01
Modern phenology is the study of the timing of recurring biological events in the animal and plant world, the causes of their timing with regard to biotic and abiotic forces, and the interrelation among phases of the same or different species. The relationship between phenology and climate explains the importance of plant phenology for climate change studies. Plants require light, water, oxygen, mineral nutrients and suitable temperatures to grow. In temperate zones the seasonal life cycle of plants is primarily controlled by temperature and day length. Higher spring air temperatures are resulting in an earlier onset of the phenological spring in temperate and cool climates. Conversely, changes in phenology due to climate change have an impact on the climate system itself: vegetation is a dynamic factor in the earth-climate system, with positive and negative feedback mechanisms on the biogeochemical and biogeophysical fluxes to the atmosphere. Since the mid-1980s spring has arrived earlier in Europe and autumn has shifted towards the end of the year, resulting in a longer vegetation period. The advancement of spring can be clearly attributed to the temperature increase in the months prior to leaf unfolding and flowering; the timing of autumn is more complex and cannot easily be attributed to one or a few parameters. To demonstrate that the observed advancement of spring since the mid-1980s continued in 2001 to 2010, and that the delay of fall and the lengthening of the growing season were confirmed in the last decade, we picked several indicator plants from the PEP725 database (www.pep725.eu). The PEP725 database collects data from different European network operators and thus offers a unique compilation of phenological observations; the database is regularly updated. The data follow the same classification scheme, the so-called BBCH coding system, so they can be compared. Lilac (Syringa vulgaris), birch (Betula pendula), beech (Fagus) and horse chestnut (Aesculus hippocastanum) are well represented in the PEP725 database. Flowering of lilac is also used in the US as a spring indicator. The flowering and/or leaf unfolding dates of lilac and horse chestnut show a clear advance in the last two decades, 1991 to 2000 and 2001 to 2010, compared with the reference period 1961 to 1990, being more pronounced in northwestern regions of Central Europe. The growing season, defined here as the time span between leaf unfolding and leaf coloration of birch and beech, lengthened by up to two weeks in 2001 to 2010 compared to 1961 to 1990 in northeastern parts of Central Europe.
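Decadal comparisons of this kind reduce to averaging onset day-of-year over periods and differencing against the 1961-1990 reference. A schematic sketch (synthetic onset dates, not PEP725 data):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1961, 2011)
# Synthetic leaf-unfolding day of year: a slow advance plus observation noise.
onset = 120 - 0.25 * (years - 1961) + rng.normal(0, 4, years.size)

def period_mean(y0: int, y1: int) -> float:
    mask = (years >= y0) & (years <= y1)
    return onset[mask].mean()

ref = period_mean(1961, 1990)
for y0, y1 in [(1991, 2000), (2001, 2010)]:
    shift = period_mean(y0, y1) - ref
    print(f"{y0}-{y1}: {shift:+.1f} days vs 1961-1990")
```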
Jones, Jeb; Raiff, Bethany R; Dallery, Jesse
2010-08-01
Several studies have indicated that nicotine increases responding maintained by conditioned reinforcers. We assessed the effects of subcutaneous injections of 0.3 mg/kg nicotine and two nicotinic antagonists on responding maintained by conditioned and primary reinforcers and responding during extinction in 8 Long Evans rats. Mecamylamine, a central and peripheral nicotinic antagonist, and hexamethonium, a peripheral nicotinic antagonist, were administered prior to a subset of the experimental sessions. Nicotine selectively increased responding maintained by conditioned reinforcers, and mecamylamine, but not hexamethonium, attenuated this effect. These results suggest that nicotine's enhancing effect on responding maintained by conditioned reinforcers is mediated in the central nervous system. PsycINFO Database Record © 2010 APA, all rights reserved.
Unwin, Ian; Jansen-van der Vliet, Martine; Westenbrink, Susanne; Presser, Karl; Infanger, Esther; Porubska, Janka; Roe, Mark; Finglas, Paul
2016-02-15
The EuroFIR Document and Data Repositories are being developed as accessible collections of source documents, including grey literature, and the food composition data reported in them. These Repositories will contain source information available to food composition database compilers when selecting their nutritional data. The Document Repository was implemented as searchable bibliographic records in the Europe PubMed Central database, which links to the documents online. The Data Repository will contain original data from source documents in the Document Repository. Testing confirmed the FoodCASE food database management system as a suitable tool for the input, documentation and quality assessment of Data Repository information. Data management requirements for the input and documentation of reported analytical results were established, including record identification and method documentation specifications. Document access and data preparation using the Repositories will provide information resources for compilers, eliminating duplicated work and supporting unambiguous referencing of data contributing to their compiled data. Copyright © 2014 Elsevier Ltd. All rights reserved.
EPA Facility Registry Service (FRS): OIL
This dataset contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Oil database. The Oil database contains information on Spill Prevention, Control, and Countermeasure (SPCC) and Facility Response Plan (FRP) subject facilities to prevent and respond to oil spills. FRP facilities are referred to as substantial harm facilities due to the quantities of oil stored and facility characteristics. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to Oil facilities once the Oil data has been integrated into the FRS database. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.
A database of the coseismic effects following the 30 October 2016 Norcia earthquake in Central Italy
Villani, Fabio; Civico, Riccardo; Pucci, Stefano; Pizzimenti, Luca; Nappi, Rosa; De Martini, Paolo Marco; Agosta, F.; Alessio, G.; Alfonsi, L.; Amanti, M.; Amoroso, S.; Aringoli, D.; Auciello, E.; Azzaro, R.; Baize, S.; Bello, S.; Benedetti, L.; Bertagnini, A.; Binda, G.; Bisson, M.; Blumetti, A.M.; Bonadeo, L.; Boncio, P.; Bornemann, P.; Branca, S.; Braun, T.; Brozzetti, F.; Brunori, C.A.; Burrato, P.; Caciagli, M.; Campobasso, C.; Carafa, M.; Cinti, F.R.; Cirillo, D.; Comerci, V.; Cucci, L.; De Ritis, R.; Deiana, G.; Del Carlo, P.; Del Rio, L.; Delorme, A.; Di Manna, P.; Di Naccio, D.; Falconi, L.; Falcucci, E.; Farabollini, P.; Faure Walker, J.P.; Ferrarini, F.; Ferrario, M.F.; Ferry, M.; Feuillet, N.; Fleury, J.; Fracassi, U.; Frigerio, C.; Galluzzo, F.; Gambillara, R.; Gaudiosi, G.; Goodall, H.; Gori, S.; Gregory, L.C.; Guerrieri, L.; Hailemikael, S.; Hollingsworth, J.; Iezzi, F.; Invernizzi, C.; Jablonská, D.; Jacques, E.; Jomard, H.; Kastelic, V.; Klinger, Y.; Lavecchia, G.; Leclerc, F.; Liberi, F.; Lisi, A.; Livio, F.; Lo Sardo, L.; Malet, J.P.; Mariucci, M.T.; Materazzi, M.; Maubant, L.; Mazzarini, F.; McCaffrey, K.J.W.; Michetti, A.M.; Mildon, Z.K.; Montone, P.; Moro, M.; Nave, R.; Odin, M.; Pace, B.; Paggi, S.; Pagliuca, N.; Pambianchi, G.; Pantosti, D.; Patera, A.; Pérouse, E.; Pezzo, G.; Piccardi, L.; Pierantoni, P.P.; Pignone, M.; Pinzi, S.; Pistolesi, E.; Point, J.; Pousse, L.; Pozzi, A.; Proposito, M.; Puglisi, C.; Puliti, I.; Ricci, T.; Ripamonti, L.; Rizza, M.; Roberts, G.P.; Roncoroni, M.; Sapia, V.; Saroli, M.; Sciarra, A.; Scotti, O.; Skupinski, G.; Smedile, A.; Soquet, A.; Tarabusi, G.; Tarquini, S.; Terrana, S.; Tesson, J.; Tondi, E.; Valentini, A.; Vallone, R.; Van der Woerd, J.; Vannoli, P.; Venuti, A.; Vittori, E.; Volatili, T.; Wedmore, L.N.J.; Wilkinson, M.; Zambrano, M.
2018-01-01
We provide a database of the coseismic geological surface effects following the Mw 6.5 Norcia earthquake that hit central Italy on 30 October 2016. This was one of the strongest seismic events to occur in Europe in the past thirty years, causing complex surface ruptures over an area of >400 km2. The database originated from the collaboration of several European teams (Open EMERGEO Working Group; about 130 researchers) coordinated by the Istituto Nazionale di Geofisica e Vulcanologia. The observations were collected by performing detailed field surveys in the epicentral region in order to describe the geometry and kinematics of surface faulting, and subsequently of landslides and other secondary coseismic effects. The resulting database consists of homogeneous georeferenced records identifying 7323 observation points, each of which contains 18 numeric and string fields of relevant information. This database will impact future earthquake studies focused on modelling of the seismic processes in active extensional settings, updating probabilistic estimates of slip distribution, and assessing the hazard of surface faulting. PMID:29583143
KernPaeP - a web-based pediatric palliative documentation system for home care.
Hartz, Tobias; Verst, Hendrik; Ueckert, Frank
2009-01-01
KernPaeP is a new web-based on- and offline documentation system developed for pediatric palliative care teams, supporting patient documentation and communication among health care professionals. It provides a reliable system that makes fast and secure home care documentation possible. KernPaeP is accessible online by registered users using any web browser. Home care teams use an offline version of KernPaeP running on a netbook for patient documentation on site. Identifying and medical patient data are strictly separated and stored on two database servers. The system offers a stable, enhanced two-way algorithm for synchronization between the offline component and the central database servers. KernPaeP is implemented to meet the highest security standards while still maintaining high usability. The web-based documentation system allows ubiquitous and immediate access to patient data. Cumbersome paperwork is replaced by secure and comprehensive electronic documentation. KernPaeP helps save time and improve the quality of documentation. Having been developed in close cooperation with pediatric palliative professionals, KernPaeP fulfils the broad needs of home-care documentation. The technique of web-based online and offline documentation is in general applicable to arbitrary home care scenarios.
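The strict separation of identifying and medical data described here is commonly realized by keying both stores on an opaque pseudonym. A toy sketch of the principle (not KernPaeP's actual implementation; in production the two dictionaries would live on separate servers):

```python
import secrets

identity_db = {}   # identifying data: would live on server 1
medical_db = {}    # medical data: would live on server 2

def admit_patient(name: str, diagnosis: str) -> str:
    pseudonym = secrets.token_hex(8)          # opaque join key
    identity_db[pseudonym] = {"name": name}
    medical_db[pseudonym] = {"diagnosis": diagnosis}
    return pseudonym

pid = admit_patient("Jane Doe", "palliative care, home setting")
# Either database alone is pseudonymous; only holders of the key can re-link.
print(medical_db[pid])
```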
The IAGOS Information System: From the aircraft measurements to the users.
NASA Astrophysics Data System (ADS)
Boulanger, Damien; Thouret, Valérie; Cammas, Jean-Pierre; Petzold, Andreas; Volz-Thomas, Andreas; Gerbig, Christoph; Brenninkmeijer, Carl A. M.
2013-04-01
IAGOS (In-service Aircraft for a Global Observing System, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in-situ observations of atmospheric chemical composition throughout the troposphere and in the UTLS. It builds on almost 20 years of scientific and technological expertise gained in the research projects MOZAIC (Measurement of Ozone and Water Vapour on Airbus In-service Aircraft) and CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container). The European consortium includes research centres, universities, national weather services, airline operators and the aviation industry. IAGOS consists of two complementary building blocks providing a unique global observation system: IAGOS-CORE deploys newly developed instrumentation for regular in-situ measurements of atmospheric chemical species, both reactive and greenhouse gases (O3, CO, NOx, NOy, H2O, CO2, CH4), aerosols and cloud particles. In IAGOS-CARIBIC a cargo container is deployed monthly as a flying laboratory aboard one aircraft. Involved airlines ensure global operation of the network. Today, 5 aircraft are flying with the MOZAIC (3) or IAGOS-CORE (2) instrumentation: 3 aircraft from Lufthansa, 1 from Air Namibia, and 1 from China Airlines Taiwan. A main improvement and new aspect of the IAGOS-CORE instrumentation compared to MOZAIC is the delivery of raw data in near real time (i.e., data are transmitted as soon as the aircraft lands). After a first, quick validation of the O3 and CO measurements, preliminary data are made available in the central database for both the MACC project (Monitoring Atmospheric Composition and Climate) and scientific research groups. In addition to recorded measurements, the database also contains added-value products such as meteorological information (tropopause height, air mass backtrajectories) and Lagrangian model outputs (FLEXPART). Data access is handled by an open access policy based on the submission of research requests, which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr, as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The MOZAIC-IAGOS database today contains more than 35,000 flights, covering mostly the northern hemisphere mid-latitudes but with reduced representation of the Pacific region. The recently equipped China Airlines Taiwan aircraft started filling this gap in July 2012. Aircraft from Air France, Cathay Pacific and Iberia, scheduled to be equipped in 2013, will cover the Asia-Oceania sector and Europe-South America transects. The database, as well as the research infrastructure itself, is under continuous development and improvement. In the framework of the newly starting IGAS project (IAGOS for GMES Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC data integration within the central database, and real-time data transmission.
Pereira, Filipa; Salvi, Mireille; Verloo, Henk
2017-08-01
The adoption of evidence-based practice (EBP) is promoted because it is widely recognized for improving the quality and safety of health care for patients, and reducing avoidable costs. Providers of primary care face numerous challenges to ensuring the effectiveness of their daily practices. Primary health care is defined as: the entry level into a health care services system, providing a first point of contact for all new needs and problems; patient-focused (not disease-oriented) care over time; care for all but the most uncommon or unusual conditions; and coordination or integration of care, regardless of where or by whom that care is delivered. Primary health care is the principal means by which to approach the main goal of any health care services system: optimization of health status. This review aims to scope publications examining beliefs, knowledge, implementation, and integration of EBPs among primary health care providers (HCPs). We will conduct a systematic scoping review of published articles in the following electronic databases, from their start dates until March 31, 2017: Medical Literature Analysis and Retrieval System Online (MEDLINE) via PubMed (from 1946), Embase (from 1947), Cumulative Index to Nursing and Allied Health Literature (CINAHL; from 1937), the Cochrane Central Register of Controlled Trials (CENTRAL; from 1992), PsycINFO (from 1806), Web of Science (from 1900), Joanna Briggs Institute (JBI) database (from 1998), Database of Abstracts of Reviews of Effects (DARE; from 1996), Trip medical database (from 1997), and relevant professional scientific journals (from their start dates). We will use the predefined search terms of, "evidence-based practice" and, "primary health care" combined with other terms, such as, "beliefs", "knowledge", "implementation", and "integration". We will also conduct a hand search of the bibliographies of all relevant articles and a search for unpublished studies using Google Scholar, ProQuest, Mednar, and WorldCat. We will consider publications in English, French, Spanish, and Portuguese. The electronic database searches were completed in April 2017. Retrieved articles are currently being screened, and the entire study is expected to be completed by November 2017. This systematic scoping review will provide a greater understanding of the beliefs, knowledge, implementation, and integration of EBPs among primary HCPs. The findings will inform clinical practice and help to draw a global picture of the EBP research topics that are relevant to primary care providers. ©Filipa Pereira, Mireille Salvi, Henk Verloo. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 01.08.2017.
Salemi, Jason L; Salinas-Miranda, Abraham A; Wilson, Roneé E; Salihu, Hamisu M
2015-01-01
Objective: To describe the use of a clinically enhanced maternal and child health (MCH) database to strengthen community-engaged research activities, and to support the sustainability of data infrastructure initiatives. Data Sources/Study Setting: Population-based, longitudinal database covering over 2.3 million mother–infant dyads during a 12-year period (1998–2009) in Florida. Setting: A community-based participatory research (CBPR) project in a socioeconomically disadvantaged community in central Tampa, Florida. Study Design: Case study of the use of an enhanced state database for supporting CBPR activities. Principal Findings: A federal data infrastructure award resulted in the creation of an MCH database in which over 92 percent of all birth certificate records for infants born between 1998 and 2009 were linked to maternal and infant hospital encounter-level data. The population-based, longitudinal database was used to supplement data collected from focus groups and community surveys with epidemiological and health care cost data on important MCH disparity issues in the target community. Data were used to facilitate a community-driven decision-making process in which the most important priorities for intervention were identified. Conclusions: Integrating statewide all-payer, hospital-based databases into CBPR can empower underserved communities with a reliable source of health data, and it can promote the sustainability of newly developed data systems. PMID:25879276
Aghayev, Emin; Staub, Lukas; Dirnhofer, Richard; Ambrose, Tony; Jackowski, Christian; Yen, Kathrin; Bolliger, Stephan; Christe, Andreas; Roeder, Christoph; Aebi, Max; Thali, Michael J
2008-04-01
Recent developments in clinical radiology have driven additional developments in the field of forensic radiology. After the implementation of cross-sectional radiology and optical surface documentation in forensic medicine, difficulties in the validation and analysis of the acquired data were experienced. To address this problem, and to compare autopsy and radiological data, a centralized, internet-based database for forensic cases was created. The main goals of the database are (1) creation of a digital and standardized documentation tool for forensic-radiological and pathological findings; (2) establishment of a basis for validation of forensic cross-sectional radiology as a non-invasive examination method in forensic medicine, that is, comparing and evaluating the radiological and autopsy data and analyzing the accuracy of such data; and (3) provision of a conduit for continuing research and education in forensic medicine. Considering the infrequent availability of CT or MRI to forensic institutions and the heterogeneous nature of case material in forensic medicine, an evaluation of the benefits and limitations of cross-sectional imaging for certain forensic features by a single institution may be of limited value. A centralized database permitting international forensic and cross-disciplinary collaborations may provide important support for forensic-radiological casework and research.
Measurement Properties of the Central Sensitization Inventory: A Systematic Review.
Scerbo, Thomas; Colasurdo, Joseph; Dunn, Sally; Unger, Jacob; Nijs, Jo; Cook, Chad
2018-04-01
Central sensitization (CS) is a phenomenon associated with several medical diagnoses, including postcancer pain, low back pain, osteoarthritis, whiplash, and fibromyalgia. CS involves an amplification of neural signaling within the central nervous system that results in pain hypersensitivity. The purpose of this systematic review was to gather published studies of a widely used outcome measure (the Central Sensitization Inventory [CSI]), determine the quality of evidence these publications reported, and examine the measurement properties of the CSI. Four databases were searched for publications from 2011 (when the CSI was developed) to July 2017. The Consensus-Based Standards for the Selection of Health Measurement Instruments (COSMIN) checklist was applied to evaluate methodological quality and risk of bias. In instances when COSMIN did not offer a scoring system for measurement properties, qualitative analyses were performed. Fourteen studies met inclusion criteria. Quality of evidence examined with the COSMIN checklist was determined to be good to excellent for all studies for their respective measurement property reports. Interpretability measures were consistent when publications were analyzed qualitatively, and construct validity was strong when examined alongside other validated measures relating to CS. An assessment of the published measurement studies of the CSI suggest the tool generates reliable and valid data that quantify the severity of several symptoms of CS. © 2017 World Institute of Pain.
FBI Fingerprint Image Capture System High-Speed-Front-End throughput modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rathke, P.M.
1993-09-01
The Federal Bureau of Investigation (FBI) has undertaken a major modernization effort called the Integrated Automated Fingerprint Identification System (IAFIS). This system will provide centralized identification services using automated fingerprint, subject descriptor, mugshot, and document processing. A high-speed Fingerprint Image Capture System (FICS) is under development as part of the IAFIS program. The FICS will capture digital and microfilm images of FBI fingerprint cards for input into a central database. One FICS design supports two front-end scanning subsystems, known as the High-Speed-Front-End (HSFE) and Low-Speed-Front-End, to supply image data to a common data processing subsystem. The production rate of the HSFE is critical to meeting the FBI's fingerprint card processing schedule. A model of the HSFE has been developed to help identify the issues driving the production rate, assist in the development of component specifications, and guide the evolution of an operations plan. A description of the model development is given, the assumptions are presented, and some HSFE throughput analysis is performed.
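Throughput models of a scanning front end like the HSFE are often expressed as a serial pipeline whose rate is set by the slowest stage, discounted by availability. A generic sketch of that style of model (stage names, rates, and uptimes are invented, not the report's figures):

```python
# Each stage: (name, cards per hour when running, availability fraction).
stages = [
    ("card transport", 1200, 0.95),
    ("camera capture",  900, 0.98),
    ("image compress", 1000, 0.99),
]

# Effective rate of each stage, derated by its availability.
effective = {name: rate * avail for name, rate, avail in stages}
bottleneck = min(effective, key=effective.get)
throughput = effective[bottleneck]      # pipeline runs at the slowest stage

shift_hours = 16                        # two-shift operation (assumed)
print(f"bottleneck: {bottleneck}, ~{throughput:.0f} cards/h, "
      f"{throughput * shift_hours:.0f} cards/day")
```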
The MIGenAS integrated bioinformatics toolkit for web-based sequence analysis
Rampp, Markus; Soddemann, Thomas; Lederer, Hermann
2006-01-01
We describe a versatile and extensible integrated bioinformatics toolkit for the analysis of biological sequences over the Internet. The web portal offers convenient interactive access to a growing pool of chainable bioinformatics software tools and databases that are centrally installed and maintained by the RZG. Currently, supported tasks comprise sequence similarity searches in public or user-supplied databases, computation and validation of multiple sequence alignments, phylogenetic analysis and protein–structure prediction. Individual tools can be seamlessly chained into pipelines allowing the user to conveniently process complex workflows without the necessity to take care of any format conversions or tedious parsing of intermediate results. The toolkit is part of the Max-Planck Integrated Gene Analysis System (MIGenAS) of the Max Planck Society available at (click ‘Start Toolkit’). PMID:16844980
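The toolkit's seamless chaining of tools, with no intermediate format conversion, is essentially function composition over a shared data format. A generic sketch of the pattern (function names and record format are illustrative stand-ins, not the MIGenAS API):

```python
from functools import reduce
from typing import Callable

# Each stage consumes and produces a common in-memory record format,
# so no intermediate parsing is needed between tools.
def similarity_search(seqs: list[str]) -> list[str]:
    return seqs + ["homolog_1", "homolog_2"]             # placeholder hits

def align(seqs: list[str]) -> dict:
    return {"alignment": seqs}                           # placeholder MSA

def build_tree(msa: dict) -> str:
    return f"tree over {len(msa['alignment'])} sequences"  # placeholder

def chain(*stages: Callable):
    """Compose stages left to right into one pipeline callable."""
    return lambda data: reduce(lambda d, s: s(d), stages, data)

pipeline = chain(similarity_search, align, build_tree)
print(pipeline(["MKVLAA...", "MKILAG..."]))
```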
The Iranian National Geodata Revision Strategy and Realization Based on Geodatabase
NASA Astrophysics Data System (ADS)
Haeri, M.; Fasihi, A.; Ayazi, S. M.
2012-07-01
In recent years, the use of spatial databases for storing and managing spatial data has become a hot topic in the field of GIS. Accordingly, the National Cartographic Center of Iran (NCC) periodically produces spatial data, which are usually included in databases. One of NCC's major projects was the design of the National Topographic Database (NTDB). NCC decided to create a national topographic database of the entire country based on 1:25,000 coverage maps. The standard of the NTDB was published in 1994 and its database was created at the same time. In the NTDB, geometric data were stored in MicroStation design format (DGN), with each feature linked to its attribute data (stored in a Microsoft Access file). NTDB files were produced sheet-wise and stored in a file-based style. Besides map compilation, revision of existing maps has already started. Key problems for NCC are the revision strategy, the NTDB's file-based storage, and operator challenges (NCC operators mostly prefer to edit and revise geometric data in CAD environments). A GeoDatabase solution for national geodata, based on NTDB map files and operators' revision preferences, is introduced and released herein. The proposed solution extends the traditional methods to a seamless spatial database that can be revised in CAD and GIS environments simultaneously. The proposed system is the common data framework for creating a central data repository for spatial data storage and management.
Epstein, Richard H; Dexter, Franklin; Gratch, David M; Perino, Michael; Magrann, Jerry
2016-06-01
Accurate accounting of controlled drug transactions by inpatient hospital pharmacies is a requirement in the United States under the Controlled Substances Act. At many hospitals, manual distribution of controlled substances from pharmacies is being replaced by automated dispensing cabinets (ADCs) at the point of care. Despite the promise of improved accountability, a high prevalence (15%) of controlled substance discrepancies between ADC records and anesthesia information management systems (AIMS) has been published, with a similar incidence (15.8%; 95% confidence interval [CI], 15.3% to 16.2%) noted at our institution. Most reconciliation errors are clerical. In this study, we describe a method to capture drug transactions in near real-time from our ADCs, compare them with documentation in our AIMS, and evaluate subsequent improvement in reconciliation accuracy. ADC-controlled substance transactions are transmitted to a hospital interface server, parsed, reformatted, and sent to a software script written in Perl. The script extracts the data and writes them to a SQL Server database. Concurrently, controlled drug totals for each patient having care are documented in the AIMS and compared with the balance of the ADC transactions (i.e., vending, transferring, wasting, and returning drug). Every minute, a reconciliation report is available to anesthesia providers over the hospital Intranet from AIMS workstations. The report lists all patients, the current provider, the balance of ADC transactions, the totals from the AIMS, the difference, and whether the case is still ongoing or had concluded. Accuracy and latency of the ADC transaction capture process were assessed via simulation and by comparison with pharmacy database records, maintained by the vendor on a central server located remotely from the hospital network. For assessment of reconciliation accuracy over time, data were collected from our AIMS from January 2012 to June 2013 (Baseline), July 2013 to April 2014 (Next Day Reports), and May 2014 to September 2015 (Near Real-Time Reports) and reconciled against pharmacy records from the central pharmacy database maintained by the vendor. Control chart (batch means) methods were used between successive epochs to determine if improvement had taken place. During simulation, 100% of 10,000 messages, transmitted at a rate of 1295 per minute, were accurately captured and inserted into the database. Latency (transmission time to local database insertion time) was 46.3 ± 0.44 milliseconds (SEM). During acceptance testing, only 1 of 1384 transactions analyzed had a difference between the near real-time process and what was in the central database; this was for a "John Doe" patient whose name had been changed subsequent to data capture. Once a transaction was entered at the ADC workstation, 84.9% (n = 18 bins; 95% CI, 78.4% to 91.3%) of these transactions were available in the database on the AIMS server within 2 minutes. Within 5 minutes, 98.2% (n = 18 bins; 95% CI, 97.2% to 99.3%) were available. Among 145,642 transactions present in the central pharmacy database, only 24 were missing from the local database table (mean = 0.018%; 95% CI, 0.002% to 0.034%). Implementation of near real-time reporting improved the controlled substance reconciliation error rate compared to the previous Next Day Reports epoch, from 8.8% to 5.2% (difference = -3.6%; 95% CI, -4.3% to -2.8%; P < 10). 
Errors were distributed among staff, with 50% of discrepancies accounted for by 12.4% of providers and 80% accounted for by 28.5% of providers executing transactions during the Near Real-Time Reports epoch. The near real-time system for the capture of transactional data flowing over the hospital network was highly accurate, reliable, and exhibited acceptable latency. Other facilities can use this methodology to implement similar data capture for transactions from their drug ADCs. Reconciliation accuracy improved significantly as a result of implementation. Our approach may be of particular utility at facilities with limited pharmacy resources to audit anesthesia records for controlled substance administration and reconcile them against dispensing records.
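A minimal sketch of the reconciliation logic described above, assuming a hypothetical pipe-delimited message format and table layout (the authors' actual pipeline feeds a Perl script and SQL Server; sqlite3 stands in here so the example is self-contained):

    import sqlite3

    # In-memory table standing in for the SQL Server database described above.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE adc_txn (patient_id TEXT, drug TEXT, action TEXT, qty_mg REAL)")

    def capture(msg: str) -> None:
        """Parse a (hypothetical) pipe-delimited ADC transaction and store it."""
        patient_id, drug, action, qty = msg.split("|")
        con.execute("INSERT INTO adc_txn VALUES (?, ?, ?, ?)",
                    (patient_id, drug, action, float(qty)))

    # Vended and transferred drug add to the outstanding balance; waste and
    # returns subtract from it, matching the balance definition in the abstract.
    SIGN = {"vend": 1, "transfer": 1, "waste": -1, "return": -1}

    def adc_balance(patient_id: str, drug: str) -> float:
        rows = con.execute(
            "SELECT action, qty_mg FROM adc_txn WHERE patient_id=? AND drug=?",
            (patient_id, drug))
        return sum(SIGN[action] * qty for action, qty in rows)

    capture("P001|fentanyl|vend|100")
    capture("P001|fentanyl|waste|20")
    aims_total_mg = 80.0  # dose documented in the AIMS record
    print(adc_balance("P001", "fentanyl") - aims_total_mg)  # 0.0 -> reconciled

A zero difference against the AIMS total means the record is reconciled; a near-real-time report simply recomputes this difference every minute for each open case.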
A general temporal data model and the structured population event history register
Clark, Samuel J.
2010-01-01
At this time there are 37 demographic surveillance system sites active in sub-Saharan Africa, Asia and Central America, and this number is growing continuously. These sites and other longitudinal population and health research projects generate large quantities of complex temporal data in order to describe, explain and investigate the event histories of individuals and the populations they constitute. This article presents possible solutions to some of the key data management challenges associated with those data. The fundamental components of a temporal system are identified and both they and their relationships to each other are given simple, standardized definitions. Further, a metadata framework is proposed to endow this abstract generalization with specific meaning and to bind the definitions of the data to the data themselves. The result is a temporal data model that is generalized, conceptually tractable, and inherently contains a full description of the primary data it organizes. Individual databases utilizing this temporal data model can be customized to suit the needs of their operators without modifying the underlying design of the database or sacrificing the potential to transparently share compatible subsets of their data with other similar databases. A practical working relational database design based on this general temporal data model is presented and demonstrated. This work has arisen out of experience with demographic surveillance in the developing world, and although the challenges and their solutions are more general, the discussion is organized around applications in demographic surveillance. An appendix contains detailed examples and working prototype databases that implement the examples discussed in the text. PMID:20396614
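The core idea above, generic entities and timestamped events whose domain meaning is supplied by metadata rather than hard-coded tables, can be sketched compactly. The class names and event codes below are illustrative assumptions, not the article's actual (relational) schema:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class EventType:   # metadata binding domain meaning to generic events
        code: str
        description: str

    @dataclass
    class Entity:      # e.g. an individual under demographic surveillance
        entity_id: str
        attributes: dict = field(default_factory=dict)

    @dataclass
    class Event:       # a typed occurrence in an entity's history
        event_type: EventType
        entity_id: str
        when: date

    BIRTH = EventType("BTH", "birth of an individual")
    IN_MIGRATION = EventType("IMG", "in-migration into the study area")

    register = [
        Event(BIRTH, "IND-001", date(2001, 3, 14)),
        Event(IN_MIGRATION, "IND-002", date(2005, 7, 2)),
    ]

    # An event history is simply the time-ordered events of one entity.
    history = sorted((e for e in register if e.entity_id == "IND-001"),
                     key=lambda e: e.when)
    print([(e.event_type.code, e.when.isoformat()) for e in history])

Because new event types are data rather than schema, two sites can customize their registers without diverging structurally, which is the property the article emphasizes for transparently sharing compatible subsets of data.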
An integrated chronostratigraphic data system for the twenty-first century
Sikora, P.J.; Ogg, James G.; Gary, A.; Cervato, C.; Gradstein, Felix; Huber, B.T.; Marshall, C.; Stein, J.A.; Wardlaw, B.
2006-01-01
Research in stratigraphy is increasingly multidisciplinary and conducted by diverse research teams whose members can be widely separated. This developing distributed-research process, facilitated by the availability of the Internet, promises tremendous future benefits to researchers. However, its full potential is hindered by the absence of a development strategy for the necessary infrastructure. At a National Science Foundation workshop convened in November 2001, thirty quantitative stratigraphers and database specialists from both academia and industry met to discuss how best to integrate their respective chronostratigraphic databases. The main goal was to develop a strategy that would allow efficient distribution and integration of existing data relevant to the study of geologic time. Discussions concentrated on three major themes: database standards and compatibility, strategies and tools for information retrieval and analysis of all types of global and regional stratigraphic data, and future directions for database integration and centralization of currently distributed depositories. The result was a recommendation to establish an integrated chronostratigraphic database, to be called Chronos (http://www.chronos.org/), which would facilitate greater efficiency in stratigraphic studies. The Chronos system will both provide greater ease of data gathering and allow for multidisciplinary synergies, functions of fundamental importance in a variety of research, including time scale construction, paleoenvironmental analysis, paleoclimatology and paleoceanography. Beyond scientific research, Chronos will also provide educational and societal benefits by providing an accessible source of information of general interest (e.g., mass extinctions) and concern (e.g., climatic change). The National Science Foundation has currently funded a three-year program for implementing Chronos. © 2006 Geological Society of America. All rights reserved.
Vasileiou, Eleftheria; Sheikh, Aziz; Butler, Chris; von Wissmann, Beatrix; McMenamin, Jim; Ritchie, Lewis; Tian, Lilly; Simpson, Colin
2016-03-29
Influenza vaccination is administered annually as a preventive measure against influenza infection and influenza-related complications in high-risk individuals, such as those with asthma. However, the effectiveness of influenza vaccination against influenza-related complications in people with asthma is still not well established. We will search the following databases: MEDLINE (Ovid), EMBASE (Ovid), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Cochrane Central Register of Controlled Trials (CENTRAL), Scopus, Cochrane Database of Systematic Reviews (CDSR), Web of Science Core Collection, Science Direct, WHO Library Information System (WHOLIS), Global Health Library and Chinese databases (CNKI, Wanfang and ChongQing VIP) from Jan 1970 to Jan 2016 for observational and experimental studies on the effectiveness of influenza vaccine in people with asthma. The identification of studies will be complemented by searching reference lists and citations and by contacting influenza vaccine manufacturers to identify unpublished or ongoing studies. Two reviewers will extract data and appraise the quality of each study independently. Separate meta-analyses will be undertaken for observational and experimental evidence using fixed-effect or random-effects models, as appropriate. Formal ethical approval is not required, as primary data will not be collected. The review will be disseminated in peer-reviewed publications and conference presentations. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
De Feo, G; Ferrara, C
2017-08-01
This paper investigates the total and per capita environmental impacts of municipal wastewater treatment as a function of population equivalent (PE), using a Life Cycle Assessment (LCA) approach with the processes of the Ecoinvent 2.2 database available in the software tool SimaPro v.7.3. Besides the wastewater treatment plant (WWTP), the study also considers the sewerage system. The obtained results confirm that there is a 'scale factor' for wastewater collection and treatment even in environmental terms, in addition to the well-known scale factor in terms of management costs. Thus, the larger the treatment plant, the lower the per capita environmental impacts. However, the Ecoinvent 2.2 database does not contain information about treatment systems with a capacity lower than 30 PE. Nevertheless, there are many sparsely populated areas worldwide where it is not practical to build a single centralized WWTP. Therefore, it would be very important to conduct an LCA study comparing alternative on-site small-scale systems with treatment capacities of a few PE.
National information network and database system of hazardous waste management in China
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma Hongchang
1996-12-31
Industries in China generate large volumes of hazardous waste, which makes it essential for the nation to pay more attention to hazardous waste management. National laws and regulations, waste surveys, and manifest tracking and permission systems have been initiated. Some centralized hazardous waste disposal facilities are under construction. China's National Environmental Protection Agency (NEPA) has also obtained valuable information on hazardous waste management from developed countries. To effectively share this information with local environmental protection bureaus, NEPA developed a national information network and database system for hazardous waste management. This information network will have such functions as information collection, inquiry, and connection. The long-term objective is to establish and develop a national and local hazardous waste management information network. This network will significantly help decision makers and researchers because it will be easy to obtain information (e.g., experiences of developed countries in hazardous waste management) to enhance hazardous waste management in China. The information network consists of five parts: technology consulting, import-export management, regulation inquiry, waste survey, and literature inquiry.
Lin, Zhe; Lin, Yongsheng
2017-09-05
The aim of this study was to explore potential crucial genes associated with steroid-induced necrosis of the femoral head (SINFH) and to provide valid biological information for further investigation of SINFH. The gene expression profile GSE26316, generated from 3 SINFH rat samples and 3 normal rat samples, was downloaded from the Gene Expression Omnibus (GEO) database. The differentially expressed genes (DEGs) were identified using the LIMMA package. After functional enrichment analyses of the DEGs, protein-protein interaction (PPI) network and sub-PPI network analyses were conducted based on the STRING database and Cytoscape. In total, 59 upregulated DEGs and 156 downregulated DEGs were identified. The upregulated DEGs were mainly involved in functions related to immunity (e.g. Fcer1A and Il7R), and the downregulated DEGs were mainly enriched in the muscle system process (e.g. Tnni2, Mylpf and Myl1). The PPI network of DEGs consisted of 123 nodes and 300 interactions. Tnni2, Mylpf, and Myl1 were the top 3 outstanding genes based on both subgraph centrality and degree centrality evaluation. These three genes interacted with each other in the network. Furthermore, the significant network module was composed of 22 downregulated genes (e.g. Tnni2, Mylpf and Myl1). These genes were mainly enriched in functions like the muscle system process. The DEGs related to the regulation of the immune system process (e.g. Fcer1A and Il7R) and the DEGs correlated with the muscle system process (e.g. Tnni2, Mylpf and Myl1) may be closely associated with the progression of SINFH, although this still needs to be confirmed experimentally. Copyright © 2017 Elsevier B.V. All rights reserved.
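For readers unfamiliar with the two ranking measures used above, a small illustration with networkx follows; the toy edge list is invented and is not the GSE26316 network:

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("Tnni2", "Mylpf"), ("Tnni2", "Myl1"), ("Mylpf", "Myl1"),  # mutually connected trio
        ("Tnni2", "Actn3"), ("Mylpf", "Tnnt3"), ("Myl1", "Ckm"),
    ])

    degree = nx.degree_centrality(G)      # fraction of nodes each gene touches
    subgraph = nx.subgraph_centrality(G)  # weighted count of closed walks through each node

    for gene in sorted(G, key=lambda g: (-degree[g], -subgraph[g]))[:3]:
        print(f"{gene}: degree={degree[gene]:.2f}, subgraph={subgraph[gene]:.2f}")

Genes that are both high-degree and embedded in many closed walks, like the trio in this toy graph, are exactly the kind of mutually interacting hubs the study reports.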
NASA Astrophysics Data System (ADS)
Lee, Jasper C.; Ma, Kevin C.; Liu, Brent J.
2008-03-01
A Data Grid for medical images has been developed at the Image Processing and Informatics Laboratory, USC to provide distribution and fault-tolerant storage of medical imaging studies across Internet2 and public domain. Although back-up policies and grid certificates guarantee privacy and authenticity of grid-access-points, there still lacks a method to guarantee the sensitive DICOM images have not been altered or corrupted during transmission across a public domain. This paper takes steps toward achieving full image transfer security within the Data Grid by utilizing DICOM image authentication and a HIPAA-compliant auditing system. The 3-D lossless digital signature embedding procedure involves a private 64 byte signature that is embedded into each original DICOM image volume, whereby on the receiving end the signature can be extracted and verified following the DICOM transmission. This digital signature method has also been developed at the IPILab. The HIPAA-Compliant Auditing System (H-CAS) is required to monitor embedding and verification events, and allows monitoring of other grid activity as well. The H-CAS system federates the logs of transmission and authentication events at each grid-access-point and stores them in a HIPAA-compliant database. The auditing toolkit is installed at the local grid-access-point and utilizes Syslog [1], a client-server standard for log messaging over an IP network, to send messages to the H-CAS centralized database. By integrating digital image signatures and centralized logging capabilities, DICOM image integrity within the Medical Imaging and Informatics Data Grid can be monitored and guaranteed without loss to any image quality.
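The two auditable operations the paper pairs, signing image data and logging embed/verify events to a central collector, can be sketched as follows. The keyed-hash signature, syslog address, and log format are placeholder assumptions; the paper's actual method is a 3-D lossless embedding of a private 64-byte signature into the DICOM volume, which is not reproduced here:

    import hashlib
    import hmac
    import logging
    import logging.handlers

    SHARED_KEY = b"grid-access-point-key"  # placeholder, not a real credential

    def sign_pixels(pixel_data: bytes) -> str:
        """Compute a keyed digest over the image volume's pixel data."""
        return hmac.new(SHARED_KEY, pixel_data, hashlib.sha256).hexdigest()

    def verify(pixel_data: bytes, signature: str) -> bool:
        return hmac.compare_digest(sign_pixels(pixel_data), signature)

    # Syslog-style auditing: each event goes to an H-CAS-like central collector
    # (the address below is an assumption; syslog's conventional UDP port is 514).
    audit = logging.getLogger("hcas")
    audit.addHandler(logging.handlers.SysLogHandler(address=("localhost", 514)))
    audit.setLevel(logging.INFO)

    volume = b"\x00\x01\x02\x03"  # stand-in for DICOM pixel data
    sig = sign_pixels(volume)
    audit.info("EMBED study=1.2.840.x sig=%s", sig[:16])
    assert verify(volume, sig)    # succeeds only if the data are unaltered
    audit.info("VERIFY study=1.2.840.x result=ok")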
Analysis of the Appropriateness of the Use of Peltier Cells as Energy Sources.
Hájovský, Radovan; Pieš, Martin; Richtár, Lukáš
2016-05-25
The article describes the possibilities of using Peltier cells as an energy source to power telemetry units, which are used in large-scale monitoring systems as central units that collect data from sensors, process them, and send them to the database server. The article describes the various experiments that were carried out, their progress, and their results. Based on the evaluated experiments, the paper also discusses the possibility of using various cell types depending on the temperature difference between the cold and hot sides.
Geologic map of the west-central Buffalo National River region, northern Arkansas
Hudson, Mark R.; Turner, Kenzie J.
2014-01-01
This report provides a geologic map database of the map area that improves understanding of the regional geologic framework and its influence on the regional groundwater flow system. Furthermore, additional edits were made to the Ponca and Jasper quadrangles in the following ways: new control points on important contacts were obtained using modern GPS; recent higher resolution elevation data allowed further control on placement of contacts; some new contacts were added, in particular the contact separating the upper and lower Everton Formation.
Zeidán-Chuliá, Fares; Gürsoy, Mervi; Neves de Oliveira, Ben-Hur; Özdemir, Vural; Könönen, Eija; Gürsoy, Ulvi K
2015-01-01
Periodontitis, a formidable global health burden, is a common chronic disease that destroys tooth-supporting tissues. Biomarkers of the early phase of this progressive disease are of utmost importance for global health. In this context, saliva represents a non-invasive biosample. By using systems biology tools, we aimed to (1) identify an integrated interactome between matrix metalloproteinase (MMP)-REDOX/nitric oxide (NO) and apoptosis upstream pathways of periodontal inflammation, and (2) characterize the attendant topological network properties to uncover putative biomarkers to be tested in saliva from patients with periodontitis. Hence, we first generated a protein-protein network model of interactions ("BIOMARK" interactome) by using the STRING 10 database, a search tool for the retrieval of interacting genes/proteins, with "Experiments" and "Databases" as input options and a confidence score of 0.400. Second, we determined the centrality values (closeness, stress, degree or connectivity, and betweenness) for the "BIOMARK" members by using the Cytoscape software. We found Ubiquitin C (UBC), Jun proto-oncogene (JUN), and matrix metalloproteinase-14 (MMP14) to be the most central hub and non-hub bottlenecks among the 211 genes/proteins of the whole interactome. We conclude that UBC, JUN, and MMP14 are likely an optimal candidate group of host-derived biomarkers, in combination with oral pathogenic bacteria-derived proteins, for detecting periodontitis at its early phase by using salivary samples from patients. These findings therefore have broader relevance for systems medicine in global health as well.
MaizeGDB: New tools and resource
USDA-ARS?s Scientific Manuscript database
MaizeGDB, the USDA-ARS genetics and genomics database, is a highly curated, community-oriented informatics service to researchers focused on the crop plant and model organism Zea mays. MaizeGDB facilitates maize research by curating, integrating, and maintaining a database that serves as the central...
Biological Databases for Behavioral Neurobiology
Baker, Erich J.
2014-01-01
Databases are, at their core, abstractions of data and their intentionally derived relationships. They serve as a central organizing metaphor and repository, supporting or augmenting nearly all bioinformatics. Behavioral domains provide a unique stage for contemporary databases, as research in this area spans diverse data types, locations, and data relationships. This chapter provides foundational information on the diversity and prevalence of databases and on how data structures support the various needs of behavioral neuroscience analysis and interpretation. The focus is on the classes of databases, data curation, and advanced applications in bioinformatics, using examples largely drawn from research efforts in behavioral neuroscience. PMID:23195119
Alavi, Seyed Mohammad; Alavi, Leila
2016-01-01
Background: Human toxoplasmosis is an important zoonotic infection worldwide, caused by the intracellular parasite Toxoplasma gondii (T. gondii). The aim of this study was to briefly review the general aspects of toxoplasma infection in the Iranian health system network. Methods: We searched English and Persian databases, including Science Direct, PubMed, Scopus, Google Scholar, Magiran, Iran Medex, Iran Doc and the Scientific Information Database (SID), for published toxoplasmosis-related articles. Results: Out of 1267 articles retrieved from the database search, 40 articles matched our research objectives and were selected for the study. It is estimated that at least a third of the world's human population is infected with T. gondii, making it one of the most common parasitic infections in the world. Maternal infection during pregnancy may cause dangerous outcomes for the fetus, or even intrauterine death. Reactivation of a previous infection in immunocompromised patients, such as those with drug-induced immunosuppression, AIDS or organ transplantation, can cause life-threatening central nervous system infection. Ocular toxoplasmosis is one of the most important causes of blindness, especially in individuals with a deficient immune system. Conclusion: Given the increasing burden of toxoplasmosis on human health, the findings of this study highlight appropriate preventive measures, diagnosis, and management of this disease. PMID:27999640
Kim, Su Ran; Lee, Hye Won; Jun, Ji Hee; Ko, Byoung-Seob
2017-03-01
Gan Mai Da Zao (GMDZ) decoction is widely used for the treatment of various diseases of the internal organs and the central nervous system. The aim of this study is to investigate the effects of GMDZ decoction on neuropsychiatric disorders in animal models. We searched seven databases for randomized animal studies published until April 2015: PubMed, four Korean databases (DBpia, Oriental Medicine Advanced Searching Integrated System, Korean Studies Information Service System, and Research Information Sharing Service), and one Chinese database (China National Knowledge Infrastructure). Randomized animal studies were included if the effects of GMDZ decoction were tested on neuropsychiatric disorders. All articles were read in full and data were extracted against predefined criteria by two independent reviewers. From a total of 258 hits, six randomized controlled animal studies were included. Five studies used a Sprague Dawley rat model for acute psychological stress, post-traumatic stress disorders, and unpredictable mild stress depression, whereas one study used a Kunming mouse model for prenatal depression. The results of the studies showed that GMDZ decoction improved the related outcomes. Regardless of the dose and concentration used, GMDZ decoction significantly improved neuropsychiatric disease-related outcomes in animal models. However, additional systematic and extensive studies should be conducted to establish a firm conclusion.
DFACS - DATABASE, FORMS AND APPLICATIONS FOR CABLING AND SYSTEMS, VERSION 3.30
NASA Technical Reports Server (NTRS)
Billitti, J. W.
1994-01-01
DFACS is an interactive multi-user computer-aided engineering tool for system level electrical integration and cabling engineering. The purpose of the program is to provide the engineering community with a centralized database for entering and accessing system functional definitions, subsystem and instrument-end circuit pinout details, and harnessing data. The primary objective is to provide an instantaneous single point of information interchange, thus avoiding error-prone, time-consuming, and costly multiple-path data shuttling. The DFACS program, which is centered around a single database, has built-in menus that provide easy data input and access for all involved system, subsystem, and cabling personnel. The DFACS program allows parallel design of circuit data sheets and harness drawings. It also recombines raw information to automatically generate various project documents and drawings including the Circuit Data Sheet Index, the Electrical Interface Circuits List, Assembly and Equipment Lists, Electrical Ground Tree, Connector List, Cable Tree, Cabling Electrical Interface and Harness Drawings, Circuit Data Sheets, and ECR List of Affected Interfaces/Assemblies. Real time automatic production of harness drawings and circuit data sheets from the same data reservoir ensures instant system and cabling engineering design harmony. DFACS also contains automatic wire routing procedures and extensive error checking routines designed to minimize the possibility of engineering error. DFACS is designed to run on DEC VAX series computers under VMS using Version 6.3/01 of INGRES QUEL/OSL, a relational database system which is available through Relational Technology, Inc. The program is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard media) or a TK50 tape cartridge. DFACS was developed in 1987 and last updated in 1990. DFACS is a copyrighted work with all copyright vested in NASA. DEC, VAX and VMS are trademarks of Digital Equipment Corporation. INGRES QUEL/OSL is a trademark of Relational Technology, Inc.
Electronic Out-fall Inspection Application - 12007
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weymouth, A Kent III; Pham, Minh; Messick, Chuck
2012-07-01
In early 2009 an exciting opportunity was presented to the Geographic Information Systems (GIS) team at the Savannah River Site (SRS). The SRS maintenance group was directed to maintain all out-falls on Site, increasing their workload from 75 to 183 out-falls with no additional resources. The existing out-fall inspection system consisted of inspections performed manually and documented via paper trail. The inspections were closed out upon completion of activities and placed in file cabinets with no central location for tracking/trending maintenance activities. A platform for meeting new improvements required for documentation by the Department of Health and Environmental Control (DHEC) out-fall permits was needed to replace this current system that had been in place since the 1980's. This was accomplished by building a geographically aware electronic application that improved reliability of site out-fall maintenance and ensured consistent standards were maintained for environmental excellence and worker efficiency. Inspections are now performed via tablet and uploaded to a central point. Work orders are completed and closed either in the field using tablets (mobile application) or in their offices (via web portal) using PCs. And finally completed work orders are now stored in a central database allowing trending of maintenance activities. (authors)
Silver, James; Fisher, William H; Silver, Emily
2015-06-01
A history of commitment to a mental health facility disqualifies applicants for gun licenses. Identifying such a history has become increasingly complex as the locus of confinement has become more diversified and privatized. In Massachusetts, prior to 2014, the databases used to identify individuals who would be disqualified on such grounds had not kept pace with the evolution of the state's mental health systems. A survey of Massachusetts police chiefs, who, as in many jurisdictions, are charged with certifying qualification, indicates that some have broadened the scope of their background checks to include the experience of their officers with certain applicants. The survey identifying these patterns, conducted in 2014, preceded by one month significant legislative reforms that mandate the reporting into a centralized database of commitments to all types of mental health and substance use facilities, thus allowing identification of all commitments occurring in the state. The anticipated utilization of a different database mechanism, which has parallels in several other states, potentially streamlines the background check process but raises numerous concerns that need to be addressed in developing and using such databases. Copyright © 2015 John Wiley & Sons, Ltd.
Terminological aspects of data elements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strehlow, R.A.; Kenworthey, W.H. Jr.; Schuldt, R.E.
1991-01-01
The creation and display of data comprise a process that involves a sequence of steps requiring both semantic and systems analysis. An essential early step in this process is the choice, definition, and naming of data element concepts, followed by the specification of other needed data element concept attributes. The attributes and values of a data element concept remain associated with it from its birth as a concept to a generic data element that serves as a template for final application. Terminology is, therefore, centrally important to the entire data creation process. Smooth mapping from natural language to a database is a critical aspect of database design, and consequently, it requires terminology standardization from the outset of database work. In this paper the semantic aspects of data elements are analyzed and discussed. Seven kinds of data element concept information are considered, and those that require terminological development and standardization are identified. The four terminological components of a data element are the hierarchical type of a concept, functional dependencies, schemata showing conceptual structures, and definition statements. These constitute the conventional role of terminology in database design. 12 refs., 8 figs., 1 tab.
Development of a Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.;
2015-01-01
The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA), and two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code (DC) calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Gridded and station-based calculations tended to differ most at low latitudes for strictly MERRA-based calculations. Strong biases could be seen in either direction: the MERRA DC over the Mato Grosso in Brazil reached unrealistically high values exceeding 1500 during the dry season, but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models.
Kafkas, Şenay; Kim, Jee-Hyub; Pi, Xingjun; McEntyre, Johanna R
2015-01-01
In this study, we present an analysis of data citation practices in full-text research articles and their corresponding supplementary data files, made available in the Open Access set of articles from Europe PubMed Central. Our aim is to investigate whether supplementary data files should be considered as a source of information for integrating the literature with biomolecular databases. Using text-mining methods to identify and extract a variety of core biological database accession numbers, we found that the supplemental data files contain many more database citations than the body of the article, and that those citations often take the form of a relatively small number of articles citing large collections of accession numbers in text-based files. Moreover, citation of value-added databases derived from submission databases (such as Pfam, UniProt or Ensembl) is common, demonstrating the reuse of these resources as datasets in themselves. All the database accession numbers extracted from the supplementary data are publicly accessible from http://dx.doi.org/10.5281/zenodo.11771. Our study suggests that supplementary data should be considered when linking articles with data, in curation pipelines, and in information retrieval tasks in order to make full use of the entire research article. These observations highlight the need to improve the management of supplemental data in general, in order to make this information more discoverable and useful.
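As a hedged illustration of this kind of accession-number mining, the regular expressions below approximate a few well-known identifier formats. They are simplified stand-ins, not the patterns the authors used, and real pipelines must disambiguate overlapping matches using surrounding context:

    import re

    PATTERNS = {
        "GenBank": r"\b[A-Z]{1,2}\d{5,6}\b",           # e.g. U12345, AB123456
        "UniProt": r"\b[OPQ][0-9][A-Z0-9]{3}[0-9]\b",  # e.g. P12345
        "Pfam":    r"\bPF\d{5}\b",                     # e.g. PF00042
        "Ensembl": r"\bENS[A-Z]*[GTP]\d{11}\b",        # e.g. ENSG00000139618
        "GEO":     r"\bGSE\d+\b",                      # e.g. GSE26316
    }

    def extract_accessions(text: str) -> dict:
        # Note: these simple patterns overlap (P69905 matches both the GenBank
        # and UniProt forms), which is why context matters in practice.
        return {db: sorted(set(re.findall(pat, text)))
                for db, pat in PATTERNS.items()}

    supplement = ("Domains were mapped to Pfam PF00042; expression data are in "
                  "GEO GSE26316 and the protein is UniProt P69905 / ENSG00000139618.")
    print(extract_accessions(supplement))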
Polvi, Anne; Linturi, Henna; Varilo, Teppo; Anttonen, Anna-Kaisa; Byrne, Myles; Fokkema, Ivo F A C; Almusa, Henrikki; Metzidis, Anthony; Avela, Kristiina; Aula, Pertti; Kestilä, Marjo; Muilu, Juha
2013-11-01
The Finnish Disease Heritage Database (FinDis) (http://findis.org) was originally published in 2004 as a centralized information resource for rare monogenic diseases enriched in the Finnish population. The FinDis database originally contained 405 causative variants for 30 diseases. At the time, the FinDis database was a comprehensive collection of data, but since 2004, a large amount of new information has emerged, making the need to update the database evident. We collected information and updated the database to contain genes and causative variants for 35 diseases, including six more genes and more than 1,400 additional disease-causing variants. Information on causative variants for each gene is collected under the LOVD 3.0 platform, enabling easy updating. The FinDis portal provides a centralized resource and user interface linking information on each disease and gene with variant data in the LOVD 3.0 platform. The software written to achieve this has been open-sourced and made available on GitHub (http://github.com/findis-db), allowing biomedical institutions in other countries to present their national data in a similar way, and to both contribute to, and benefit from, standardized variation data. The updated FinDis portal provides a unique resource to assist patient diagnosis, research, and the development of new cures. © 2013 WILEY PERIODICALS, INC.
Data Sharing in Astrobiology: the Astrobiology Habitable Environments Database (AHED)
NASA Astrophysics Data System (ADS)
Bristow, T.; Lafuente Valverde, B.; Keller, R.; Stone, N.; Downs, R. T.; Blake, D. F.; Fonda, M.; Pires, A.
2016-12-01
Astrobiology is a multidisciplinary area of scientific research focused on studying the origins of life on Earth and the conditions under which life might have emerged elsewhere in the universe. The understanding of complex questions in astrobiology requires integration and analysis of data spanning a range of disciplines including biology, chemistry, geology, astronomy and planetary science. However, the lack of a centralized repository makes it difficult for astrobiology teams to share data and benefit from resultant synergies. Moreover, in recent years, federal agencies have begun requiring that the results of any federally funded scientific research be available and useful to the public and the science community. Astrobiology, like any other scientific discipline, needs to respond to these mandates. The Astrobiology Habitable Environments Database (AHED) is a central, high quality, long-term searchable repository designed to help the community by promoting the integration and sharing of all the data generated by these diverse disciplines. AHED provides public and open access to astrobiology-related research data through a user-managed web portal implemented using the open-source software The Open Data Repository's (ODR) Data Publisher [1]. ODR-DP provides a user-friendly interface that research teams or individual scientists can use to design, populate and manage their own databases or laboratory notebooks according to the characteristics of their data. AHED is then a collection of databases housed in the ODR framework that store information about samples, along with associated measurements, analyses, and contextual information about field sites where samples were collected, the instruments or equipment used for analysis, and the people and institutions involved in their collection. Advanced graphics are implemented together with advanced online tools for data analysis (e.g., R, MATLAB, Project Jupyter, http://jupyter.org). A permissions system will be put in place so that as data are being actively collected and interpreted, they will remain proprietary. A citation system will allow research data to be used and appropriately referenced by other researchers after the data are made public. This project is supported by SERA and NASA NNX11AP82A, MSL. [1] Stone et al. (2016) AGU, submitted.
Smart home technologies for health and social care support.
Martin, Suzanne; Kelly, Greg; Kernohan, W George; McCreight, Bernadette; Nugent, Christopher
2008-10-08
The integration of smart home technology to support health and social care is acquiring an increasing global significance. Provision is framed within the context of a rapidly changing population profile, which is impacting on the number of people requiring health and social care, workforce availability and the funding of healthcare systems. To explore the effectiveness of smart home technologies as an intervention for people with physical disability, cognitive impairment or learning disability, who are living at home, and to consider the impact on the individual's health status and on the financial resources of health care. We searched the following databases for primary studies: (a) the Cochrane Effective Practice and Organisation of Care (EPOC) Group Register, (b) the Cochrane Central Register of Controlled Trials (CENTRAL), (The Cochrane Library, issue 1, 2007), and (c) bibliographic databases, including MEDLINE (1966 to March 2007), EMBASE (1980 to March 2007) and CINAHL (1982 to March 2007). We also searched the Database of Abstracts of Reviews of Effectiveness (DARE). We searched the electronic databases using a strategy developed by the EPOC Trials Search Co-ordinator. We included randomised controlled trials (RCTs), quasi-experimental studies, controlled before and after studies (CBAs) and interrupted time series analyses (ITS). Participants included adults over the age of 18, living in their home in a community setting. Participants with a physical disability, dementia or a learning disability were included. The included interventions were social alarms, electronic assistive devices, telecare social alert platforms, environmental control systems, automated home environments and 'ubiquitous homes'. Outcome measures included any objective measure that records an impact on a participant's quality of life, healthcare professional workload, economic outcomes, costs to healthcare provider or costs to participant. We included measures of service satisfaction, device satisfaction and healthcare professional attitudes or satisfaction. One review author completed the search strategy with the support of a life and health sciences librarian. Two review authors independently screened titles and abstracts of results. No studies were identified which met the inclusion criteria. This review highlights the current lack of empirical evidence to support or refute the use of smart home technologies within health and social care, which is significant for practitioners and healthcare consumers.
Escobar-Rodriguez, Tomas; Bartual-Sopena, Lourdes
Enterprise resource planning (ERP) systems enable central and integrative control over all processes throughout an organisation by ensuring one data entry point and the use of a common database. This paper analyses the attitude of healthcare personnel towards the use of an ERP system in a Spanish public hospital, identifying influencing factors. This research is based on a regression analysis of latent variables using the optimisation technique of partial least squares. We propose a research model including possible relationships among different constructs using the technology acceptance model. Our results show that the personal characteristics of potential users are key factors in explaining attitude towards using ERP systems.
Tabak, Ying P.; Johannes, Richard S.; Sun, Xiaowu; Crosby, Cynthia T.
2016-01-01
The Centers for Medicare and Medicaid Services (CMS) Hospital Compare central line-associated bloodstream infection (CLABSI) data and private databases containing new-generation intravenous needleless connector (study NC) use at the hospital level were linked. The relative risk (RR) of CLABSI associated with the study NCs was estimated, adjusting for hospital characteristics. Among 3074 eligible hospitals in the 2013 CMS database, 758 (25%) hospitals used the study NCs. The study NC hospitals had a lower unadjusted CLABSI rate (1.03 vs 1.13 CLABSIs per 1000 central line days, P < .0001) compared with comparator hospitals. The adjusted RR for CLABSI was 0.94 (95% confidence interval: 0.86, 1.02; P = .11). PMID:27598072
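One standard way to obtain an adjusted relative risk of this kind is Poisson regression of infection counts with central-line days as an exposure offset. The data frame, covariate, and effect size below are invented placeholders, not the CMS data:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "study_nc": rng.integers(0, 2, n),          # 1 = hospital uses the study NCs
        "beds": rng.integers(50, 900, n),           # hospital-size covariate (assumed)
        "line_days": rng.integers(1000, 50000, n),  # central line days
    })
    rate = 0.0011 * np.exp(-0.06 * df.study_nc)     # assumed true CLABSI rates
    df["clabsi"] = rng.poisson(rate * df.line_days)

    # Poisson GLM with log(line days) as offset, adjusting for hospital size.
    model = smf.glm("clabsi ~ study_nc + beds", data=df,
                    family=sm.families.Poisson(),
                    offset=np.log(df["line_days"])).fit()
    rr = np.exp(model.params["study_nc"])           # adjusted relative risk
    lo, hi = np.exp(model.conf_int().loc["study_nc"])
    print(f"RR = {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")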
Blending Education and Polymer Science: Semiautomated Creation of a Thermodynamic Property Database
Tchoua, Roselyne B; Qin, Jian; Audus, Debra J; Chard, Kyle; Foster, Ian T; de Pablo, Juan
2016-09-13
Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The vast majority of these databases have been generated manually, through decades of labor-intensive harvesting of information from the literature; yet, while there are many examples of commonly used databases, a significant number of important properties remain locked within the tables, figures, and text of publications. The question addressed in our work is whether, and to what extent, the process of data collection can be automated. Students of the physical sciences and engineering are often confronted with the challenge of finding and applying property data from the literature, and a central aspect of their education is to develop the critical skills needed to identify such data and discern their meaning or validity. To address shortcomings associated with automated information extraction, while simultaneously preparing the next generation of scientists for their future endeavors, we developed a novel course-based approach in which students develop skills in polymer chemistry and physics and apply their knowledge by assisting with the semi-automated creation of a thermodynamic property database.
Characterising droughts in Central America with uncertain hydro-meteorological data
NASA Astrophysics Data System (ADS)
Quesada Montano, B.; Westerberg, I.; Wetterhall, F.; Hidalgo, H. G.; Halldin, S.
2015-12-01
Drought studies are scarce in Central America, a region frequently affected by droughts that cause significant socio-economic and environmental problems. Drought characterisation is important for water management and planning and can be done with the help of drought indices. Many indices have been developed in recent decades, but their ability to suitably characterise droughts depends on the region of application. In Central America, comprehensive and high-quality observational networks of meteorological and hydrological data are not available. This limits the choice of drought indices and highlights the need to evaluate the quality of the data used in their calculation. This paper aimed to find which combination(s) of drought index and meteorological database are most suitable for characterising droughts in Central America. The drought indices evaluated were the standardised precipitation index (SPI), deciles (DI), the standardised precipitation evapotranspiration index (SPEI) and the effective drought index (EDI). These were calculated using precipitation data from the Climate Hazards Group Infra-Red Precipitation with station (CHIRPS), CRN073, Climate Research Unit (CRU), ERA-Interim and station databases, and temperature data from the CRU database. All indices were calculated at 1-, 3-, 6-, 9- and 12-month accumulation times. As a first step, the large-scale meteorological precipitation datasets were compared to gain an overview of the level of agreement between them and to find possible quality problems. Then, the performance of all combinations of drought indices and meteorological datasets was evaluated against independent river discharge data, in the form of the standardised streamflow index (SSI). Results revealed large disagreement between the precipitation datasets; we found the selection of database to be more important than the selection of drought index. The best combinations of meteorological drought index and database were obtained using the SPI and DI calculated with CHIRPS and station data.
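For orientation, the SPI named above reduces to three steps: accumulate precipitation over the chosen time scale, fit a distribution (commonly gamma) to the accumulations, and map each value through the fitted CDF into a standard-normal deviate. A minimal sketch with synthetic data follows; operational SPI implementations additionally fit each calendar month separately and handle zero-precipitation cases:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    monthly_precip = rng.gamma(shape=2.0, scale=50.0, size=360)  # 30 synthetic years, mm

    def spi(series: np.ndarray, scale: int = 3) -> np.ndarray:
        """SPI at a given accumulation time (in months)."""
        # 1) accumulate precipitation over `scale`-month windows
        acc = np.convolve(series, np.ones(scale), mode="valid")
        # 2) fit a gamma distribution to the accumulations
        a, loc, b = stats.gamma.fit(acc, floc=0)
        # 3) map each accumulation through the fitted CDF ...
        cdf = stats.gamma.cdf(acc, a, loc=loc, scale=b)
        # 4) ... and into an equivalent standard-normal deviate
        return stats.norm.ppf(cdf)

    index = spi(monthly_precip, scale=3)
    print(f"driest 3-month SPI: {index.min():.2f}")  # values below -1 indicate drought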
EPA Facility Registry Service (FRS): ICIS
This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Integrated Compliance Information System (ICIS). When complete, ICIS will provide a database that will contain integrated enforcement and compliance information across most of EPA's programs. The vision for ICIS is to replace EPA's independent databases that contain enforcement data with a single repository for that information. Currently, ICIS contains all Federal Administrative and Judicial enforcement actions and a subset of the Permit Compliance System (PCS), which supports the National Pollutant Discharge Elimination System (NPDES). ICIS exchanges non-sensitive enforcement/compliance activities, non-sensitive formal enforcement actions and NPDES information with FRS. This web feature service contains the enforcement/compliance activities and formal enforcement action related facilities; the NPDES facilities are contained in the PCS_NPDES web feature service. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities.
D’Haese, Pierre-François; Pallavaram, Srivatsan; Li, Rui; Remple, Michael S.; Kao, Chris; Neimat, Joseph S.; Konrad, Peter E.; Dawant, Benoit M.
2010-01-01
A number of methods have been developed to assist surgeons at various stages of deep brain stimulation (DBS) therapy. These include construction of anatomical atlases, functional databases, and electrophysiological atlases and maps. But a complete system that can be integrated into the clinical workflow has not been developed. In this paper we present a system designed to assist physicians in pre-operative target planning, intra-operative target refinement and implantation, and post-operative DBS lead programming. The purpose of this system is to centralize the data acquired at the various stages of the procedure, reduce the amount of time needed at each stage of the therapy, and maximize the efficiency of the entire process. The system consists of a central repository (CranialVault), a suite of software modules called CRAVE (CRAnialVault Explorer) that permit data entry and data visualization at each stage of the therapy, and a series of algorithms that permit the automatic processing of the data. The central repository contains image data for more than 400 patients with the related pre-operative plans and positions of the final implants, and about 10,550 electrophysiological data points (micro-electrode recordings or responses to stimulations) recorded from 222 of these patients. The system has reached the stage of a clinical prototype that is being evaluated clinically at our institution. A preliminary quantitative validation of the planning component of the system, performed on 80 patients who underwent the procedure between January 2009 and December 2009, shows that the system provides both timely and valuable information. PMID:20732828
ODIN. Online Database Information Network: ODIN Policy & Procedure Manual.
ERIC Educational Resources Information Center
Townley, Charles T.; And Others
Policies and procedures are outlined for the Online Database Information Network (ODIN), a cooperative of libraries in south-central Pennsylvania, which was organized to improve library services through technology. The first section covers organization and goals, members, and responsibilities of the administrative council and libraries. Patrons…
Catalá-López, Ferrán; Hutton, Brian; Driver, Jane A; Page, Matthew J; Ridao, Manuel; Valderas, José M; Alonso-Arroyo, Adolfo; Forés-Martos, Jaume; Martínez, Salvador; Gènova-Maleras, Ricard; Macías-Saint-Gerons, Diego; Crespo-Facorro, Benedicto; Vieta, Eduard; Valencia, Alfonso; Tabarés-Seisdedos, Rafael
2017-04-04
The objective of this study will be to synthesize the epidemiological evidence and evaluate the validity of the associations between central nervous system disorders and the risk of developing or dying from cancer. We will perform an umbrella review of systematic reviews and conduct updated meta-analyses of observational studies (cohort and case-control) investigating the association between central nervous system disorders and the risk of developing or dying from any cancer or specific types of cancer. Searches involving PubMed/MEDLINE, EMBASE, SCOPUS and Web of Science will be used to identify systematic reviews and meta-analyses of observational studies. In addition, online databases will be checked for observational studies published outside the time frames of previous reviews. Eligible central nervous system disorders will be Alzheimer's disease, anorexia nervosa, amyotrophic lateral sclerosis, autism spectrum disorders, bipolar disorder, depression, Down's syndrome, epilepsy, Huntington's disease, multiple sclerosis, Parkinson's disease and schizophrenia. The primary outcomes will be cancer incidence and cancer mortality in association with a central nervous system disorder. Secondary outcome measures will be site-specific cancer incidence and mortality, respectively. Two reviewers will independently screen references identified by the literature search, as well as potentially relevant full-text articles. Data will be abstracted, and study quality/risk of bias will be appraised by two reviewers independently. Conflicts at all levels of screening and abstraction will be resolved through discussion. Random-effects meta-analyses of primary observational studies will be conducted where appropriate. Parameters for exploring statistical heterogeneity are pre-specified. The World Cancer Research Fund (WCRF)/American Institute for Cancer Research (AICR) criteria and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach will be used for determining the quality of evidence for cancer outcomes. Our study will establish the extent of the epidemiological evidence underlying the associations between central nervous system disorders and cancer and will provide a rigorous and updated synthesis of a range of important site-specific cancer outcomes. PROSPERO CRD42016052762.
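The random-effects pooling mentioned in the protocol is commonly done with the DerSimonian-Laird estimator; the protocol does not name a specific estimator, so the compact sketch below, on invented log relative risks, is an assumption for illustration:

    import numpy as np

    def dersimonian_laird(yi: np.ndarray, vi: np.ndarray):
        """Pool log effect sizes yi with within-study variances vi."""
        wi = 1.0 / vi                               # fixed-effect weights
        y_fixed = np.sum(wi * yi) / np.sum(wi)
        q = np.sum(wi * (yi - y_fixed) ** 2)        # Cochran's Q statistic
        c = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
        tau2 = max(0.0, (q - (len(yi) - 1)) / c)    # between-study variance
        wi_star = 1.0 / (vi + tau2)                 # random-effects weights
        y_re = np.sum(wi_star * yi) / np.sum(wi_star)
        se = np.sqrt(1.0 / np.sum(wi_star))
        return y_re, se, tau2

    log_rr = np.array([0.18, 0.35, -0.05, 0.22])    # hypothetical studies
    var = np.array([0.02, 0.05, 0.03, 0.04])
    y, se, tau2 = dersimonian_laird(log_rr, var)
    print(f"pooled RR = {np.exp(y):.2f} "
          f"(95% CI {np.exp(y - 1.96 * se):.2f}, {np.exp(y + 1.96 * se):.2f}), tau2 = {tau2:.3f}")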
Chukmaitov, Askar; Harless, David W; Bazzoli, Gloria J; Carretta, Henry J; Siangphoe, Umaporn
2015-01-01
Implementation of accountable care organizations (ACOs) is currently underway, but there is limited empirical evidence on the merits of the ACO model. The aim was to study the associations between delivery system characteristics and ACO competencies, including centralization strategies to manage organizations, hospital integration with physicians and outpatient facilities, health information technology, infrastructure to monitor community health and report quality, and risk-adjusted 30-day all-cause mortality and case-mix-adjusted inpatient costs for the Medicare population. Panel data (2006-2009) were assembled from Florida and multiple sources: inpatient hospital discharge, vital statistics, the American Hospital Association, the Healthcare Information and Management Systems Society, and other databases. We applied a panel study design, controlling for hospital and market characteristics. Hospitals that were in centralized health systems or became more centralized over the study period had significantly larger reductions in mortality compared with hospitals that remained freestanding. Surprisingly, tightly integrated hospital-physician arrangements were associated with increased mortality; as such, hospitals may wish to proceed cautiously when developing specific types of alignment with local physician organizations. We observed no statistically significant differences in the growth rate of costs across hospitals in any of the health systems studied relative to freestanding hospitals. Although we observed quality improvement in some organizational types, these outcome improvements were not coupled with the additional desired objective of lower cost growth. This implies that additional changes not present during our study period, potentially changes in provider payment approaches, are essential for achieving the ACO objectives of higher quality of care at lower costs. Provider organizations implementing ACOs should consider centralizing service delivery as a viable strategy to improve quality of care, although the strategy did not result in lower cost growth.
Contraception supply chain challenges: a review of evidence from low- and middle-income countries.
Mukasa, Bakali; Ali, Moazzam; Farron, Madeline; Van de Weerdt, Renee
2017-10-01
To identify and assess factors determining the functioning of supply chain systems for modern contraception in low- and middle-income countries (LMICs), and to identify challenges contributing to contraception stockouts that may lead to unmet need. Scientific databases and grey literature were searched, including the Database of Abstracts of Reviews of Effectiveness (DARE), PubMed, MEDLINE, POPLINE, CINAHL, Academic Search Complete, Science Direct, Web of Science, Cochrane Central, Google Scholar, WHO databases and websites of key international organisations. Studies indicated that supply chain system inefficiencies significantly affect the availability of modern family planning (FP) and contraception commodities in LMICs, especially in rural public facilities where distribution barriers may be acute. Supply chain failures or bottlenecks may be attributed to: weak and poorly institutionalized logistics management information systems (LMIS), poor physical infrastructure in LMICs, lack of trained and dedicated staff for supply chain management, inadequate funding, and rigid government policies on task sharing. However, there is evidence that implementing effective LMISs and involving public and private providers in distribution channels resulted in reductions in medical commodities' stockout rates. Supply chain bottlenecks contribute significantly to persistent high stockout rates for modern contraceptives in LMICs. Interventions aimed at enhancing uptake of contraceptives to reduce the problem of unmet need in LMICs should make strong commitments towards strengthening these countries' health commodity supply chain management systems. Current evidence is limited, and additional well-designed implementation research on contraception supply chain systems is warranted to gain further understanding of and insights into the determinants of supply chain bottlenecks and their impact on stockouts of contraception commodities.
Cell Phone-Based System (Chaak) for Surveillance of Immatures of Dengue Virus Mosquito Vectors
Lozano-Fuentes, Saul; Wedyan, Fadi; Hernandez-Garcia, Edgar; Sadhu, Devadatta; Ghosh, Sudipto; Bieman, James M.; Tep-Chel, Diana; García-Rejón, Julián E.; Eisen, Lars
2014-01-01
Capture of surveillance data on mobile devices and rapid transfer of such data from these devices into an electronic database or data management and decision support systems promote timely data analyses and public health response during disease outbreaks. Mobile data capture is used increasingly for malaria surveillance and holds great promise for surveillance of other neglected tropical diseases. We focused on mosquito-borne dengue, with the primary aims of: 1) developing and field-testing a cell phone-based system (called Chaak) for capture of data relating to the surveillance of the mosquito immature stages, and 2) assessing, in the dengue endemic setting of Mérida, México, the cost-effectiveness of this new technology versus paper-based data collection. Chaak includes a desktop component, where a manager selects premises to be surveyed for mosquito immatures, and a cell phone component, where the surveyor receives the assigned tasks and captures the data. Data collected on the cell phone can be transferred to a central database through different modes of transmission, including near-real time where data are transferred immediately (e.g., over the Internet) or by first storing data on the cell phone for future transmission. Spatial data are handled in a novel, semantically driven geographic information system. Compared with a pen-and-paper-based method, use of Chaak improved the accuracy and increased the speed of data transcription into an electronic database. The cost-effectiveness of using the Chaak system will depend largely on the up-front cost of purchasing cell phones and the recurring cost of data transfer over a cellular network. PMID:23926788
The new geographic information system in ETVA VI.PE.
NASA Astrophysics Data System (ADS)
Xagoraris, Zafiris; Soulis, George
2016-08-01
ETVA VI.PE. S.A. is a member of the Piraeus Bank Group of Companies and its activities include designing, developing, exploiting and managing Industrial Areas throughout Greece. Inside ETVA VI.PE.'s thirty-one Industrial Parks there are currently 2,500 manufacturing companies established, with 40,000 employees and € 2.5 billion of invested funds. In each of the industrial areas ETVA VI.PE. provides the companies with industrial lots of land (sites) with favourable building codes and complete infrastructure networks of water supply, sewerage, paved roads, power supply, communications, cleansing services, etc. The development of the Geographic Information System for ETVA VI.PE.'s Industrial Parks started at the beginning of 1992 and consists of three subsystems: Cadastre, which manages the information for the land acquisition of Industrial Areas; Street Layout - Sites, which manages the sites sold to manufacturing companies; and Networks, which manages the infrastructure networks (roads, water supply, sewerage etc). The mapping of each Industrial Park is made incorporating state-of-the-art photogrammetric, cartographic and surveying methods and techniques. Having passed through the phases of initial design (hybrid GIS) and system upgrade (integrated GIS solution with spatial database), the system is currently operating on a new upgrade (integrated GIS solution with spatial database) that includes redesigning and merging the system's database schemas, along with the creation of central security policies, and the development of a new web GIS application for advanced data entry, highly customisable and standard reports, and dynamic interactive maps. The new GIS brings the company to advanced levels of productivity and introduces a new era for decision making and business management.
Database Resources of the BIG Data Center in 2018
Xu, Xingjian; Hao, Lili; Zhu, Junwei; Tang, Bixia; Zhou, Qing; Song, Fuhai; Chen, Tingting; Zhang, Sisi; Dong, Lili; Lan, Li; Wang, Yanqing; Sang, Jian; Hao, Lili; Liang, Fang; Cao, Jiabao; Liu, Fang; Liu, Lin; Wang, Fan; Ma, Yingke; Xu, Xingjian; Zhang, Lijuan; Chen, Meili; Tian, Dongmei; Li, Cuiping; Dong, Lili; Du, Zhenglin; Yuan, Na; Zeng, Jingyao; Zhang, Zhewen; Wang, Jinyue; Shi, Shuo; Zhang, Yadong; Pan, Mengyu; Tang, Bixia; Zou, Dong; Song, Shuhui; Sang, Jian; Xia, Lin; Wang, Zhennan; Li, Man; Cao, Jiabao; Niu, Guangyi; Zhang, Yang; Sheng, Xin; Lu, Mingming; Wang, Qi; Xiao, Jingfa; Zou, Dong; Wang, Fan; Hao, Lili; Liang, Fang; Li, Mengwei; Sun, Shixiang; Zou, Dong; Li, Rujiao; Yu, Chunlei; Wang, Guangyu; Sang, Jian; Liu, Lin; Li, Mengwei; Li, Man; Niu, Guangyi; Cao, Jiabao; Sun, Shixiang; Xia, Lin; Yin, Hongyan; Zou, Dong; Xu, Xingjian; Ma, Lina; Chen, Huanxin; Sun, Yubin; Yu, Lei; Zhai, Shuang; Sun, Mingyuan; Zhang, Zhang; Zhao, Wenming; Xiao, Jingfa; Bao, Yiming; Song, Shuhui; Hao, Lili; Li, Rujiao; Ma, Lina; Sang, Jian; Wang, Yanqing; Tang, Bixia; Zou, Dong; Wang, Fan
2018-01-01
Abstract The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. PMID:29036542
NEIS (NASA Environmental Information System)
NASA Technical Reports Server (NTRS)
Cook, Beth
1995-01-01
The NASA Environmental Information System (NEIS) is a tool to support the functions of the NASA Operational Environment Team (NOET). The NEIS is designed to provide a central environmental technology resource drawing on all NASA centers' capabilities, and to support program managers who must ultimately deliver hardware compliant with performance specifications and environmental requirements. The NEIS also tracks environmental regulations, usages of materials and processes, and new technology developments. It has proven to be a useful instrument for channeling information throughout the aerospace community, NASA, other federal agencies, educational institutions, and contractors. The associated paper will discuss the dynamic databases within the NEIS, and the usefulness it provides for environmental compliance efforts.
Spiders and Camels and Sybase! Oh, My!
NASA Astrophysics Data System (ADS)
Barg, Irene; Ferro, Anthony J.; Stobie, Elizabeth
The Hubble Space Telescope NICMOS Guaranteed Time Observers (GTOs) requested a means of sharing point spread function (PSF) observations. Because of the specifics of the instrument, these PSFs are very useful in the analysis of observations and can vary with the conditions on the telescope. The GTOs are geographically diverse, so a centralized processing solution would not work. The individual PSF observations were reduced by different people, at different institutions, using different reduction software. These varied observations had to be combined into a single database and linked to other information as well. The NICMOS software group at the University of Arizona developed a solution based on a World Wide Web (WWW) interface, using Perl/CGI forms to query the submitter about the PSF data to be entered. After some semi-automated sanity checks, using the FTOOLS package, the metadata are then entered into a Sybase relational database system. A user of the system can then query the database, again through a WWW interface, to locate and retrieve PSFs which may match their observations, as well as determine other information regarding the telescope conditions at the time of the observations (e.g., the breathing parameter). This presentation discusses some of the driving forces in the design, problems encountered, and the choices made. The tools used, including Sybase, Perl, FTOOLS, and WWW elements are also discussed.
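As a rough illustration of the metadata store and query workflow described above, the sketch below uses SQLite standing in for Sybase; the schema and values are invented for illustration, since the actual PSF metadata fields are not specified here.

```python
import sqlite3

# Rough sketch of the metadata store and query workflow, with SQLite standing
# in for Sybase; the schema and values are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE psf (
    filename TEXT, camera INTEGER, filt TEXT,
    obs_date TEXT, breathing REAL, submitter TEXT)""")

# A submitter's entry, added after the semi-automated sanity checks pass.
conn.execute("INSERT INTO psf VALUES (?, ?, ?, ?, ?, ?)",
             ("n4hk12010_psf.fits", 2, "F110W", "1998-03-14", 0.7, "UofA"))

# A user query: locate PSFs matching an observation's camera and filter.
for row in conn.execute(
        "SELECT filename, obs_date, breathing FROM psf "
        "WHERE camera = ? AND filt = ? ORDER BY obs_date", (2, "F110W")):
    print(row)
```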
47 CFR 69.306 - Central office equipment (COE).
Code of Federal Regulations, 2010 CFR
2010-10-01
... exchange carrier's signalling transfer point and the database shall be assigned to the Line Information Database subelement at § 69.120(a). All other COE Category 2 shall be assigned to the interexchange... requirement. Non-price cap local exchange carriers may use thirty percent of the interstate Local Switching...
The BioMart community portal: an innovative alternative to large, centralized data repositories
USDA-ARS?s Scientific Manuscript database
The BioMart Community Portal (www.biomart.org) is a community-driven effort to provide a unified interface to biomedical databases that are distributed worldwide. The portal provides access to numerous database projects supported by 30 scientific organizations. It includes over 800 different biologi...
48 CFR 19.703 - Eligibility requirements for participating in the program.
Code of Federal Regulations, 2012 CFR
2012-10-01
... or Small Business Administration certification status of the ANC or Indian tribe. (ii) Where one or... accessing the Central Contractor Registration (CCR) database or by contacting the SBA. Options for contacting the SBA include— (i) HUBZone small business database search application Web page at http://dsbs...
48 CFR 19.703 - Eligibility requirements for participating in the program.
Code of Federal Regulations, 2011 CFR
2011-10-01
... or Small Business Administration certification status of the ANC or Indian tribe. (ii) Where one or... accessing the Central Contractor Registration (CCR) database or by contacting the SBA. Options for contacting the SBA include— (i) HUBZone small business database search application Web page at http://dsbs...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roger Mayes; Sera White; Randy Lee
2005-04-01
Selenium is present in waste rock/overburden that is removed during phosphate mining in southeastern Idaho. Waste rock piles or rock used during reclamation can be a source of selenium (and other metals) to streams and vegetation. Some instances (in 1996) of selenium toxicity in grazing sheep and horses caused public health and environmental concerns, leading to Idaho Department of Environmental Quality (DEQ) involvement. The Selenium Information System Project is a collaboration among the DEQ, the United States Forest Service (USFS), the Bureau of Land Management (BLM), the Idaho Mining Association (IMA), Idaho State University (ISU), and the Idaho National Laboratory (INL). The Selenium Information System is a centralized data repository for southeastern Idaho selenium data. The data repository combines information that was previously in numerous agency, mining company, and consultants' databases and web sites. These data include selenium concentrations in soil, water, sediment, vegetation and other environmental media, as well as comprehensive mine information. The Idaho DEQ spearheaded a selenium area-wide investigation through voluntary agreements with the mining companies and interagency participants. The Selenium Information System contains the results of that area-wide investigation, and many other background documents. As studies are conducted and remedial action decisions are made, the resulting data and documentation will be stored within the information system. Potential users of the information system are agency officials, students, lawmakers, mining company personnel, teachers, researchers, and the general public. The system, available from a central website, consists of a database that contains the area-wide sampling information and an ESRI ArcIMS map server. The user can easily acquire information pertaining to the area-wide study as well as the final area-wide report. Future work on this project includes creating custom tools to increase the simplicity of the website and increasing the amount of information available from site-specific studies at 15 mines.
Transcriptome Analysis of the Octopus vulgaris Central Nervous System
Zhang, Xiang; Mao, Yong; Huang, Zixia; Qu, Meng; Chen, Jun; Ding, Shaoxiong; Hong, Jingni; Sun, Tiantian
2012-01-01
Background Cephalopoda are a class of Mollusca species found in all the world's oceans. They are an important model organism in neurobiology. Unfortunately, the lack of neuronal molecular sequences, such as ESTs, transcriptomic or genomic information, has limited the development of molecular neurobiology research in this unique model organism. Results With high-throughput Illumina Solexa sequencing technology, we have generated 59,859 high quality sequences from 12,918,391 paired-end reads. Using BLASTx/BLASTn, 12,227 contigs have blast hits in the Swissprot, NR protein database and NT nucleotide database with E-value cutoff 1e−5. The comparison between the Octopus vulgaris central nervous system (CNS) library and the Aplysia californica/Lymnaea stagnalis CNS ESTs library yielded 5.93%/13.45% of O. vulgaris sequences with significant matches (1e−5) using BLASTn/tBLASTx. Meanwhile the hit percentage of the recently published Schistocerca gregaria, Tilapia or Hirudo medicinalis CNS library to the O. vulgaris CNS library is 21.03%–46.19%. We constructed the Phylogenetic tree using two genes related to CNS function, Synaptotagmin-7 and Synaptophysin. Lastly, we demonstrated that O. vulgaris may have a vertebrate-like Blood-Brain Barrier based on bioinformatic analysis. Conclusion This study provides a mass of molecular information that will contribute to further molecular biology research on O. vulgaris. In our presentation of the first CNS transcriptome analysis of O. vulgaris, we hope to accelerate the study of functional molecular neurobiology and comparative evolutionary biology. PMID:22768275
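The BLASTx/BLASTn filtering step above reduces to keeping hits whose E-value falls below the cutoff. A minimal sketch over BLAST's tabular output format (-outfmt 6, where the E-value is the 11th column); the input file name is hypothetical.

```python
# Illustrative filter over BLAST tabular output (-outfmt 6), where the
# E-value is the 11th column; the input file name is hypothetical.
CUTOFF = 1e-5

def hits_below(path, cutoff=CUTOFF):
    kept = []
    with open(path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            query, subject, evalue = fields[0], fields[1], float(fields[10])
            if evalue <= cutoff:
                kept.append((query, subject, evalue))
    return kept

# matches = hits_below("contigs_vs_swissprot.tab")
# print(len(matches), "contigs with hits at E <=", CUTOFF)
```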
NASA Technical Reports Server (NTRS)
Burns, Lee; Decker, Ryan
2005-01-01
Lightning strike location and peak current are monitored operationally in the Kennedy Space Center (KSC) Cape Canaveral Air Force Station (CCAFS) area by the Cloud-to-Ground Lightning Surveillance System (CGLSS). The present study compiles ten years' worth of CGLSS data into a database of near strikes. Using shuttle launch platform LP39A as a convenient central point, all strikes recorded within a 20-mile radius for the period of record (POR) from January 1, 1993 to December 31, 2002 were included in the subset database. Histograms and cumulative probability curves are produced for both strike intensity (peak current, in kA) and the corresponding magnetic inductance fields (in A/m). Results for the full POR have application to launch operations lightning monitoring and post-strike test procedures.
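As a hedged illustration of the summary statistics named above (histograms and cumulative probability curves of peak current), the following sketch uses synthetic placeholder values, not CGLSS data.

```python
import numpy as np

# Sketch of the summary statistics named above: a histogram and an empirical
# cumulative probability curve for strike peak current (kA). The sample is
# synthetic, generated only to illustrate the computation (not CGLSS data).
peak_current_ka = np.random.lognormal(mean=3.0, sigma=0.6, size=10000)

counts, edges = np.histogram(peak_current_ka, bins=50)        # histogram
sorted_ka = np.sort(peak_current_ka)
cum_prob = np.arange(1, sorted_ka.size + 1) / sorted_ka.size  # empirical CDF

# e.g., the peak current not exceeded in 95% of strikes:
print("95th percentile: %.1f kA" % np.percentile(peak_current_ka, 95))
```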
Using SIR (Scientific Information Retrieval System) for data management during a field program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tichler, J.L.
As part of the US Department of Energy's program, PRocessing of Emissions by Clouds and Precipitation (PRECP), a team of scientists from four laboratories conducted a study in north central New York State to characterize the chemical and physical processes occurring in winter storms. Sampling took place from three aircraft, two instrumented motor homes and a network of 26 surface precipitation sampling sites. Data management personnel were part of the field program, using a portable IBM PC-AT computer to enter information as it became available during the field study. Having the same database software on the field computer and on the cluster of VAX 11/785 computers in use aided database development and the transfer of data between machines. 2 refs., 3 figs., 5 tabs.
[Virtual clinical diagnosis support system of degenerative stenosis of the lumbar spinal canal].
Shevelev, I N; Konovalov, N A; Cherkashov, A M; Molodchenkov, A A; Sharamko, T G; Asiutin, D S; Nazarenko, A G
2013-01-01
The aim of the study was to develop a virtual clinical diagnostic support system for degenerative lumbar spinal stenosis based on a spine registry database. Criteria for the diagnostic system were chosen based on symptom analysis of 298 patients with lumbar spinal stenosis. A group of patients with disc herniation was also analyzed to assess the sensitivity and specificity of the developed diagnostic support system. The clinical diagnostic support system allows identification of patients with degenerative lumbar spinal stenosis at the stage of the patient's primary visit. System sensitivity and specificity are 90% and 71%, respectively. The "online" mode of the diagnostic system within the spine registry provides maximal availability to specialists, regardless of their location. The development of "medicine 2.0" tools, aided by centralized data collection through specialized registries, is a promising direction for further research.
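For reference, the reported sensitivity and specificity reduce to simple ratios over the diagnostic confusion matrix. A small sketch with invented counts, chosen only to reproduce the reported 90% and 71%:

```python
# How the reported figures reduce to ratios over the diagnostic confusion
# matrix; the counts below are invented solely to reproduce 90% and 71%.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # stenosis cases correctly flagged
    specificity = tn / (tn + fp)  # non-stenosis cases correctly cleared
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=180, fn=20, tn=71, fp=29)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # 90%, 71%
```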
The Danish Cardiac Rehabilitation Database.
Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne; Foghmar, Sussie; Eichhorst, Regina; Prescott, Eva; Cerqueira, Charlotte; Soja, Anne Merete Boas; Gislason, Gunnar H; Larsen, Mogens Lytken; Andersen, Ulla Overgaard; Gustafsson, Ida; Thomsen, Kristian K; Boye Hansen, Lene; Hammer, Signe; Viggers, Lone; Christensen, Bo; Kvist, Birgitte; Lindström Egholm, Cecilie; May, Ole
2016-01-01
The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). Hospitalized patients with CHD with stenosis on coronary angiography treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and was fully running from August 14, 2015, thus comprising data at a patient level from the latter date onward. Patient-level data are registered by clinicians at the time of entry to CR directly into an online system with simultaneous linkage to other central patient registers. Follow-up data are entered after 6 months. The main variables collected are related to key outcome and performance indicators of CR: referral and adherence, lifestyle, patient-related outcome measures, risk factor control, and medication. Program-level online data are collected every third year. Based on administrative data, approximately 14,000 patients with CHD are hospitalized at 35 hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. The DHRD is an online, national quality improvement database on CR, aimed at patients with CHD. Mandatory registration of data at both patient level as well as program level is done on the database. DHRD aims to systematically monitor the quality of CR over time, in order to improve the quality of CR throughout Denmark to benefit patients.
Peng, Jinye; Babaguchi, Noboru; Luo, Hangzai; Gao, Yuli; Fan, Jianping
2010-07-01
Digital video now plays an important role in supporting more profitable online patient training and counseling, and integration of patient training videos from multiple competitive organizations in the health care network will result in better offerings for patients. However, privacy concerns often prevent multiple competitive organizations from sharing and integrating their patient training videos. In addition, patients with infectious or chronic diseases may not want the online patient training organizations to identify who they are or even which video clips they are interested in. Thus, there is an urgent need to develop more effective techniques to protect both video content privacy and access privacy. In this paper, we have developed a new approach to construct a distributed Hippocratic video database system for supporting more profitable online patient training and counseling. First, a new database modeling approach is developed to support concept-oriented video database organization and assign a degree of privacy of the video content for each database level automatically. Second, a new algorithm is developed to protect the video content privacy at the level of individual video clips by filtering out the privacy-sensitive human objects automatically. In order to integrate the patient training videos from multiple competitive organizations for constructing a centralized video database indexing structure, a privacy-preserving video sharing scheme is developed to support privacy-preserving distributed classifier training and prevent statistical inferences from the videos that are shared for cross-validation of video classifiers. Our experiments on large-scale video databases have also provided very convincing results.
Follicle Online: an integrated database of follicle assembly, development and ovulation.
Hua, Juan; Xu, Bo; Yang, Yifan; Ban, Rongjun; Iqbal, Furhan; Cooke, Howard J; Zhang, Yuanwei; Shi, Qinghua
2015-01-01
Folliculogenesis is an important part of ovarian function as it provides the oocytes for female reproductive life. Characterizing genes/proteins involved in folliculogenesis is fundamental for understanding the mechanisms associated with this biological function and to cure the diseases associated with folliculogenesis. A large number of genes/proteins associated with folliculogenesis have been identified from different species. However, no dedicated public resource is currently available for folliculogenesis-related genes/proteins that are validated by experiments. Here, we are reporting a database 'Follicle Online' that provides the experimentally validated gene/protein map of folliculogenesis in a number of species. Follicle Online is a web-based database system for storing and retrieving folliculogenesis-related experimental data. It provides detailed information for 580 genes/proteins (from 23 model organisms, including Homo sapiens, Mus musculus, Rattus norvegicus, Mesocricetus auratus, Bos Taurus, Drosophila and Xenopus laevis) that have been reported to be involved in folliculogenesis, POF (premature ovarian failure) and PCOS (polycystic ovary syndrome). The literature was manually curated from more than 43,000 published articles (till 1 March 2014). The Follicle Online database is implemented in PHP + MySQL + JavaScript and this user-friendly web application provides access to the stored data. In summary, we have developed a centralized database that provides users with comprehensive information about genes/proteins involved in folliculogenesis. This database can be accessed freely and all the stored data can be viewed without any registration. Database URL: http://mcg.ustc.edu.cn/sdap1/follicle/index.php
Dvorakova, Antonie
2016-12-01
When Hall, Yip, and Zárate (2016) suggested that cultural psychology focused on reporting differences between groups, they described comparative research conducted in other fields, including cross-cultural psychology. Cultural psychology is a different discipline with methodological approaches reflecting its dissimilar goal, which is to highlight the cultural grounding of human psychological characteristics, and ultimately make culture central to psychology in general. When multicultural psychology considers, according to Hall et al., the mechanisms of culture's influence on behavior, it treats culture the same way as cross-cultural psychology does. In contrast, cultural psychology goes beyond treating culture as an external variable when it proposes that culture and psyche are mutually constitutive. True psychology of the human experience must encompass world populations through research of the ways in which (a) historically grounded sociocultural contexts enable the distinct meaning systems that people construct, and (b) these systems simultaneously guide the human formation of the environments. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Embedded palmprint recognition system using OMAP 3530.
Shen, Linlin; Wu, Shipei; Zheng, Songhao; Ji, Zhen
2012-01-01
We have proposed in this paper an embedded palmprint recognition system using the dual-core OMAP 3530 platform. An improved algorithm based on palm code was proposed first. In this method, a Gabor wavelet is first convolved with the palmprint image to produce a response image, where local binary patterns are then applied to encode the relation between the magnitude of the wavelet response at the central pixel and that of its neighbors. The method is fully tested using the public PolyU palmprint database. While palm code achieves only about 89% accuracy, over 96% accuracy is achieved by the proposed G-LBP approach. The proposed algorithm was then deployed to the DSP processor of the OMAP 3530 to work together with the ARM processor for feature extraction. While the complicated algorithms run on the DSP processor, the ARM processor can focus on image capture, the user interface and peripheral control. Integrated with an image sensing module and a central processing board, the designed device achieves accurate and real-time performance.
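The LBP coding step described above compares each pixel's Gabor-response magnitude with its eight neighbours to form an 8-bit code. A minimal NumPy sketch of that step follows; the Gabor filtering itself is omitted and replaced by a placeholder magnitude image, so this is an illustration of the coding idea rather than the paper's exact G-LBP algorithm.

```python
import numpy as np

# Sketch of the coding step described above: each pixel's Gabor-response
# magnitude is compared with its 8 neighbours to form an 8-bit LBP code.
# The Gabor filtering itself is omitted; `mag` stands in for its magnitude.
def lbp8(mag):
    h, w = mag.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = mag[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = mag[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

mag = np.abs(np.random.randn(128, 128))  # placeholder for |Gabor response|
print(lbp8(mag).shape)  # (126, 126)
```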
Devarbhavi, Harshad; Andrade, Raúl J
2014-05-01
Antimicrobial agents including antituberculosis (anti-TB) agents are the most common cause of idiosyncratic drug-induced liver injury (DILI) and drug-induced liver failure across the world. Better molecular and genetic biomarkers are acutely needed to help identify those at risk of liver injury, particularly those needing antituberculosis therapy. Some antibiotics such as amoxicillin-clavulanate and isoniazid consistently top the lists of agents in retrospective and prospective DILI databases. Central nervous system agents, particularly antiepileptics, account for the second most common class of agents implicated in DILI registries. Hepatotoxicity from older antiepileptics such as carbamazepine, phenytoin, and phenobarbital is often associated with hypersensitivity features, whereas newer antiepileptic drugs have a more favorable safety profile. Antidepressants and nonsteroidal anti-inflammatory drugs carry very low risk of significant liver injury, but their prolific use makes them important causes of DILI. Early diagnosis and withdrawal of the offending agent remain the mainstays of minimizing hepatotoxicity.
The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity
NASA Astrophysics Data System (ADS)
Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Favaro, Matteo; Gregori, Daniele; Prosperini, Andrea; Pezzi, Michele; Sapunenko, Vladimir; Zizzi, Giovanni; Vagnoni, Vincenzo
2015-05-01
The consolidation of Mass Storage services at the INFN-CNAF Tier1 Storage department that has occurred during the last 5 years resulted in a reliable, high-performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based upon an integration between the IBM GPFS parallel filesystem and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided by XRootD and HTTP/WebDAV endpoints. Besides these services, an Oracle database facility is in production, characterized by an effective level of parallelism, redundancy and availability. This facility is running databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, like Real Application Cluster (RAC), Automatic Storage Manager (ASM) and Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management and downtime reduction. The aim of the present paper is to illustrate the state-of-the-art of the INFN-CNAF Tier1 Storage department infrastructures and software services, and to give a brief outlook on forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.
The Astrobiology Habitable Environments Database (AHED)
NASA Astrophysics Data System (ADS)
Lafuente, B.; Stone, N.; Downs, R. T.; Blake, D. F.; Bristow, T.; Fonda, M.; Pires, A.
2015-12-01
The Astrobiology Habitable Environments Database (AHED) is a central, high quality, long-term searchable repository for archiving and collaborative sharing of astrobiologically relevant data, including morphological, textural and contextual images, and chemical, biochemical, isotopic, sequencing, and mineralogical information. The aim of AHED is to foster long-term innovative research by supporting integration and analysis of diverse datasets in order to: 1) help understand and interpret planetary geology; 2) identify and characterize habitable environments and pre-biotic/biotic processes; 3) interpret returned data from present and past missions; 4) provide a citable database of NASA-funded published and unpublished data (after an agreed-upon embargo period). AHED uses the online open-source software "The Open Data Repository's Data Publisher" (ODR - http://www.opendatarepository.org) [1], which provides a user-friendly interface that research teams or individual scientists can use to design, populate and manage their own database according to the characteristics of their data and the need to share data with collaborators or the broader scientific community. This platform can also be used as a laboratory notebook. The database will have the capability to import and export in a variety of standard formats. Advanced graphics will be implemented, including 3D graphing, multi-axis graphs, error bars, and similar scientific data functions, together with advanced online tools for data analysis (e.g., the statistical package R). A permissions system will be put in place so that as data are being actively collected and interpreted, they will remain proprietary. A citation system will allow research data to be used and appropriately referenced by other researchers after the data are made public. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, Mars Science Laboratory Investigations. [1] Nate et al. (2015) AGU, submitted.
Li, Cheng-Wei; Chen, Bor-Sen
2016-10-01
Recent studies have demonstrated that cell cycle plays a central role in development and carcinogenesis. Thus, the use of big databases and genome-wide high-throughput data to unravel the genetic and epigenetic mechanisms underlying cell cycle progression in stem cells and cancer cells is a matter of considerable interest. Real genetic-and-epigenetic cell cycle networks (GECNs) of embryonic stem cells (ESCs) and HeLa cancer cells were constructed by applying system modeling, system identification, and big database mining to genome-wide next-generation sequencing data. Real GECNs were then reduced to core GECNs of HeLa cells and ESCs by applying principal genome-wide network projection. In this study, we investigated potential carcinogenic and stemness mechanisms for systems cancer drug design by identifying common core and specific GECNs between HeLa cells and ESCs. Integrating drug database information with the specific GECNs of HeLa cells could lead to identification of multiple drugs for cervical cancer treatment with minimal side-effects on the genes in the common core. We found that dysregulation of miR-29C, miR-34A, miR-98, and miR-215; and methylation of ANKRD1, ARID5B, CDCA2, PIF1, STAMBPL1, TROAP, ZNF165, and HIST1H2AJ in HeLa cells could result in cell proliferation and anti-apoptosis through NFκB, TGF-β, and PI3K pathways. We also identified 3 drugs, methotrexate, quercetin, and mimosine, which repressed the activated cell cycle genes, ARID5B, STK17B, and CCL2, in HeLa cells with minimal side-effects.
Senatore, Adriano; Edirisinghe, Neranjan; Katz, Paul S.
2015-01-01
Background The sea slug Tritonia diomedea (Mollusca, Gastropoda, Nudibranchia) has a simple and highly accessible nervous system, making it useful for studying neuronal and synaptic mechanisms underlying behavior. Although many important contributions have been made using Tritonia, until now a lack of genetic information has impeded exploration at the molecular level. Results We performed Illumina sequencing of central nervous system mRNAs from Tritonia, generating 133.1 million 100 base pair, paired-end reads. De novo reconstruction of the RNA-Seq data yielded a total of 185,546 contigs, which partitioned into 123,154 non-redundant gene clusters (unigenes). BLAST comparison with RefSeq and Swiss-Prot protein databases, as well as mRNA data from other invertebrates (gastropod molluscs: Aplysia californica, Lymnaea stagnalis and Biomphalaria glabrata; cnidarian: Nematostella vectensis) revealed that up to 76,292 unigenes in the Tritonia transcriptome have putative homologues in other databases, 18,246 of which are below a more stringent E-value cut-off of 1 × 10−6. In silico prediction of secreted proteins from the Tritonia transcriptome shotgun assembly (TSA) produced a database of 579 unique sequences of secreted proteins, which also exhibited markedly higher expression levels compared to other genes in the TSA. Conclusions Our efforts greatly expand the availability of gene sequences for Tritonia diomedea. We were able to extract full-length protein sequences for most queried genes, including those involved in electrical excitability, synaptic vesicle release and neurotransmission, thus confirming that the transcriptome will serve as a useful tool for probing the molecular correlates of behavior in this species. We also generated a neurosecretome database that will serve as a useful tool for probing peptidergic signalling systems in the Tritonia brain. PMID:25719197
A Solution on Identification and Rearing Files in Smallhold Pig Farming
NASA Astrophysics Data System (ADS)
Xiong, Benhai; Fu, Runting; Lin, Zhaohui; Luo, Qingyao; Yang, Liang
In order to meet government supervision of pork production safety as well as the consumer's right to know what they buy, this study adopts animal identification, mobile PDA readers, GPRS and other information technologies, puts forward a data collection method to set up rearing files for pigs in smallhold pig farming, designs the related metadata structures and a mobile database, and develops an embedded mobile PDA system to collect individual pig information and upload it to the remote central database, finally realizing mobile links to a specific website. The embedded PDA can identify, with its mobile reader, both the special pig bar ear tag appointed by the Ministry of Agriculture and a general Data Matrix bar ear tag designed in this study, and can record all kinds of input data, including bacterins, feed additives, animal drugs and even some forbidden medicines, and submit them to the central database through GPRS. At the same time, the remote central database can be maintained by mobile PDA over GPRS, finally achieving pork tracking from its origin to consumption and tracing in the reverse direction. This study suggests a feasible technical solution for setting up networked electronic pig rearing files for smallhold, farmer-based pig farming, and the solution has proved practical through its application in the construction of Tianjin's pork quality traceability system. Although some individual techniques, such as the current GPRS transmission speed, have some adverse effects on system performance, these will be resolved with the development of communication technology. Full implementation of the solution across China will provide technical support for supervising the quality and safety of pork production and for meeting consumer demand.
Kessel, K A; Habermehl, D; Bohn, C; Jäger, A; Floca, R O; Zhang, L; Bougatf, N; Bendl, R; Debus, J; Combs, S E
2012-12-01
Especially in the field of radiation oncology, handling a large variety of voluminous datasets from various information systems in different documentation styles efficiently is crucial for patient care and research. To date, conducting retrospective clinical analyses is rather difficult and time consuming. With the example of patients with pancreatic cancer treated with radio-chemotherapy, we performed a therapy evaluation by using an analysis system connected with a documentation system. A total number of 783 patients have been documented into a professional, database-based documentation system. Information about radiation therapy, diagnostic images and dose distributions have been imported into the web-based system. For 36 patients with disease progression after neoadjuvant chemoradiation, we designed and established an analysis workflow. After an automatic registration of the radiation plans with the follow-up images, the recurrence volumes are segmented manually. Based on these volumes the DVH (dose volume histogram) statistic is calculated, followed by the determination of the dose applied to the region of recurrence. All results are saved in the database and included in statistical calculations. The main goal of using an automatic analysis tool is to reduce time and effort conducting clinical analyses, especially with large patient groups. We showed a first approach and use of some existing tools, however manual interaction is still necessary. Further steps need to be taken to enhance automation. Already, it has become apparent that the benefits of digital data management and analysis lie in the central storage of data and reusability of the results. Therefore, we intend to adapt the analysis system to other types of tumors in radiation oncology.
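The DVH statistic at the core of this workflow is a cumulative histogram: for each dose level, the fraction of the segmented recurrence volume receiving at least that dose. A hedged sketch follows, with synthetic inputs standing in for the registered dose grid and the manual segmentation.

```python
import numpy as np

# Sketch of the cumulative DVH statistic: for each dose level, the fraction
# of the recurrence volume receiving at least that dose. `dose` and `mask`
# are synthetic placeholders for the registered dose grid and segmentation.
def cumulative_dvh(dose, mask, bins=100):
    d = dose[mask]                          # dose values inside the volume
    levels = np.linspace(0.0, d.max(), bins)
    volume_fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_fraction

dose = np.random.gamma(shape=20, scale=2.5, size=(60, 60, 40))  # Gy, synthetic
mask = np.zeros(dose.shape, dtype=bool)
mask[20:40, 20:40, 10:30] = True            # segmented recurrence volume
levels, vf = cumulative_dvh(dose, mask)
print("D95 (dose covering 95%% of the volume): %.1f Gy" % levels[vf >= 0.95][-1])
```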
A central database for the Global Terrestrial Network for Permafrost (GTN-P)
NASA Astrophysics Data System (ADS)
Elger, Kirsten; Lanckman, Jean-Pierre; Lantuit, Hugues; Karlsson, Ævar Karl; Johannsson, Halldór
2013-04-01
The Global Terrestrial Network for Permafrost (GTN-P) is the primary international observing network for permafrost, sponsored by the Global Climate Observing System (GCOS) and the Global Terrestrial Observing System (GTOS) and managed by the International Permafrost Association (IPA). It monitors the Essential Climate Variable (ECV) permafrost, which consists of permafrost temperature and active-layer thickness, with the long-term goal of obtaining a comprehensive view of the spatial structure, trends, and variability of changes in the active layer and permafrost. The network's two international monitoring components are (1) CALM (Circumpolar Active Layer Monitoring) and (2) the Thermal State of Permafrost (TSP), which is made up of an extensive borehole network covering all permafrost regions. Both programs were thoroughly overhauled during the International Polar Year 2007-2008 and extended their coverage to provide a true circumpolar network stretching over both hemispheres. GTN-P has gained considerable visibility in the science community by providing the baseline against which models are globally validated and incorporated in climate assessments. Yet it has until now been operated on a voluntary basis, and is being redesigned to meet the increasing expectations of the science community. To update the network's objectives and deliver the best possible products to the community, the IPA organized a workshop to define users' needs and requirements for the production, archival, storage and dissemination of the permafrost data products it manages. From the beginning, GTN-P data came with an open data policy offering free data access via the World Wide Web. The existing data, however, are far from homogeneous: they are not yet optimized for databases, there is no framework for data reporting or archival, and data documentation is incomplete. As a result, and despite the utmost relevance of permafrost in the Earth's climate system, the data have not been used by as many researchers as intended by the initiators of these global programs. The European Union project PAGE21 created opportunities to develop this central database for GTN-P data during the duration of the project and beyond. The database aims to be the one location where a researcher can find data, metadata and information on all relevant parameters for a specific site. Each component of the Data Management System (DMS), including parameters, data levels and metadata formats, was developed in cooperation with GTN-P and the IPA. The general framework of the GTN-P DMS is based on an object-oriented model (OOM) and implemented in a spatial database. To ensure interoperability and enable potential inter-database search, field names follow international metadata standards. The outputs of the DMS will be tailored to the needs of the modeling community as well as those of other stakeholders. In particular, new products will be developed in partnership with the IPA and other relevant international organizations to raise awareness of permafrost in the policy-making arena. The DMS will be released to a broader public in May 2013, and we expect to have the first active data upload - via an online interface - after 2013's summer field season.
Vandevijvere, Stefanie; Williams, Rachel; Tawfiq, Essa; Swinburn, Boyd
2017-11-14
This study developed a systems-based approach (called FoodBack) to empower citizens and change agents to create healthier community food places. Formative evaluations were held with citizens and change agents in six diverse New Zealand communities, supplemented by semi-structured interviews with 85 change agents in Auckland and Hamilton in 2015-2016. The emerging system was additionally reviewed by public health experts from diverse organizations. A food environments feedback system was constructed to crowdsource key indicators of the healthiness of diverse community food places (i.e. schools, hospitals, supermarkets, fast food outlets, sport centers) and outdoor spaces (i.e. around schools), comments/pictures about barriers and facilitators to healthy eating and exemplar stories on improving the healthiness of food environments. All the information collected is centrally processed and translated into 'short' (immediate) and 'long' (after analyses) feedback loops to stimulate actions to create healthier food places. FoodBack, as a comprehensive food environment feedback system (with evidence databases and feedback and recognition processes), has the potential to increase food sovereignty and generate a sustainable, fine-grained database of food environments for real-time food policy research.
A New Way of Thinking About Strategic Sourcing
2016-05-17
[Photo caption: ...Battalion-Kandahar, 401st Army Field Support Brigade, organizes laundry at one of the battalion's drop-off sites. (Photo by Sharonda Pearson)] ...integrated process for determining preferred providers, and create a centralized market research database. A centralized strategic sourcing hub also...
NOAA Photo Library. Credits: Skip Theberge (NOAA Central Library) -- collection development, site content, image digitization, and database construction; Kristin Ward (NOAA Central Library) -- HTML page construction. Without the generosity...
EPA’s National Center for Computational Toxicology is engaged in high-profile research efforts to improve the ability to more efficiently and effectively prioritize and screen thousands of environmental chemicals for potential toxicity. A central component of these efforts invol...
A Data Management Framework for Real-Time Water Quality Monitoring
NASA Astrophysics Data System (ADS)
Mulyono, E.; Yang, D.; Craig, M.
2007-12-01
CSU East Bay operates two in-situ, near-real-time water quality monitoring stations in San Francisco Bay as a member of the Center for Integrative Coastal Ocean Observation, Research, and Education (CICORE) and the Central and Northern California Ocean Observing System (CeNCOOS). We have been operating stations at Dumbarton Pier and San Leandro Marina for the past two years. At each station, a sonde measures seven water quality parameters every six minutes. During the first year of operation, we retrieved data from the sondes every few weeks by visiting the sites and uploading data to a handheld logger. Last year we implemented a telemetry system utilizing a cellular CDMA modem to transfer data from the field to our data center on an hourly basis. Data from each station are initially stored in monthly files in native format. We import data from these files into a SQL database every hour. SQL is handled by Django, an open source web framework. Django provides a user-friendly web user interface (UI) to administer the data. We utilized parts of the Django UI for our database web front end, which allows users to access our database via the World Wide Web and perform basic queries. We also serve our data to other aggregating sites, including the central CICORE website and NOAA's National Data Buoy Center (NDBC). Since Django is written in Python, it allows us to integrate other Python modules into our software, such as the matplotlib library for scientific graphics. We store our code in a Subversion repository, which keeps track of software revisions. Code is tested using Python's unittest and doctest modules within Django's testing facility, which warns us when our code modifications cause other parts of the software to break. During the past two years of data acquisition, we have incrementally updated our data model to accommodate changes in physical hardware, including equipment moves, instrument replacements, and sensor upgrades that affected data format.
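The hourly import step described above (native-format monthly files loaded into a SQL table) might look roughly like the following sketch; the column layout, file name and station label are hypothetical, and SQLite stands in for the production database.

```python
import csv
import sqlite3

# Sketch of the hourly import step: rows from a monthly native-format file
# are loaded into a SQL table. The column layout, file name and station
# label are hypothetical; the real sonde format will differ.
def import_readings(conn, path, station):
    with open(path, newline="") as fh:
        for timestamp, *params in csv.reader(fh):   # 7 parameters per row
            conn.execute(
                "INSERT OR IGNORE INTO reading VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
                (station, timestamp, *[float(p) for p in params]))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE reading (
    station TEXT, ts TEXT, temp REAL, sal REAL, ph REAL, do2 REAL,
    turb REAL, depth REAL, chl REAL, PRIMARY KEY (station, ts))""")
# import_readings(conn, "dumbarton_2007_08.csv", station="Dumbarton")
```

The INSERT OR IGNORE plus the (station, ts) primary key makes the hourly import idempotent, so re-reading a monthly file does not duplicate rows.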
SuperPain—a resource on pain-relieving compounds targeting ion channels
Gohlke, Björn O.; Preissner, Robert; Preissner, Saskia
2014-01-01
Pain is more than an unpleasant sensory experience associated with actual or potential tissue damage: it is the most common reason for physician consultation and often dramatically affects quality of life. The management of pain is often difficult and new targets are required for more effective and specific treatment. SuperPain (http://bioinformatics.charite.de/superpain/) is a freely available database for pain-stimulating and pain-relieving compounds, which bind or potentially bind to ion channels that are involved in the transmission of pain signals to the central nervous system, such as TRPV1, TRPM8, TRPA1, TREK1, TRESK, hERG, ASIC, P2X and voltage-gated sodium channels. The database consists of ∼8700 ligands, which are characterized by experimentally measured binding affinities. Additionally, 100 000 putative ligands are included. Moreover, the database provides 3D structures of receptors and predicted ligand-binding poses. These binding poses and a structural classification scheme provide hints for the design of new analgesic compounds. A user-friendly graphical interface allows similarity searching, visualization of ligands docked into the receptor, etc. PMID:24271391
Designing Reliable Cohorts of Cardiac Patients across MIMIC and eICU
Chronaki, Catherine; Shahin, Abdullah; Mark, Roger
2016-01-01
The design of the patient cohort is an essential and fundamental part of any clinical patient study. Knowledge of the Electronic Health Records, the underlying Database Management System, and the relevant clinical workflows is central to an effective cohort design. However, with technical, semantic, and organizational interoperability limitations, the database queries associated with a patient cohort may need to be reconfigured in every participating site. i2b2 and SHRINE advance the notion of patient cohorts as first-class objects to be shared, aggregated, and recruited for research purposes across clinical sites. This paper reports on initial efforts to assess the integration of the Medical Information Mart for Intensive Care (MIMIC) and Philips eICU, two large-scale anonymized intensive care unit (ICU) databases, using standard terminologies, i.e. LOINC, ICD9-CM and SNOMED-CT. The focus of this work is on lab and microbiology observations and key demographics for patients with a primary cardiovascular ICD9-CM diagnosis. Results and discussion, reflecting on core reference terminology standards, offer insights on efforts to combine detailed intensive care data from multiple ICUs worldwide. PMID:27774488
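A cohort along the lines described (primary cardiovascular ICD-9-CM diagnosis) can be expressed as a single query. The sketch below follows the public MIMIC-III table layout (diagnoses_icd with subject_id, hadm_id, seq_num, icd9_code) but should be verified against the actual database release; the rows inserted here are fabricated stand-ins so the example is self-contained.

```python
import sqlite3

# Illustrative cohort query: patients whose primary (seq_num = 1) ICD-9-CM
# diagnosis falls in the circulatory-system range 390-459. Table and column
# names follow the public MIMIC-III layout but should be verified against
# the release in use; the rows inserted here are fabricated stand-ins.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE diagnoses_icd (subject_id INTEGER, "
             "hadm_id INTEGER, seq_num INTEGER, icd9_code TEXT)")
conn.executemany("INSERT INTO diagnoses_icd VALUES (?, ?, ?, ?)",
                 [(10006, 142345, 1, "41401"),   # coronary atherosclerosis
                  (10011, 105331, 1, "01166")])  # non-cardiovascular (excluded)

QUERY = """
SELECT subject_id, hadm_id, icd9_code
FROM diagnoses_icd
WHERE seq_num = 1
  AND CAST(substr(icd9_code, 1, 3) AS INTEGER) BETWEEN 390 AND 459
"""
for row in conn.execute(QUERY):
    print(row)  # -> (10006, 142345, '41401')
```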
Mars Science Laboratory Frame Manager for Centralized Frame Tree Database and Target Pointing
NASA Technical Reports Server (NTRS)
Kim, Won S.; Leger, Chris; Peters, Stephen; Carsten, Joseph; Diaz-Calderon, Antonio
2013-01-01
The FM (Frame Manager) flight software module is responsible for maintaining the frame tree database containing coordinate transforms between frames. The frame tree is a proper tree structure of directed links, consisting of surface and rover subtrees. Actual frame transforms are updated by their owner. FM updates site and saved frames for the surface tree. As the rover drives to a new area, a new site frame with an incremented site index can be created. Several clients including ARM and RSM (Remote Sensing Mast) update their related rover frames that they own. Through the onboard centralized FM frame tree database, client modules can query transforms between any two frames. Important applications include target image pointing for RSM-mounted cameras and frame-referenced arm moves. The use of frame tree eliminates cumbersome, error-prone calculations of coordinate entries for commands and thus simplifies flight operations significantly.
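A frame tree of this kind can be captured compactly: each frame stores the transform to its parent, and a transform between any two frames is composed through their common root. A simplified sketch follows (not the FM flight code; the frame names and the use of 4x4 homogeneous matrices are assumptions for illustration).

```python
import numpy as np

# Simplified sketch of a frame tree (not the FM flight code): each frame
# stores a 4x4 homogeneous transform to its parent, and a transform between
# any two frames is composed through their common root. Names are invented.
class FrameTree:
    def __init__(self):
        self.parent = {}      # frame -> parent frame name
        self.to_parent = {}   # frame -> 4x4 transform (frame -> parent)

    def add(self, frame, parent, transform):
        self.parent[frame] = parent
        self.to_parent[frame] = np.asarray(transform, dtype=float)

    def _to_root(self, frame):
        t = np.eye(4)
        while frame in self.parent:       # walk up until the root frame
            t = self.to_parent[frame] @ t
            frame = self.parent[frame]
        return t                          # maps frame coords -> root coords

    def transform(self, src, dst):
        # Returns T such that p_dst = T @ p_src.
        return np.linalg.inv(self._to_root(dst)) @ self._to_root(src)

def translation(x, y, z):
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

tree = FrameTree()
tree.add("rover", "site3", translation(5.0, 2.0, 0.0))
tree.add("rsm_camera", "rover", translation(0.7, 0.0, -2.0))
print(tree.transform("rsm_camera", "site3"))  # camera pose in the site frame
```

Because every client owns only its local transform, a query between, say, a camera frame and a site frame stays correct as soon as any owner updates its link, which is the property that makes centralized target pointing straightforward.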
Medical Data Transmission Using Cell Phone Networks
NASA Astrophysics Data System (ADS)
Voos, J.; Centeno, C.; Riva, G.; Zerbini, C.; Gonzalez, E.
2011-12-01
A major challenge in telemedicine systems is meeting the technical requirements needed for a successful implementation in remote locations, where the available hardware and communication infrastructure is not adequate for good medical data transmission. Despite the wide availability of standards, methodologies, applications and systems integration facilities in telemedicine, in many cases the implementation requirements cannot be met in remote areas of our country. Therefore, this paper presents an alternative for the transmission of messages related to medical studies using the cellular network and the standard HL7 V3 [1] for data modeling. The messages are transmitted to a web server and stored in a centralized database which allows data sharing with other specialists.
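The transmission step described above amounts to posting an XML-encoded message over the cellular data link. A hedged sketch follows; the element names are only loosely HL7 V3-flavored and do not constitute a validated HL7 V3 message, and the server endpoint is a placeholder.

```python
import urllib.request

# Hedged sketch of the transmission step: an XML message is posted over the
# cellular data link to the central web server. The element names are only
# loosely HL7 V3-flavored (not a validated message); the URL is a placeholder.
payload = b"""<?xml version="1.0" encoding="UTF-8"?>
<observationEvent>
  <id extension="study-0017"/>
  <code displayName="systolic blood pressure"/>
  <value value="118" unit="mm[Hg]"/>
</observationEvent>"""

req = urllib.request.Request(
    "http://example.org/telemed/inbox",  # placeholder server endpoint
    data=payload, headers={"Content-Type": "application/xml"})
# urllib.request.urlopen(req, timeout=30)  # enable once a server exists
```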
The inclusion of an online journal in PubMed central - a difficult path.
Grech, Victor
2016-01-01
The indexing of a journal in a prominent database (such as PubMed) is an important imprimatur. Journals accepted for inclusion in PubMed Central (PMC) are automatically indexed in PubMed but must provide the entire contents of their publications as XML-tagged (Extensible Markup Language) data files compliant with PubMed's document type definition (DTD). This paper describes the various attempts that the journal Images in Paediatric Cardiology made in its efforts to convert the journal contents (including all of the extant backlog) to PMC-compliant XML for archiving and indexing in PubMed after the journal was accepted for inclusion by the database.
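Generating DTD-compliant XML programmatically is the crux of the conversion effort described above. The snippet below builds a skeletal, JATS-like article document; the real PMC DTD requires many more elements plus validation against the DTD, so the tags here are a simplified illustration rather than the journal's actual conversion pipeline.

```python
import xml.etree.ElementTree as ET

# Sketch of tagging one article for deposit: a skeletal, JATS-like document
# built programmatically. The real PMC DTD demands many more elements plus
# validation against the DTD; the tags here are a simplified illustration.
article = ET.Element("article", {"article-type": "research-article"})
front = ET.SubElement(article, "front")
meta = ET.SubElement(front, "article-meta")
ET.SubElement(meta, "article-title").text = "Example article title"
body = ET.SubElement(article, "body")
sec = ET.SubElement(body, "sec")
ET.SubElement(sec, "p").text = "Full text of the section goes here."

ET.ElementTree(article).write("article.xml", encoding="utf-8",
                              xml_declaration=True)
```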
Indian Renewable Energy and Energy Efficiency Policy Database (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bushe, S.
2013-09-01
This fact sheet provides an overview of the Indian Renewable Energy and Energy Efficiency Policy Database (IREEED) developed in collaboration by the United States Department of Energy and India's Ministry of New and Renewable Energy. IREEED provides succinct summaries of India's central and state government policies and incentives related to renewable energy and energy efficiency. The online, public database was developed under the U.S.- India Energy Dialogue and the Clean Energy Solution Center.
Performance Evaluation of NoSQL Databases: A Case Study
2015-02-01
...a centralized relational database. The customer decided to consider NoSQL technologies for two specific uses, namely: the primary data store for... The choice of a particular NoSQL database imposes a specific distributed software architecture and data model, and is a major determinant of the...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-13
... Practice, and Local Effort (BPPPLE) Form.'' Need and Use of Information Collection: The IHS goal is to.../Disease Prevention, Nursing, and Dental) have developed a centralized program database of best practices, promising practices and local efforts and resources. This database was previously referred to as OSCAR, but the...
Standardization and structural annotation of public toxicity databases: Improving SAR capabilities and linkage to 'omics data
Richard, Ann M. (1); Williams, ClarLynda (1); Burch, Jamie (2)
(1) Nat Health & Environ Res Lab, US EPA, RTP, NC 27711; (2) EPA/NC Central Univ Student COOP Trainee...
DOT National Transportation Integrated Search
1987-04-01
The general objective of the project was to determine the feasibility of and the general requirements for a centralized database on driver behavior and attitudes related to drunk driving and occupant restraints. Volume III is a compendium of question...
76 FR 76628 - Disclosure of Certain Credit Card Complaint Data
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-08
... collected in its central database on complaints during the preceding year.'' 12 U.S.C. 5496(c)(4). The CFPB... to mine the data for trends and patterns and to publish their conclusions would be academics and... vehicle safety complaint database that NHTSA maintains. The data is available at http://www...
Databases and Electronic Resources - Betty Petersen Memorial Library
...of NOAA-Wide and Open Access Databases on the NOAA Central Library website. Open Science Directory contains collections of Open Access Journals (e.g. Directory of Open Access Journals) and journals in the special programs (Hinari...
DOT National Transportation Integrated Search
1987-04-24
The general objective of the project was to determine the feasibility of and the general requirements for a centralized database on driver behavior and attitudes related to drunk driving and occupant restraints. Volume I assesses the extent of pertin...
Kelley, George A.; Kelley, Kristi S.
2012-01-01
Objective. The purpose of this study was to determine the database indexing of randomized controlled trials (RCTs) for a meta-analysis addressing the effects of exercise on pain and physical function in adults with arthritis and other rheumatic diseases (AORD). Methods. The number, percentage, and 95% confidence intervals (CIs) for included articles at initial and follow-up periods were calculated from PubMed, EMBASE, CENTRAL, CINAHL, SPORTDiscus, and DAO databases. The number needed to review (NNR) was also calculated along with the number of articles retrieved by expert review. Cross-referencing from reviews and included articles also occurred. Results. Thirty-four of 36 articles (94.4%, 95% CI, 81.3–99.3) were located by database searching. PubMed and CENTRAL yielded 32 of 36 articles (88.9%, 73.9–96.9). Two articles not identified in any of the other databases were found in either CINAHL or SPORTDiscus. Two other articles were located by scanning the reference lists of review articles. The NNR ranged from 2 (CINAHL) to 118 (SPORTDiscus). More articles were identified in EMBASE at follow-up (36%, 12.1–42.2 versus 86.1%, 70.5–95.3). Conclusions. Searching multiple databases and cross-referencing from reviews was important for identifying RCTs addressing the effects of exercise on pain and physical function in adults with AORD. PMID:22924128
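As an illustration of the number-needed-to-review metric reported above: NNR is conventionally the number of records screened per relevant article found. A worked sketch with hypothetical counts, not the study's actual retrieval figures:

def nnr(screened, included):
    """Number needed to review: records screened per relevant article."""
    return screened / included

print(nnr(68, 34))   # a high-yield database -> 2.0
print(nnr(236, 2))   # a low-yield database -> 118.0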
Corral, Juan E; Delgado Hurtado, Juan J; Domínguez, Ricardo L; Valdez de Cuéllar, Marisabel; Balmore Cruz, Carlos; Morgan, Douglas R
2015-03-01
The aims of this study were to delineate the epidemiology of gastric adenocarcinoma in Central America and contrast it with Hispanic-Latino populations in the USA. Published literature and Central America Ministry of Health databases were used as primary data sources, including national, population-based, and hospital-based registries. US data was obtained from the National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) registry. Incident gastric adenocarcinoma cases were analyzed for available data between 1985 and 2011, including demographic variables and pathology information. In Central America, 19,741 incident gastric adenocarcinomas were identified. Two thirds of the cases were male, 20.5% were under age 55, and 58.5% were from rural areas. In the SEER database (n = 7871), 57.8% were male and 28.9% were under age 55. Among the US Hispanics born in Central America with gastric cancer (n = 1210), 50.3% of cases were male and 38.1% were under age 55. Non-cardia gastric cancer was more common in Central America (83.3%), among US Hispanics (80.2%), and Hispanics born in Central America (86.3%). Cancers of the antrum were more common in Central America (73.6%), whereas cancers of the corpus were slightly more common among US Hispanics (54.0%). Adenocarcinoma of the diffuse subtype was relatively common, both in Central America (35.7%) and US Hispanics (69.5%), although the Lauren classification was reported in only 50% of cases. A significant burden of gastric adenocarcinoma is observed in Central America based upon limited available data. Differences are noted between Central America and US Hispanics. Strengthening population-based registries is needed for improved cancer control in Central America, which may have implications for the growing US Hispanic population.
Hripcsak, George
1997-01-01
An information system architecture defines the components of a system and the interfaces among the components. A good architecture is essential for creating an Integrated Advanced Information Management System (IAIMS) that works as an integrated whole yet is flexible enough to accommodate many users and roles, multiple applications, changing vendors, evolving user needs, and advancing technology. Modularity and layering promote flexibility by reducing the complexity of a system and by restricting the ways in which components may interact. Enterprise-wide mediation promotes integration by providing message routing, support for standards, dictionary-based code translation, a centralized conceptual data schema, business rule implementation, and consistent access to databases. Several IAIMS sites have adopted a client-server architecture, and some have adopted a three-tiered approach, separating user interface functions, application logic, and repositories. PMID:9067884
Mandellos, George J; Koutelakis, George V; Panagiotakopoulos, Theodor C; Koukias, Andreas M; Koukias, Mixalis N; Lymberopoulos, Dimitrios K
2008-01-01
Early and specialized pre-hospital patient treatment improves outcomes in terms of mortality and morbidity in emergency cases. This paper focuses on the design and implementation of a telemedicine system that supports diverse types of endpoints, including moving transports (MT) (ambulances, ships, planes, etc.), handheld devices and fixed units, over diverse communication networks. The target of this telemedicine system is pre-hospital patient treatment. While vital sign transmission takes priority over the other services provided by the telemedicine system (videoconference, remote management, voice calls, etc.), a predefined algorithm controls the provision and quality of those services. A distributed database system controlled by a central server manages patient attributes, exams and incidents handled by different Telemedicine Coordination Centers (TCC).
NASA's MERBoard: An Interactive Collaborative Workspace Platform. Chapter 4
NASA Technical Reports Server (NTRS)
Trimble, Jay; Wales, Roxana; Gossweiler, Rich
2003-01-01
This chapter describes the ongoing process by which a multidisciplinary group at NASA's Ames Research Center is designing and implementing a large interactive work surface called the MERBoard Collaborative Workspace. A MERBoard system involves several distributed, large, touch-enabled, plasma display systems with custom MERBoard software. A centralized server and database back the system. We are continually tuning MERBoard to support over two hundred scientists and engineers during the surface operations of the Mars Exploration Rover Missions. These scientists and engineers come from various disciplines and are working both in small and large groups over a span of space and time. We describe the multidisciplinary, human-centered process by which this MERBoard system is being designed, the usage patterns and social interactions that we have observed, and issues we are currently facing.
NASA Astrophysics Data System (ADS)
Wang, Ximing; Documet, Jorge; Garrison, Kathleen A.; Winstein, Carolee J.; Liu, Brent
2012-02-01
Stroke is a major cause of adult disability. The Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (I-CARE) clinical trial aims to evaluate a therapy for arm rehabilitation after stroke. A primary outcome measure is correlative analysis between stroke lesion characteristics and standard measures of rehabilitation progress, from data collected at seven research facilities across the country. Sharing and communication of brain imaging and behavioral data is thus a challenge for collaboration. A solution is proposed as a web-based system with tools supporting imaging and informatics related data. In this system, users may upload anonymized brain images through a secure internet connection and the system will sort the imaging data for storage in a centralized database. Users may utilize an annotation tool to mark up images. In addition to imaging informatics, electronic data forms, for example, clinical data forms, are also integrated. Clinical information is processed and stored in the database to enable future data mining related development. Tele-consultation is facilitated through the development of a thin-client image viewing application. For convenience, the system supports access through desktop PCs, laptops, and iPads. Thus, clinicians may enter data directly into the system via iPad while working with participants in the study. Overall, this comprehensive imaging informatics system enables users to collect, organize and analyze stroke cases efficiently.
Analysis of the Appropriateness of the Use of Peltier Cells as Energy Sources
Hájovský, Radovan; Pieš, Martin; Richtár, Lukáš
2016-01-01
The article describes the possibilities of using Peltier cells as an energy source to power telemetry units, which are used in large-scale monitoring systems as central units that collect data from sensors, process them, and send them to the database server. The article describes the various experiments that were carried out, their progress, and their results. Based on the experiments evaluated, the paper also discusses the possibilities of using various types of Peltier cells depending on the temperature difference between the cold and hot sides. PMID:27231913
DISCOS- Current Status and Future Developments
NASA Astrophysics Data System (ADS)
Flohrer, T.; Lemmens, S.; Bastida Virgili, B.; Krag, H.; Klinkrad, H.; Parrilla, E.; Sanchez, N.; Oliveira, J.; Pina, F.
2013-08-01
We present ESA's Database and Information System Characterizing Objects in Space (DISCOS). DISCOS not only plays an essential role in the collision avoidance and re-entry prediction services provided by ESA's Space Debris Office; it also provides input to numerous and very differently scoped engineering activities, within ESA and throughout industry. We introduce the central functionalities of DISCOS, present the available reporting capabilities, and describe selected data modelling features. Finally, we revisit the developments of recent years and preview the ongoing replacement of the DISCOS web front-end.
Introducing the Global Fire WEather Database (GFWED)
NASA Astrophysics Data System (ADS)
Field, R. D.
2015-12-01
The Canadian Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern-Era Retrospective analysis for Research and Applications (MERRA), along with two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code calculations from the gridded datasets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Agreement between the gridded and station-based calculations was weakest at low latitudes for strictly MERRA-based calculations. Strong biases could be seen in either direction: MERRA DC over the Mato Grosso in Brazil reached unrealistically high values exceeding DC=1500 during the dry season, but was too low over Southeast Asia during its dry season. These biases are consistent with those previously identified in MERRA's precipitation and reinforce the need to consider alternative sources of precipitation data. GFWED is being used by researchers around the world for analyzing historical relationships between fire weather and fire activity at large scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models. These applications will be discussed. More information on GFWED can be found at http://data.giss.nasa.gov/impacts/gfwed/
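A minimal sketch of the gridded-versus-station comparison described above: mean bias and RMSE of co-located daily Drought Code (DC) series. The arrays are synthetic stand-ins for MERRA-based and station-based values, not GFWED data.

import numpy as np

rng = np.random.default_rng(0)
station_dc = rng.uniform(100, 600, size=365)        # hypothetical station DC series
gridded_dc = station_dc + rng.normal(50, 40, 365)   # gridded series with an imposed bias

bias = np.mean(gridded_dc - station_dc)
rmse = np.sqrt(np.mean((gridded_dc - station_dc) ** 2))
print(f"mean bias: {bias:+.1f} DC units, RMSE: {rmse:.1f}")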
Use of the Decision Support System for VA cost-effectiveness research.
Barnett, P G; Rodgers, J H
1999-04-01
The Department of Veterans Affairs is adopting the Decision Support System (DSS), computer software and databases that include a cost-accounting system for determining the cost of health care products and patient encounters. A system for providing cost data for cost-effectiveness analysis should provide valid, detailed, and comprehensive data that can be aggregated. The design of DSS is described and compared with those criteria. Utilization data from DSS were compared with other VA utilization data. Aggregate DSS cost data from 35 medical centers were compared with relative resource weights developed for the Medicare program. Data on hospital stays at 3 facilities showed that 3.7% of the stays in DSS were not in the VA discharge database, whereas 7.6% of the stays in the discharge data were not in DSS. DSS reported between 68.8% and 97.1% of the outpatient encounters reported by six facilities in the ambulatory care database. Relative weights for each Diagnosis Related Group based on DSS data from 35 VA facilities correlated with Medicare weights (correlation coefficient of .853). DSS will be useful for research if certain problems are overcome. It is difficult to distinguish long-term from acute hospital care. VA does not have a complete database of all inpatient procedures, so DSS has not assigned them a specific cost. The authority to access encounter-level DSS data needs to be centralized. Researchers can provide the feedback needed to improve DSS cost estimates. A comprehensive encounter-level extract would facilitate use of DSS for research.
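A minimal sketch of the weight-comparison step described above: the Pearson correlation between DSS-derived and Medicare relative weights for matched Diagnosis Related Groups. The two arrays are illustrative values, not actual DRG weights.

import numpy as np

dss_weights      = np.array([0.62, 0.88, 1.00, 1.35, 2.10, 4.70])
medicare_weights = np.array([0.58, 0.95, 1.00, 1.29, 2.32, 4.15])

r = np.corrcoef(dss_weights, medicare_weights)[0, 1]
print(f"correlation coefficient: {r:.3f}")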
Educational and Skills-Based Interventions to Prevent Relationship Violence in Young People
ERIC Educational Resources Information Center
Fellmeth, Gracia; Heffernan, Catherine; Nurse, Joanna; Habibula, Shakiba; Sethi, Dinesh
2015-01-01
Objectives: To assess the efficacy of educational and skills-based interventions to prevent relationship and dating violence in adolescents and young adults. Methods: We searched Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, CINAHL, PsycINFO, and other databases for randomized, cluster-randomized, and quasi-randomized…
VIOLIN: vaccine investigation and online information network.
Xiang, Zuoshuang; Todd, Thomas; Ku, Kim P; Kovacic, Bethany L; Larson, Charles B; Chen, Fang; Hodges, Andrew P; Tian, Yuying; Olenzek, Elizabeth A; Zhao, Boyang; Colby, Lesley A; Rush, Howard G; Gilsdorf, Janet R; Jourdian, George W; He, Yongqun
2008-01-01
Vaccines are among the most efficacious and cost-effective tools for reducing morbidity and mortality caused by infectious diseases. The vaccine investigation and online information network (VIOLIN) is a web-based central resource, allowing easy curation, comparison and analysis of vaccine-related research data across various human pathogens (e.g. Haemophilus influenzae, human immunodeficiency virus (HIV) and Plasmodium falciparum) of medical importance and across humans, other natural hosts and laboratory animals. Vaccine-related peer-reviewed literature data have been downloaded into the database from PubMed and are searchable through various literature search programs. Vaccine data are also annotated, edited and submitted to the database through a web-based interactive system that integrates efficient computational literature mining and accurate manual curation. Curated information includes general microbial pathogenesis and host protective immunity, vaccine preparation and characteristics, stimulated host responses after vaccination and protection efficacy after challenge. Vaccine-related pathogen and host genes are also annotated and available for searching through customized BLAST programs. All VIOLIN data are available for download in an eXtensible Markup Language (XML)-based data exchange format. VIOLIN is expected to become a centralized source of vaccine information and to provide investigators in basic and clinical sciences with curated data and bioinformatics tools for vaccine research and development. VIOLIN is publicly available at http://www.violinet.org.
Leem, Jungtae; Lee, Seunghoon; Park, Yeoncheol; Seo, Byung-Kwan; Cho, Yeeun; Kang, Jung Won; Lee, Yoon Jae; Ha, In-Hyuk; Lee, Hyun-Jong; Kim, Eun-Jung; Lee, Sanghoon; Nam, Dongwoo
2017-06-23
Many patients experience acute lower back pain that becomes chronic. The proportion of patients using complementary and alternative medicine to treat lower back pain is increasing. Even though several moxibustion clinical trials for lower back pain have been conducted, the effectiveness and safety of moxibustion intervention is controversial. The purpose of this study protocol for a systematic review is to evaluate the effectiveness and safety of moxibustion treatment for non-specific lower back pain patients. We will conduct an electronic search of several databases from their inception to May 2017, including Embase, PubMed, Cochrane Central Register of Controlled Trials, Allied and Complementary Medicine Database, Wanfang Database, Chongqing VIP Chinese Science and Technology Periodical Database, China National Knowledge Infrastructure Database, Korean Medical Database, Korean Studies Information Service System, National Discovery for Science Leaders, Oriental Medicine Advanced Searching Integrated System, the Korea Institute of Science and Technology, and KoreaMed. Randomised controlled trials investigating any type of moxibustion treatment will be included. The primary outcome will be pain intensity and functional status/disability due to lower back pain. The secondary outcome will be a global measurement of recovery or improvement, work-related outcomes, radiographic improvement of structure, quality of life, and adverse events (presence or absence). Risk ratios or mean differences with a 95% confidence interval will be used to show the effect of moxibustion therapy when it is possible to conduct a meta-analysis. This review will be published in a peer-reviewed journal and will be presented at an international academic conference for dissemination. Our results will provide current evidence of the effectiveness and safety of moxibustion treatment in non-specific lower back pain patients, and thus will be beneficial to patients, practitioners, and policymakers. CRD42016047468 in PROSPERO 2016. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
D'Haese, Pierre-François; Pallavaram, Srivatsan; Li, Rui; Remple, Michael S; Kao, Chris; Neimat, Joseph S; Konrad, Peter E; Dawant, Benoit M
2012-04-01
A number of methods have been developed to assist surgeons at various stages of deep brain stimulation (DBS) therapy. These include construction of anatomical atlases, functional databases, and electrophysiological atlases and maps. But a complete system that can be integrated into the clinical workflow has not been developed. In this paper we present a system designed to assist physicians in pre-operative target planning, intra-operative target refinement and implantation, and post-operative DBS lead programming. The purpose of this system is to centralize the data acquired at the various stages of the procedure, reduce the amount of time needed at each stage of the therapy, and maximize the efficiency of the entire process. The system consists of a central repository (CranialVault), a suite of software modules called CRAnialVault Explorer (CRAVE) that permit data entry and data visualization at each stage of the therapy, and a series of algorithms that permit the automatic processing of the data. The central repository contains image data for more than 400 patients with the related pre-operative plans and positions of the final implants, and about 10,550 electrophysiological data points (micro-electrode recordings or responses to stimulations) recorded from 222 of these patients. The system has reached the stage of a clinical prototype that is being evaluated clinically at our institution. A preliminary quantitative validation of the planning component of the system, performed on 80 patients who underwent the procedure between January 2009 and December 2009, shows that the system provides both timely and valuable information. Copyright © 2010 Elsevier B.V. All rights reserved.
Electronic data collection for clinical trials using tablet and handheld PCs
NASA Astrophysics Data System (ADS)
Alaoui, Adil; Vo, Minh; Patel, Nikunj; McCall, Keith; Lindisch, David; Watson, Vance; Cleary, Kevin
2005-04-01
This paper describes a system that uses electronic forms to collect patient and procedure data for clinical trials. During clinical trials, patients are typically required to provide background information such as demographics and medical history, as well as review and complete any consent forms. Physicians or their assistants then usually have additional forms for recording technical data from the procedure and for gathering follow-up information from patients after completion of the procedure. This approach can lead to substantial amounts of paperwork to collect and manage over the course of a clinical trial with a large patient base. By using e-forms instead, data can be transmitted to a single, centralized database, reducing the problem of managing paper forms. Additionally, the system can provide a means for relaying information from the database to the physician on his/her portable wireless device, such as to alert the physician when a patient has completed the pre-procedure forms and is ready to begin the procedure. This feature could improve the workflow in busy clinical practices. In the future, the system could be expanded so physicians could use their portable wireless device to pull up entire hospital records and view other pre-procedure data and patient images.
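A minimal sketch of the pattern this entry describes: an electronic form submission written to a single central store. sqlite3 stands in for the trial's actual centralized database, and the table and field names are hypothetical.

import json
import sqlite3

conn = sqlite3.connect("trial_central.db")
conn.execute("""CREATE TABLE IF NOT EXISTS form_submissions (
    patient_id TEXT, form_name TEXT, payload TEXT,
    submitted_at TEXT DEFAULT CURRENT_TIMESTAMP)""")

def submit_form(patient_id, form_name, fields):
    """Store one completed e-form; a status poller could then notify the physician."""
    conn.execute(
        "INSERT INTO form_submissions (patient_id, form_name, payload) VALUES (?, ?, ?)",
        (patient_id, form_name, json.dumps(fields)),
    )
    conn.commit()

submit_form("P-0042", "pre_procedure_history", {"allergies": "none", "consented": True})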
NASA Astrophysics Data System (ADS)
Sheldon, W.
2013-12-01
Managing data for a large, multidisciplinary research program such as a Long Term Ecological Research (LTER) site is a significant challenge, but also presents unique opportunities for data stewardship. LTER research is conducted within multiple organizational frameworks (i.e. a specific LTER site as well as the broader LTER network), and addresses both specific goals defined in an NSF proposal and broader goals of the network; therefore, every LTER data set can be linked to rich contextual information to guide interpretation and comparison. The challenge is how to link the data to this wealth of contextual metadata. At the Georgia Coastal Ecosystems LTER we developed an integrated information management system (GCE-IMS) to manage, archive and distribute data, metadata and other research products as well as manage project logistics, administration and governance (figure 1). This system allows us to store all project information in one place and provide dynamic links through web applications and services, ensuring content is always up to date on the web as well as in data set metadata. The database model supports tracking changes over time in personnel roles, projects and governance decisions, allowing these databases to serve as canonical sources of project history. Storing project information in a central database has also allowed us to standardize both the formatting and content of critical project information, including personnel names, roles, keywords, place names, attribute names, units, and instrumentation, providing consistency and improving data and metadata comparability. Lookup services for these standard terms also simplify data entry in web and database interfaces. We have also coupled the GCE-IMS to our MATLAB- and Python-based data processing tools (i.e. through database connections) to automate metadata generation and packaging of tabular and GIS data products for distribution. Data processing history is automatically tracked throughout the data lifecycle, from initial import through quality control, revision and integration by our data processing system (GCE Data Toolbox for MATLAB), and included in metadata for versioned data products. This high level of automation and system integration has proven very effective in managing the chaos and scalability of our information management program.
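A minimal sketch of the metadata-automation idea described above: deriving a structured metadata stub from a tabular data file. The CSV name and the shape of the attribute dictionary are illustrative assumptions, not the GCE-IMS schema.

import csv

def metadata_stub(path):
    """Summarize each column of a CSV as name and value range."""
    with open(path, newline="") as handle:
        rows = list(csv.DictReader(handle))
    stub = {"file": path, "records": len(rows), "attributes": []}
    for name in rows[0]:
        values = [row[name] for row in rows if row[name] != ""]
        # lexical min/max; a real pipeline would coerce numeric types
        stub["attributes"].append({"name": name, "min": min(values), "max": max(values)})
    return stub

print(metadata_stub("gce_sonde_data.csv"))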
Cardiac registers: the adult cardiac surgery register.
Bridgewater, Ben
2010-09-01
AIMS OF THE SCTS ADULT CARDIAC SURGERY DATABASE: To measure the quality of care of adult cardiac surgery in GB and Ireland and provide information for quality improvement and research. Feedback of structured data to hospitals, publication of named hospital and surgeon mortality data, publication of benchmarked activity and risk-adjusted clinical outcomes through intermittent comprehensive database reports, annual screening of all hospital and individual surgeon risk-adjusted mortality rates by the professional society. All NHS hospitals in England, Scotland and Wales, with input from some private providers and hospitals in Ireland. 1994-ongoing. Consecutive patients, unconsented. Current number of records: 400,000. Adult cardiac surgery operations excluding cardiac transplantation and ventricular assist devices. 129 fields covering demographic factors, pre-operative risk factors, operative details and post-operative in-hospital outcomes. Entry onto local software systems by direct keyboard entry or subsequent transcription from paper records, with subsequent electronic upload to the central cardiac audit database. Non-financial incentives at hospital level. Local validation processes exist in the hospitals. There is currently no external data validation process. All-cause mortality is obtained through linkage with the Office for National Statistics. No other linkages exist at present. Available for research and audit by application to the SCTS database committee at http://www.scts.org.
WormQTLHD—a web database for linking human disease to natural variation data in C. elegans
van der Velde, K. Joeri; de Haan, Mark; Zych, Konrad; Arends, Danny; Snoek, L. Basten; Kammenga, Jan E.; Jansen, Ritsert C.; Swertz, Morris A.; Li, Yang
2014-01-01
Interactions between proteins are highly conserved across species. As a result, the molecular basis of multiple diseases affecting humans can be studied in model organisms that offer many alternative experimental opportunities. One such organism—Caenorhabditis elegans—has been used to produce much molecular quantitative genetics and systems biology data over the past decade. We present WormQTLHD (Human Disease), a database that quantitatively and systematically links expression Quantitative Trait Loci (eQTL) findings in C. elegans to gene–disease associations in man. WormQTLHD, available online at http://www.wormqtl-hd.org, is a user-friendly set of tools to reveal functionally coherent, evolutionary conserved gene networks. These can be used to predict novel gene-to-gene associations and the functions of genes underlying the disease of interest. We created a new database that links C. elegans eQTL data sets to human diseases (34 337 gene–disease associations from OMIM, DGA, GWAS Central and NHGRI GWAS Catalogue) based on overlapping sets of orthologous genes associated to phenotypes in these two species. We utilized QTL results, high-throughput molecular phenotypes, classical phenotypes and genotype data covering different developmental stages and environments from WormQTL database. All software is available as open source, built on MOLGENIS and xQTL workbench. PMID:24217915
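A minimal sketch of the linkage idea described above: scoring a human disease against a C. elegans eQTL hit list by the overlap of their orthologous gene sets. The gene identifiers and ortholog map are illustrative placeholders, not WormQTLHD content.

# toy ortholog map from worm genes to human genes
worm_to_human = {"ced-3": "CASP3", "daf-2": "INSR", "lin-45": "RAF1"}

eqtl_hits = {"ced-3", "daf-2", "unc-22"}       # worm genes under an eQTL
disease_genes = {"CASP3", "INSR", "TP53"}      # genes linked to a human disease

mapped = {worm_to_human[g] for g in eqtl_hits if g in worm_to_human}
overlap = mapped & disease_genes
print(f"{len(overlap)} shared ortholog(s): {sorted(overlap)}")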
GenoQuery: a new querying module for functional annotation in a genomic warehouse
Lemoine, Frédéric; Labedan, Bernard; Froidevaux, Christine
2008-01-01
Motivation: We have to cope with both a deluge of new genome sequences and a huge amount of data produced by high-throughput approaches used to exploit these genomic features. Crossing and comparing such heterogeneous and disparate data will help improve the functional annotation of genomes. This requires designing elaborate integration systems such as warehouses for storing and querying these data. Results: We have designed a relational genomic warehouse with an original multi-layer architecture made of a databases layer and an entities layer. We describe a new querying module, GenoQuery, which is based on this architecture. We use the entities layer to define mixed queries. These mixed queries allow searching for instances of biological entities and their properties in the different databases, without specifying in which database they should be found. Accordingly, we further introduce the central notion of alternative queries. Such queries have the same meaning as the original mixed queries, while exploiting complementarities yielded by the various integrated databases of the warehouse. We explain how GenoQuery computes all the alternative queries of a given mixed query. We illustrate how useful this querying module is by means of a thorough example. Availability: http://www.lri.fr/~lemoine/GenoQuery/ Contact: chris@lri.fr, lemoine@lri.fr PMID:18586731
Data Mining on Distributed Medical Databases: Recent Trends and Future Directions
NASA Astrophysics Data System (ADS)
Atilgan, Yasemin; Dogan, Firat
As computerization in healthcare services increase, the amount of available digital data is growing at an unprecedented rate and as a result healthcare organizations are much more able to store data than to extract knowledge from it. Today the major challenge is to transform these data into useful information and knowledge. It is important for healthcare organizations to use stored data to improve quality while reducing cost. This paper first investigates the data mining applications on centralized medical databases, and how they are used for diagnostic and population health, then introduces distributed databases. The integration needs and issues of distributed medical databases are described. Finally the paper focuses on data mining studies on distributed medical databases.
The Importance of Biological Databases in Biological Discovery.
Baxevanis, Andreas D; Bateman, Alex
2015-06-19
Biological databases play a central role in bioinformatics. They offer scientists the opportunity to access a wide variety of biologically relevant data, including the genomic sequences of an increasingly broad range of organisms. This unit provides a brief overview of major sequence databases and portals, such as GenBank, the UCSC Genome Browser, and Ensembl. Model organism databases, including WormBase, The Arabidopsis Information Resource (TAIR), and those made available through the Mouse Genome Informatics (MGI) resource, are also covered. Non-sequence-centric databases, such as Online Mendelian Inheritance in Man (OMIM), the Protein Data Bank (PDB), MetaCyc, and the Kyoto Encyclopedia of Genes and Genomes (KEGG), are also discussed. Copyright © 2015 John Wiley & Sons, Inc.
Workflow and web application for annotating NCBI BioProject transcriptome data
Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A.; Barrero, Luz S.; Landsman, David
2017-01-01
The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as the Transcriptome Shotgun Assembly Sequence Database (TSA) and the Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. Both are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. Database URL: http://www.ncbi.nlm.nih.gov/projects/physalis/ PMID:28605765
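A minimal sketch of the alignment step such an annotation workflow implies: running command-line BLAST and parsing its tabular output. It assumes NCBI BLAST+ is installed and that "transcripts.fasta" and a local "nr" database exist; both names are assumptions, not the paper's actual setup.

import subprocess

result = subprocess.run(
    ["blastx", "-query", "transcripts.fasta", "-db", "nr",
     "-outfmt", "6 qseqid sseqid pident evalue", "-max_target_seqs", "5"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    qseqid, sseqid, pident, evalue = line.split("\t")
    print(f"{qseqid} -> {sseqid} ({pident}% identity, e={evalue})")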
Assistive technology for ultrasound-guided central venous catheter placement.
Ikhsan, Mohammad; Tan, Kok Kiong; Putra, Andi Sudjana
2018-01-01
This study evaluated the existing technology used to improve the safety and ease of ultrasound-guided central venous catheterization. Electronic database searches were conducted in Scopus, IEEE, Google Patents, and relevant conference databases (SPIE, MICCAI, and IEEE conferences) for related articles on assistive technology for ultrasound-guided central venous catheterization. A total of 89 articles were examined and pointed to several fields that are currently the focus of improvements to ultrasound-guided procedures. These include improving needle visualization, needle guides and localization technology, image processing algorithms to enhance and segment important features within the ultrasound image, robotic assistance using probe-mounted manipulators, and improving procedure ergonomics through in situ projections of important information. Probe-mounted robotic manipulators provide a promising avenue for assistive technology developed for freehand ultrasound-guided percutaneous procedures. However, there is currently a lack of clinical trials to validate the effectiveness of these devices.
Distributed computing for macromolecular crystallography
Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Winn, Martyn; Ballard, Charles
2018-01-01
Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is significant growth in the data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community. PMID:29533240
Optimization of analytical laboratory work using computer networking and databasing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Upp, D.L.; Metcalf, R.A.
1996-06-01
The Health Physics Analysis Laboratory (HPAL) performs around 600,000 analyses for radioactive nuclides each year at Los Alamos National Laboratory (LANL). Analysis matrices vary from nasal swipes, air filters, work area swipes, and liquids to the bottoms of shoes and cat litter. HPAL uses 8 liquid scintillation counters, 8 gas proportional counters, and 9 high purity germanium detectors in 5 laboratories to perform these analyses. HPAL has developed a computer network between the labs and software to produce analysis results. The software and hardware package includes barcode sample tracking, log-in, chain of custody, analysis calculations, analysis result printing, and utility programs. All data are written to a database, mirrored on a central server, and eventually written to CD-ROM to provide online historical results. This system has greatly reduced the work required to provide analysis results as well as improving the quality of the work performed.
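A minimal sketch of the pattern this entry describes: analysis results keyed by sample barcode, written locally and mirrored to a central copy. sqlite3 files stand in for the lab and server databases; the table and columns are assumed, not HPAL's actual schema.

import sqlite3

local = sqlite3.connect("hpal_lab1.db")
local.execute("""CREATE TABLE IF NOT EXISTS results (
    barcode TEXT PRIMARY KEY, nuclide TEXT, activity_bq REAL)""")
local.execute("INSERT OR REPLACE INTO results VALUES (?, ?, ?)",
              ("LA-000123", "Am-241", 0.042))
local.commit()

central = sqlite3.connect("hpal_central.db")
local.backup(central)   # mirror the lab database to the central server copy
central.close()
local.close()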
2014-06-01
central location. Each of the SQLite databases is converted and stored in one MySQL database, and the pcap files are parsed to extract call information...from the specific communications applications used during the experiment. This extracted data is then stored in the same MySQL database. With all...rhythm of the event. Figure 3 demonstrates the application usage over the course of the experiment for the EXDIR. As seen, the EXDIR spent the majority
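A minimal sketch of the consolidation step described above: copying rows from per-host SQLite files into one MySQL database. It assumes the mysql-connector-python package; the connection details, file names, and call-table columns are hypothetical.

import sqlite3
import mysql.connector  # assumed dependency: mysql-connector-python

central = mysql.connector.connect(host="localhost", user="exdir",
                                  password="secret", database="experiment")
cursor = central.cursor()

for path in ["host_a.db", "host_b.db"]:         # hypothetical per-host files
    src = sqlite3.connect(path)
    for caller, callee, started in src.execute(
            "SELECT caller, callee, started FROM calls"):
        cursor.execute(
            "INSERT INTO calls (caller, callee, started) VALUES (%s, %s, %s)",
            (caller, callee, started))
    src.close()

central.commit()
central.close()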
EPA Facility Registry Service (FRS): TRI
This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Toxic Release Inventory (TRI) System. TRI is a publicly available EPA database reported annually by certain covered industry groups, as well as federal facilities. It contains information about more than 650 toxic chemicals that are being used, manufactured, treated, transported, or released into the environment, and includes information about waste management and pollution prevention activities. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using rigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to TRI facilities once the TRI data has been integrated into the FRS database. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.
Database Resources of the BIG Data Center in 2018.
2018-01-04
The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-30
... Form (OMB Form No. 0917-0034). Need and Use of Information Collection: The IHS goal is to raise the... Prevention (HP/DP), Nursing, and Dental) have developed a centralized program database of Best/Promising Practices and Local Efforts (BPPPLE) and resources. The purpose of this collection is to develop a database...
Kelly Elder; Don Cline; Angus Goodbody; Paul Houser; Glen E. Liston; Larry Mahrt; Nick Rutter
2009-01-01
A short-term meteorological database has been developed for the Cold Land Processes Experiment (CLPX). This database includes meteorological observations from stations designed and deployed exclusively for CLPX as well as observations available from other sources located in the small regional study area (SRSA) in north-central Colorado. The measured weather parameters...
77 FR 38292 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-27
... purchase from the HCUP Central Distributor for data years beginning in 1988. (2) The Kids' Inpatient Database (KID) is the only all-payer inpatient care database for children in the United States. The KID was... child health issues. The KID contains a sample of over 3 million discharges for children age 20 and...
Materials And Processes Technical Information System (MAPTIS) LDEF materials database
NASA Technical Reports Server (NTRS)
Davis, John M.; Strickland, John W.
1992-01-01
The Materials and Processes Technical Information System (MAPTIS) is a collection of materials data which was computerized and is available to engineers in the aerospace community involved in the design and development of spacecraft and related hardware. Consisting of various database segments, MAPTIS provides the user with information such as material properties, test data derived from tests specifically conducted for qualification of materials for use in space, verification and control, project management, material information, and various administrative requirements. A recent addition to the project management segment consists of materials data derived from the LDEF flight. This tremendous quantity of data consists of both pre-flight and post-flight data in such diverse areas as optical/thermal, mechanical and electrical properties, atomic concentration surface analysis data, as well as general data such as sample placement on the satellite, A-O flux, equivalent sun hours, etc. Each data point is referenced to the primary investigator(s) and the published paper from which the data was taken. The MAPTIS system is envisioned to become the central location for all LDEF materials data. This paper consists of multiple parts, comprising a general overview of the MAPTIS System and the types of data contained within, and the specific LDEF data element and the data contained in that segment.
Bazm, Soheila; Kalantar, Seyyed Mehdi; Mirzaei, Masoud
2016-06-01
To meet future challenges in the field of reproductive medicine in Iran, a better understanding of published studies is needed. Bibliometric methods and social network analysis have been used to measure the scope and illustrate the scientific output of researchers in this field. This study provides insight into the structure of the network of Iranian papers published in the field of reproductive medicine during 2010-2014. In this cross-sectional study, all relevant scientific publications were retrieved from the Scopus database and analyzed according to document type, journal of publication, hot topics, authors and institutions. The results were mapped and clustered with VOSviewer software. In total, 3141 papers from Iranian researchers were identified in the Scopus database between 2010-2014. The number of publications per year increased from 461 in 2010 to 749 in 2014. Tehran University of Medical Sciences and "Soleimani M" occupied the top positions based on the productivity indicator. Likewise, "Soleimani M" obtained the first rank among authors according to degree centrality, betweenness centrality and collaboration criteria. In addition, among institutions, the Iranian Academic Center for Education, Culture and Research (ACECR) was the leader based on degree centrality, betweenness centrality and collaboration indicators. Publications of Iranian researchers in the field of reproductive medicine showed steady growth during 2010-2014. It seems that in addition to quantity, Iranian authors need to improve the quality of articles and collaboration. This will help them advance their efforts.
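A minimal sketch of the co-authorship measures used above, computed with the networkx library on a toy graph; the author names other than the one reported are placeholders.

import networkx as nx

G = nx.Graph()
G.add_edges_from([("Soleimani M", "Author B"), ("Soleimani M", "Author C"),
                  ("Author B", "Author C"), ("Soleimani M", "Author D")])

print(nx.degree_centrality(G))       # share of possible co-authors reached
print(nx.betweenness_centrality(G))  # brokerage between otherwise unlinked authors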
Anatomical entity mention recognition at literature scale
Pyysalo, Sampo; Ananiadou, Sophia
2014-01-01
Motivation: Anatomical entities ranging from subcellular structures to organ systems are central to biomedical science, and mentions of these entities are essential to understanding the scientific literature. Despite extensive efforts to automatically analyze various aspects of biomedical text, there have been only a few studies focusing on anatomical entities, and no dedicated methods for learning to automatically recognize anatomical entity mentions in free-form text have been introduced. Results: We present AnatomyTagger, a machine learning-based system for anatomical entity mention recognition. The system incorporates a broad array of approaches proposed to benefit tagging, including the use of Unified Medical Language System (UMLS)- and Open Biomedical Ontologies (OBO)-based lexical resources, word representations induced from unlabeled text, statistical truecasing and non-local features. We train and evaluate the system on a newly introduced corpus that substantially extends on previously available resources, and apply the resulting tagger to automatically annotate the entire open access scientific domain literature. The resulting analyses have been applied to extend services provided by the Europe PubMed Central literature database. Availability and implementation: All tools and resources introduced in this work are available from http://nactem.ac.uk/anatomytagger. Contact: sophia.ananiadou@manchester.ac.uk Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:24162468
Hayes, Laura; Horn, Marilee A.
2009-01-01
The U.S. Geological Survey, in cooperation with the New Hampshire Department of Environmental Services, estimated the amount of water demand, consumptive use, withdrawal, and return flow for each U.S. Census block in New Hampshire for the years 2005 (current) and 2020. Estimates of domestic, commercial, industrial, irrigation, and other nondomestic water use were derived through the use and innovative integration of several State and Federal databases, and by use of previously developed techniques. The New Hampshire Water Demand database was created as part of this study to store and integrate State of New Hampshire data central to the project. Within the New Hampshire Water Demand database, a lookup table was created to link the State databases and identify water users common to more than one database. The lookup table also allowed identification of withdrawal and return-flow locations of registered and unregistered commercial, industrial, agricultural, and other nondomestic users. Geographic information system data from the State were used in combination with U.S. Census Bureau spatial data to locate and quantify withdrawals and return flow for domestic users in each census block. Analyzing and processing the most recently available data resulted in census-block estimations of 2005 water use. Applying population projections developed by the State to the data sets enabled projection of water use for the year 2020. The results for each census block are stored in the New Hampshire Water Demand database and may be aggregated to larger political areas or watersheds to assess relative hydrologic stress on the basis of current and potential water availability.
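A minimal sketch of the lookup-table idea described above: linking water users that appear in more than one state database through a shared key, then aggregating withdrawals by census block. The frames and column names are hypothetical, not the New Hampshire Water Demand database schema.

import pandas as pd

registrations = pd.DataFrame({"user_id": [1, 2], "census_block": ["A", "A"]})
withdrawals   = pd.DataFrame({"user_id": [1, 2, 2], "mgd": [0.4, 1.1, 0.3]})

# the lookup table's role: join records for the same user across databases
linked = registrations.merge(withdrawals, on="user_id")
print(linked.groupby("census_block")["mgd"].sum())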
The Olive Branch and the Hammer: A Strategic Analysis of Hawala in the Financial War on Terrorism
2008-03-01
in Pakistan has been greatly affected. “Hawala business is 75% gone now,” according to Malik Bostan, president of the Forex Association of...voluntarily. Furthermore, the Central Bank has tried to solicit the information of the remitters and beneficiaries to keep in a central database
Liu, Yifeng; Liang, Yongjie; Wishart, David
2015-07-01
PolySearch2 (http://polysearch.ca) is an online text-mining system for identifying relationships between biomedical entities such as human diseases, genes, SNPs, proteins, drugs, metabolites, toxins, metabolic pathways, organs, tissues, subcellular organelles, positive health effects, negative health effects, drug actions, Gene Ontology terms, MeSH terms, ICD-10 medical codes, biological taxonomies and chemical taxonomies. PolySearch2 supports a generalized 'Given X, find all associated Ys' query, where X and Y can be selected from the aforementioned biomedical entities. An example query might be: 'Find all diseases associated with Bisphenol A'. To find its answers, PolySearch2 searches for associations against comprehensive free-text collections, including local versions of MEDLINE abstracts, PubMed Central full-text articles, Wikipedia full-text articles and US Patent application abstracts. PolySearch2 also searches 14 widely used, text-rich biological databases such as UniProt, DrugBank and the Human Metabolome Database to improve its accuracy and coverage. PolySearch2 maintains an extensive thesaurus of biological terms and exploits the latest search engine technology to rapidly retrieve relevant articles and database records. PolySearch2 also generates, ranks and annotates associative candidates and presents results with relevancy statistics and highlighted key sentences to facilitate user interpretation. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
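A minimal sketch of the 'Given X, find all associated Ys' pattern: ranking candidate terms by how often they co-occur with the query term across a set of abstracts. The corpus and term lists are toy placeholders, not PolySearch2's thesaurus or scoring scheme.

from collections import Counter

abstracts = [
    "bisphenol a exposure was associated with obesity in this cohort",
    "bisphenol a levels correlated with diabetes markers",
    "no link between caffeine and obesity was observed",
]
diseases = {"obesity", "diabetes"}
query = "bisphenol a"

scores = Counter()
for text in abstracts:
    if query in text:                       # abstract mentions X
        scores.update(d for d in diseases if d in text)  # count co-occurring Ys
print(scores.most_common())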
Chang, Suhua; Zhang, Jiajie; Liao, Xiaoyun; Zhu, Xinxing; Wang, Dahai; Zhu, Jiang; Feng, Tao; Zhu, Baoli; Gao, George F; Wang, Jian; Yang, Huanming; Yu, Jun; Wang, Jing
2007-01-01
Frequent outbreaks of highly pathogenic avian influenza and the increasing data available for comparative analysis require a central database specialized in influenza viruses (IVs). We have established the Influenza Virus Database (IVDB) to integrate information and create an analysis platform for genetic, genomic, and phylogenetic studies of the virus. IVDB hosts complete genome sequences of influenza A virus generated by Beijing Institute of Genomics (BIG) and curates all other published IV sequences after expert annotation. Our Q-Filter system classifies and ranks all nucleotide sequences into seven categories according to sequence content and integrity. IVDB provides a series of tools and viewers for comparative analysis of the viral genomes, genes, genetic polymorphisms and phylogenetic relationships. A search system has been developed for users to retrieve a combination of different data types by setting search options. To facilitate analysis of global viral transmission and evolution, the IV Sequence Distribution Tool (IVDT) has been developed to display the worldwide geographic distribution of chosen viral genotypes and to couple genomic data with epidemiological data. The BLAST, multiple sequence alignment and phylogenetic analysis tools were integrated for online data analysis. Furthermore, IVDB offers instant access to pre-computed alignments and polymorphisms of IV genes and proteins, and presents the results as SNP distribution plots and minor allele distributions. IVDB is publicly available at http://influenza.genomics.org.cn.
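A minimal sketch of a Q-Filter-style ranking: classifying a nucleotide sequence record by content and integrity. The categories and thresholds here are illustrative, not IVDB's actual seven-category scheme.

def classify(seq, annotated_complete):
    """Assign an illustrative quality category to a nucleotide sequence."""
    seq = seq.upper()
    if any(base not in "ACGTN" for base in seq):
        return "rejected: non-nucleotide characters"
    n_fraction = seq.count("N") / max(len(seq), 1)
    if annotated_complete and n_fraction == 0:
        return "category 1: complete, unambiguous"
    if n_fraction < 0.01:
        return "category 2: near-complete"
    return "category 3+: fragmentary or ambiguous"

print(classify("ACGTACGTNACGT", annotated_complete=False))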
Pan, Jeng-Jong; Nahm, Meredith; Wakim, Paul; Cushing, Carol; Poole, Lori; Tai, Betty; Pieper, Carl F
2009-02-01
Clinical trial networks (CTNs) were created to provide a sustaining infrastructure for the conduct of multisite clinical trials. As such, they must withstand changes in membership. Centralization of infrastructure including knowledge management, portfolio management, information management, process automation, work policies, and procedures in clinical research networks facilitates consistency and ultimately research. In 2005, the National Institute on Drug Abuse (NIDA) CTN transitioned from a distributed data management model to a centralized informatics infrastructure to support the network's trial activities and administration. We describe the centralized informatics infrastructure and discuss our challenges to inform others considering such an endeavor. During the migration of a clinical trial network from a decentralized to a centralized data center model, descriptive data were captured and are presented here to assess the impact of centralization. We present the framework for the informatics infrastructure and evaluative metrics. The network has decreased the time from last patient-last visit to database lock from an average of 7.6 months to 2.8 months. The average database error rate decreased from 0.8% to 0.2%, with a corresponding decrease in the interquartile range from 0.04%-1.0% before centralization to 0.01%-0.27% after centralization. Centralization has provided the CTN with integrated trial status reporting and the first standards-based public data share. A preliminary cost-benefit analysis showed a 50% reduction in data management cost per study participant over the life of a trial. A single clinical trial network comprising addiction researchers and community treatment programs was assessed. The findings may not be applicable to other research settings. The identified informatics components provide the information and infrastructure needed for our clinical trial network. Post-centralization, data management operations are more efficient and less costly, with higher data quality.
Next Generation Models for Storage and Representation of Microbial Biological Annotation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quest, Daniel J; Land, Miriam L; Brettin, Thomas S
2010-01-01
Background Traditional genome annotation systems were developed in a very different computing era, one where the World Wide Web was just emerging. Consequently, these systems are built as centralized black boxes focused on generating high quality annotation submissions to GenBank/EMBL supported by expert manual curation. The exponential growth of sequence data drives a growing need for increasingly higher quality and automatically generated annotation. Typical annotation pipelines utilize traditional database technologies, clustered computing resources, Perl, C, and UNIX file systems to process raw sequence data, identify genes, and predict and categorize gene function. These technologies tightly couple the annotation software system to hardware and third party software (e.g. relational database systems and schemas). This makes annotation systems hard to reproduce, inflexible to modification over time, difficult to assess, difficult to partition across multiple geographic sites, and difficult to understand for those who are not domain experts. These systems are not readily open to scrutiny and therefore not scientifically tractable. The advent of Semantic Web standards such as Resource Description Framework (RDF) and OWL Web Ontology Language (OWL) enables us to construct systems that address these challenges in a new comprehensive way. Results Here, we develop a framework for linking traditional data to OWL-based ontologies in genome annotation. We show how data standards can decouple hardware and third party software tools from annotation pipelines, thereby making annotation pipelines easier to reproduce and assess. An illustrative example shows how TURTLE (Terse RDF Triple Language) can be used as a human readable, but also semantically-aware, equivalent to GenBank/EMBL files. Conclusions The power of this approach lies in its ability to assemble annotation data from multiple databases across multiple locations into a representation that is understandable to researchers. In this way, all researchers, experimental and computational, will more easily understand the informatics processes constructing genome annotation and ultimately be able to help improve the systems that produce them.
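As a rough illustration of a semantically aware, human-readable equivalent to GenBank/EMBL records, the sketch below emits a small gene annotation as TURTLE with rdflib; the namespace and property names are invented for the example, not a published ontology.

    # Sketch: express a gene annotation as RDF and serialize it to TURTLE.
    # The ex: namespace and its properties are illustrative assumptions.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/annotation/")
    g = Graph()
    g.bind("ex", EX)

    gene = EX["gene/Rv0001"]
    g.add((gene, RDF.type, EX.Gene))
    g.add((gene, EX.product, Literal("chromosomal replication initiator protein DnaA")))
    g.add((gene, EX.start, Literal(1)))
    g.add((gene, EX.end, Literal(1524)))

    print(g.serialize(format="turtle"))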
Assembling proteomics data as a prerequisite for the analysis of large scale experiments
Schmidt, Frank; Schmid, Monika; Thiede, Bernd; Pleißner, Klaus-Peter; Böhme, Martina; Jungblut, Peter R
2009-01-01
Background Despite the complete determination of the genome sequence of a huge number of bacteria, their proteomes remain relatively poorly defined. Besides new methods to increase the number of identified proteins, new database applications are necessary to store and present results of large-scale proteomics experiments. Results In the present study, a database concept has been developed to address these issues and to offer complete information via a web interface. In our concept, the Oracle based data repository system SQL-LIMS plays the central role in the proteomics workflow and was applied to the proteomes of Mycobacterium tuberculosis, Helicobacter pylori, Salmonella typhimurium and protein complexes such as 20S proteasome. Technical operations of our proteomics labs were used as the standard for SQL-LIMS template creation. By means of a Java based data parser, post-processed data of different approaches, such as LC/ESI-MS, MALDI-MS and 2-D gel electrophoresis (2-DE), were stored in SQL-LIMS. A minimum set of the proteomics data were transferred in our public 2D-PAGE database using a Java based interface (Data Transfer Tool) with the requirements of the PEDRo standardization. Furthermore, the stored proteomics data were extractable out of SQL-LIMS via XML. Conclusion The Oracle based data repository system SQL-LIMS played the central role in the proteomics workflow concept. Technical operations of our proteomics labs were used as standards for SQL-LIMS templates. Using a Java based parser, post-processed data of different approaches such as LC/ESI-MS, MALDI-MS and 1-DE and 2-DE were stored in SQL-LIMS. Thus, unique data formats of different instruments were unified and stored in SQL-LIMS tables. Moreover, a unique submission identifier allowed fast access to all experimental data. This was the main advantage compared to multi software solutions, especially if personnel fluctuations are high. Moreover, large-scale and high-throughput experiments must be managed in a comprehensive repository system such as SQL-LIMS, to query results in a systematic manner. On the other hand, these database systems are expensive and require at least one full time administrator and specialized lab manager. Moreover, the high technical dynamics in proteomics may cause problems to adjust new data formats. To summarize, SQL-LIMS met the requirements of proteomics data handling especially in skilled processes such as gel-electrophoresis or mass spectrometry and fulfilled the PSI standardization criteria. The data transfer into a public domain via DTT facilitated validation of proteomics data. Additionally, evaluation of mass spectra by post-processing using MS-Screener improved the reliability of mass analysis and prevented storage of data junk. PMID:19166578
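The core idea of unifying differently formatted instrument outputs under one submission identifier can be sketched in a few lines; the table layout below is invented for illustration, and SQLite stands in for the Oracle-based SQL-LIMS.

    # Sketch: store post-processed results from different instruments under a
    # single submission identifier so all experiment data can be fetched at once.
    import sqlite3, uuid

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE identification (
        submission_id TEXT, instrument TEXT, protein TEXT, score REAL)""")

    submission_id = str(uuid.uuid4())  # unique id for the whole submission
    rows = [
        (submission_id, "LC/ESI-MS", "P9WNW3", 87.2),
        (submission_id, "MALDI-MS", "P9WNW3", 64.5),
    ]
    conn.executemany("INSERT INTO identification VALUES (?, ?, ?, ?)", rows)

    for row in conn.execute(
            "SELECT instrument, protein, score FROM identification WHERE submission_id = ?",
            (submission_id,)):
        print(row)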
The plant phenological online database (PPODB): an online database for long-term phenological data.
Dierenbach, Jonas; Badeck, Franz-W; Schaber, Jörg
2013-09-01
We present an online database that provides unrestricted and free access to over 16 million plant phenological observations from over 8,000 stations in Central Europe between the years 1880 and 2009. Unique features are (1) a flexible and unrestricted access to a full-fledged database, allowing for a wide range of individual queries and data retrieval, (2) historical data for Germany before 1951 ranging back to 1880, and (3) more than 480 curated long-term time series covering more than 100 years for individual phenological phases and plants combined over Natural Regions in Germany. Time series for single stations or Natural Regions can be accessed through a user-friendly graphical geo-referenced interface. The joint databases made available with the plant phenological database PPODB render accessible an important data source for further analyses of long-term changes in phenology. The database can be accessed via www.ppodb.de.
The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi
2018-03-19
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short and long term storage layers, the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
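The aggregation-pipeline idea reduces to grouping unstructured job-report documents into per-site metrics. The sketch below is a toy in-memory version with invented fields; the production system uses HDFS, Spark, and a document-oriented database.

    # Toy aggregation over unstructured job-report documents, grouped by site.
    # Document fields and values are invented examples.
    from collections import defaultdict

    reports = [
        {"task": "reco", "site": "T1_US_FNAL", "state": "success", "wall_sec": 5400},
        {"task": "reco", "site": "T1_US_FNAL", "state": "failed",  "wall_sec": 1200},
        {"task": "sim",  "site": "T2_CH_CERN", "state": "success", "wall_sec": 7200},
    ]

    totals = defaultdict(lambda: {"jobs": 0, "failed": 0, "wall_sec": 0})
    for r in reports:
        bucket = totals[r["site"]]
        bucket["jobs"] += 1
        bucket["failed"] += r["state"] == "failed"
        bucket["wall_sec"] += r["wall_sec"]

    for site, m in totals.items():
        print(site, m["jobs"], "jobs,", m["failed"], "failed,", m["wall_sec"], "s wall")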
Hacker, Robert I.; Garcia, Lorena De Marco; Chawla, Ankur; Panetta, Thomas F.
2012-01-01
Fibrin sheaths are a heterogeneous matrix of cells and debris that form around catheters and are a known cause of central venous stenosis and catheter failure. A total of 50 cases of central venous catheter fibrin sheath angioplasty (FSA) after catheter removal or exchange are presented. A retrospective review of an outpatient office database identified 70 eligible patients over a 19-month period. After informed consent was obtained, the dialysis catheter exiting the skin was clamped and amputated, and a wire was inserted. The catheter was then removed, a 9-French sheath was inserted into the superior vena cava, and a venogram was performed. If a fibrin sheath was present, angioplasty was performed using an 8 × 4 or 10 × 4 balloon along the entire length of the fibrin sheath. A completion venogram was performed to document obliteration of the sheath. During the study, 50 patients were diagnosed with a fibrin sheath, and 43 had no pre-existing central venous stenosis. After FSA, 39 of the 43 patients' (91%) central systems remained patent without the need for subsequent interventions; 3 patients (7%) developed subclavian stenoses requiring repeat angioplasty and stenting; 1 patient (2.3%) developed an occlusion requiring a reintervention. Seven patients with prior central stenosis required multiple angioplasties; five required stenting of their central lesions. Every patient had follow-up fistulograms to document long-term patency. We propose that FSA is a prudent and safe procedure that may help reduce the risk of central venous stenosis from fibrin sheaths due to central venous catheters. PMID:23997555
Issues central to a useful image understanding environment
NASA Astrophysics Data System (ADS)
Beveridge, J. Ross; Draper, Bruce A.; Hanson, Allen R.; Riseman, Edward M.
1992-04-01
A recent DARPA initiative has sparked interest in software environments for computer vision. The goal is a single environment to support both basic research and technology transfer. This paper lays out six fundamental attributes such a system must possess: (1) support for both C and Lisp, (2) extensibility, (3) data sharing, (4) data query facilities tailored to vision, (5) graphics, and (6) code sharing. The first three attributes fundamentally constrain the system design. Support for both C and Lisp demands some form of database or data-store for passing data between languages. Extensibility demands that system support facilities, such as spatial retrieval of data, be readily extended to new user-defined datatypes. Finally, data sharing demands that data saved by one user, including data of a user-defined type, must be readable by another user.
Cyclic subway networks are less risky in metropolises
NASA Astrophysics Data System (ADS)
Xiao, Ying; Zhang, Hai-Tao; Xu, Bowen; Zhu, Tao; Chen, Guanrong; Chen, Duxin
2018-02-01
Subways are crucial in modern transportation systems of metropolises. To quantitatively evaluate the potential risks to subway networks from natural disasters or deliberate attacks, real data from seven Chinese subway systems are collected and their population distributions and anti-risk capabilities are analyzed. Counterintuitively, it is found that transfer stations with large numbers of connections are not the most crucial; rather, the stations and lines with large betweenness centrality are essential if subway networks are attacked. It is also found that cycles reduce such correlations due to the existence of alternative paths. To simulate the data-based observations, a network model is proposed to characterize the dynamics of subway systems under various intensities of attacks on stations and lines. This study sheds some light onto risk assessment of subway networks in metropolitan cities.
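The betweenness-centrality measure behind this finding is easy to reproduce on a toy network; the graph below is invented and simply shows how a cycle provides alternative paths around otherwise critical stations.

    # Betweenness centrality on a toy subway graph: high-betweenness stations,
    # not just high-degree hubs, carry most shortest paths.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("A", "B"), ("B", "C"), ("C", "D"),  # one line
        ("C", "E"), ("E", "F"),              # branch through C
        ("D", "F"),                          # a cycle offering an alternative path
    ])

    bc = nx.betweenness_centrality(G)
    for station, score in sorted(bc.items(), key=lambda kv: -kv[1]):
        print(station, round(score, 3))

Removing the ("D", "F") edge and recomputing shows how the cycle lowers the centrality, and hence the criticality, of the stations it bypasses.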
Development of web-based services for an ensemble flood forecasting and risk assessment system
NASA Astrophysics Data System (ADS)
Yaw Manful, Desmond; He, Yi; Cloke, Hannah; Pappenberger, Florian; Li, Zhijia; Wetterhall, Fredrik; Huang, Yingchun; Hu, Yuzhong
2010-05-01
Flooding is a widespread and devastating natural disaster worldwide. Floods that took place in the last decade in China were ranked the worst amongst recorded floods worldwide in terms of the number of human fatalities and economic losses (Munich Re-Insurance). Rapid economic development and population expansion into low-lying flood plains have worsened the situation. Current conventional flood prediction systems in China are suited neither to the perceptible climate variability nor to the rapid pace of urbanization sweeping the country. Flood prediction, from short-term (a few hours) to medium-term (a few days), needs to be revisited and adapted to changing socio-economic and hydro-climatic realities. The latest technology requires the implementation of multiple numerical weather prediction systems. The availability of twelve global ensemble weather prediction systems through the 'THORPEX Interactive Grand Global Ensemble' (TIGGE) offers a good opportunity for an effective state-of-the-art early forecasting system. A prototype of a Novel Flood Early Warning System (NEWS) using the TIGGE database is tested in the Huai River basin in east-central China. It is the first early flood warning system in China that uses the massive TIGGE database cascaded with river catchment models, the Xinanjiang hydrologic model and a 1-D hydraulic model, to predict river discharge and flood inundation. The NEWS algorithm is also designed to provide web-based services to a broad spectrum of end-users. The latter presents challenges, as the databases and proprietary codes reside in different locations and converge at dissimilar times. NEWS will thus make use of a ready-to-run grid system that makes distributed computing and data resources available in a seamless and secure way. An ability to run on different operating systems and to provide a front end that is accessible to a broad spectrum of end-users is an additional requirement. The aim is to achieve robust interoperability through strong security and workflow capabilities. A physical network diagram and a workflow scheme of all the models, codes and databases used to realize the NEWS algorithm are presented. They constitute a first step in the development of a platform for providing real-time flood forecasting services on the web to mitigate 21st-century weather phenomena.
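The basic ensemble-forecasting quantity, the probability that forecast discharge exceeds a flood threshold across ensemble members, reduces to a simple fraction; the member values and threshold below are made up for illustration.

    # Sketch: exceedance probability from an ensemble of discharge forecasts.
    def flood_probability(ensemble_discharges, threshold):
        """Fraction of ensemble members at or above the flood threshold."""
        exceed = sum(q >= threshold for q in ensemble_discharges)
        return exceed / len(ensemble_discharges)

    members = [820, 1450, 990, 1630, 1510, 760, 1390, 1820, 1040, 1270]  # m^3/s
    print(flood_probability(members, threshold=1300))  # -> 0.5, e.g. issue a warning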
Managing hybrid marketing systems.
Moriarty, R T; Moran, U
1990-01-01
As competition increases and costs become critical, companies that once went to market only one way are adding new channels and using new methods - creating hybrid marketing systems. These hybrid marketing systems hold the promise of greater coverage and reduced costs. But they are also hard to manage; they inevitably raise questions of conflict and control: conflict because marketing units compete for customers; control because new indirect channels are less subject to management authority. Hard as they are to manage, however, hybrid marketing systems promise to become the dominant design, replacing the "purebred" channel strategy in all kinds of businesses. The trick to managing the hybrid is to analyze tasks and channels within and across a marketing system. A map - the hybrid grid - can help managers make sense of their hybrid system. What the chart reveals is that channels are not the basic building blocks of a marketing system; marketing tasks are. The hybrid grid forces managers to consider various combinations of channels and tasks that will optimize both cost and coverage. Managing conflict is also an important element of a successful hybrid system. Managers should first acknowledge the inevitability of conflict. Then they should move to bound it by creating guidelines that spell out which customers to serve through which methods. Finally, a marketing and sales productivity (MSP) system, consisting of a central marketing database, can act as the central nervous system of a hybrid marketing system, helping managers create customized channels and service for specific customer segments.
An integrated command control and communications center for first responders
NASA Astrophysics Data System (ADS)
Messner, Richard A.; Hludik, Frank; Vidacic, Dragan; Melnyk, Pavlo
2005-05-01
First responders to a major incident include many different agencies. These may include law enforcement officers, multiple fire departments, paramedics, HAZMAT response teams, and possibly even federal personnel such as FBI and FEMA. Oftentimes multiple jurisdictions respond to the incident, which causes interoperability issues with respect to communication and dissemination of time-critical information. Accurate information from all responding sources needs to be rapidly collected and made available to the current on-site responders as well as the follow-on responders who may just be arriving on scene. The creation of a common central database with a simple, easy to use interface that is dynamically updated in real time would allow prompt and efficient information distribution between different jurisdictions. Such a system is paramount to the success of any response to a major incident. First responders typically arrive in mobile vehicles that are equipped with communications equipment. Although the first responders may make reports back to their specific home based command centers, the details of those reports are not typically available to other first responders who are not a part of that agency's infrastructure. Furthermore, the collection of information often occurs outside of the first responder vehicle, and the details of the scene are normally either radioed from the field or written down and then disseminated after significant delay. Since first responders are not usually on the same communications channels, and since there is normally a considerable amount of confusion during the first few hours on scene, it would be beneficial if there were a centralized repository of time-critical information which could be accessed by all the first responders in a common fashion without having to redesign or add significantly to each first responder's hardware/software systems. Each first responder would then be able to provide information regarding their particular situation, and such information could be accessed by all responding personnel. This will require the transmission of information provided by the first responder to a common central database system. In order to fully investigate the use of technology, it is advantageous to build a test bed in order to evaluate the proper hardware/software necessary, and explore the envisioned scenarios of operation before deployment of an actual system. This paper describes an ongoing effort at the University of New Hampshire to address these emergency responder needs.
Root resorption during orthodontic treatment.
Walker, Sally
2010-01-01
Medline, Embase, LILACS, The Cochrane Library (Cochrane Database of Systematic Reviews, CENTRAL, and Cochrane Oral Health Group Trials Register) Web of Science, EBM Reviews, Computer Retrieval of Information on Scientific Project (CRISP, www.crisp.cit.nih.gov), On-Line Computer Library Center (www.oclc.org), Google Index to Scientific and Technical Proceedings, PAHO (www.paho.org), WHOLis (www.who.int/library/databases/en), BBO (Brazilian Bibliography of Dentistry), CEPS (Chinese Electronic Periodical Services), Conference materials (www.bl.uk/services/bsds/dsc/conference.html), ProQuest Dissertation Abstracts and Thesis database, TrialCentral (www.trialscentral.org), National Research Register (www.controlled-trials.com), www.Clinicaltrials.gov and SIGLE (System for Information on Grey Literature in Europe). Randomised controlled trials including split mouth design, recording the presence or absence of external apical root resorption (EARR) by treatment group at the end of the treatment period. Data were extracted independently by two reviewers using specially designed and piloted forms. Quality was also assessed independently by the same reviewers. After evaluating titles and abstracts, 144 full articles were obtained of which 13 articles, describing 11 trials, fulfilled the criteria for inclusion. Differences in the methodological approaches and reporting results made quantitative statistical comparisons impossible. Evidence suggests that comprehensive orthodontic treatment causes increased incidence and severity of root resorption, and heavy forces might be particularly harmful. Orthodontically induced inflammatory root resorption is unaffected by archwire sequencing, bracket prescription, and self-ligation. Previous trauma and tooth morphology are unlikely causative factors. There is some evidence that a two- to three-month pause in treatment decreases total root resorption. The results were inconclusive in the clinical management of root resorption, but there is evidence to support the use of light forces, especially with incisor intrusion.
Creating of Central Geospatial Database of the Slovak Republic and Procedures of its Revision
NASA Astrophysics Data System (ADS)
Miškolci, M.; Šafář, V.; Šrámková, R.
2016-06-01
The article describes the creation of an initial three-dimensional geodatabase, from planning and design through the determination of technological and production processes to the practical use of the Central Geospatial Database (CGD; the official name in Slovak is Centrálna Priestorová Databáza, CPD), and briefly describes the procedures for its revision. CGD ensures proper collection, processing, storing, transferring and displaying of digital geospatial information. CGD is used by the Ministry of Defense (MoD) for defense and crisis management tasks and by the Integrated rescue system. For military personnel, CGD runs on the MoD intranet; for users outside the MoD it is transformed into ZbGIS (the Primary Geodatabase of the Slovak Republic) and runs on a public web site. CGD is a global set of geospatial information: a vector computer model which completely covers the entire territory of Slovakia. The seamless CGD is created by digitizing the real world using stereoscopic photogrammetric methods and measuring object properties. The basic vector model of CGD (from photogrammetric processing) is then taken to the field for inspection and additional collection of object properties across the whole mapped area. Finally, real-world objects are spatially modeled as entities of a three-dimensional database. CGD makes it possible to know the territory comprehensively in all three spatial dimensions. Every entity in CGD records its time of collection, allowing users to assess the timeliness of the information. CGD can be used for geographical analysis, geo-referencing, cartography and various special-purpose mapping, and aims not only to cover the needs of the MoD but to become a reference model for the national geographical infrastructure.
Exploring public databases to characterize urban flood risks in Amsterdam
NASA Astrophysics Data System (ADS)
Gaitan, Santiago; ten Veldhuis, Marie-claire; van de Giesen, Nick
2015-04-01
Cities worldwide are challenged by increasing urban flood risks. Precise and realistic measures are required to decide upon investment to reduce their impacts. Obvious factors affecting flood risk include sewer system performance and urban topography. However, currently implemented sewer and topographic models do not provide realistic predictions of local flooding occurrence during heavy rain events. Assessing other factors such as spatially distributed rainfall and socioeconomic characteristics may help to explain the probability and impacts of urban flooding. Several public databases were analyzed: complaints about flooding made by citizens, rainfall depths (15 min and 100 Ha spatio-temporal resolution), grids describing the number of inhabitants, income, and housing price (1 Ha and 25 Ha resolution), and building age. Data analysis was done using Python and GIS programming, and included spatial indexing of data, cluster analysis, and multivariate regression on the complaints. Complaints were used as a proxy to characterize flooding impacts. The cluster analysis, run for all the variables except the complaints, grouped part of the grid cells of central Amsterdam into a highly differentiated group, covering 10% of the analyzed area and accounting for 25% of registered complaints. The configuration of the analyzed variables in central Amsterdam coincides with a high complaint count. Remaining complaints were evenly dispersed along other groups. An adjusted R2 of 0.38 in the multivariate regression suggests that explanatory power can improve if additional variables are considered. While rainfall intensity explained 4% of the incidence of complaints, population density and building age each significantly explained around 20%. Data mining of public databases proved to be a valuable tool to identify factors explaining variability in occurrence of urban pluvial flooding, though additional variables must be considered to fully explain flood risk variability.
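The adjusted R2 quoted above comes from an ordinary multivariate least-squares fit; the sketch below uses synthetic data, so the coefficients and distributions are invented rather than the Amsterdam values.

    # Multivariate regression of complaint counts with adjusted R^2 (synthetic data).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    rain = rng.gamma(2.0, 5.0, n)      # rainfall intensity per cell
    density = rng.normal(80, 20, n)    # inhabitants per cell
    age = rng.normal(60, 25, n)        # mean building age per cell
    complaints = 0.05 * rain + 0.04 * density + 0.03 * age + rng.normal(0, 2, n)

    X = np.column_stack([np.ones(n), rain, density, age])
    beta, *_ = np.linalg.lstsq(X, complaints, rcond=None)
    resid = complaints - X @ beta
    r2 = 1 - resid.var() / complaints.var()
    p = X.shape[1] - 1                 # number of predictors
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    print(round(r2, 2), round(adj_r2, 2))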
DSSTox: New On-line Resource for Publishing Structure-Standardized Toxicity Databases
Ann M. Richard1, Jamie Burch2, ClarLynda Williams3
1Nat. Health and Environ. Effects Res. Lab, US EPA, Res. Triangle Park, NC 27711; 2EPA-NC Central Univ Student COOP, US EPA, Res. Tri...
ERIC Educational Resources Information Center
Bowman, Benjamin F.
For the past two decades the central Information Retrieval Services of the Max Planck Society has been providing database searches for scientists in Max Planck Institutes and Research Groups throughout Germany. As a supplement to traditional search services offered by professional intermediaries, they have recently fostered the introduction of a…
O'Leary, Helen; Smart, Keith M; Moloney, Niamh A; Doody, Catherine M
2017-02-01
Research suggests that peripheral and central nervous system sensitization can contribute to the overall pain experience in peripheral musculoskeletal (MSK) conditions. It is unclear, however, whether sensitization of the nervous system results in poorer outcomes following the treatment. This systematic review investigated whether nervous system sensitization in peripheral MSK conditions predicts poorer clinical outcomes in response to a surgical or conservative intervention. Four electronic databases were searched to identify the relevant studies. Eligible studies had a prospective design, with a follow-up assessing the outcome in terms of pain or disability. Studies that used baseline indices of nervous system sensitization were included, such as quantitative sensory testing (QST) or questionnaires that measured centrally mediated symptoms. Thirteen studies met the inclusion criteria, of which six were at a high risk of bias. The peripheral MSK conditions investigated were knee and hip osteoarthritis, shoulder pain, and elbow tendinopathy. QST parameters indicative of sensitization (lower electrical pain thresholds, cold hyperalgesia, enhanced temporal summation, lower punctate sharpness thresholds) were associated with negative outcome (more pain or disability) in 5 small exploratory studies. Larger studies that accounted for multiple confounders in design and analysis did not support a predictive relationship between QST parameters and outcome. Two studies used self-report measures to capture comorbid centrally mediated symptoms, and found higher questionnaire scores were independently predictive of more persistent pain following a total joint arthroplasty. This systematic review found insufficient evidence to support an independent predictive relationship between QST measures of nervous system sensitization and treatment outcome. Self-report measures demonstrated better predictive ability. Further high-quality prognostic research is warranted. © 2016 World Institute of Pain.
Adapting the CUAHSI Hydrologic Information System to OGC standards
NASA Astrophysics Data System (ADS)
Valentine, D. W.; Whitenack, T.; Zaslavsky, I.
2010-12-01
The CUAHSI Hydrologic Information System (HIS) provides web and desktop client access to hydrologic observations via water data web services using an XML schema called “WaterML”. The WaterML 1.x specification and the corresponding Water Data Services have been the backbone of the HIS service-oriented architecture (SOA) and have been adopted for serving hydrologic data by several federal agencies and many academic groups. The central discovery service, HIS Central, is based on a metadata catalog that references 4.7 billion observations, organized as 23 million data series from 1.5 million sites from 51 organizations. Observations data are published using HydroServer nodes that have been deployed at 18 organizations. Usage of HIS increased eight-fold from 2008 to 2010, doubling from 1600 data series a day in 2009 to 3600 data series a day in the first half of 2010. The HIS Central metadata catalog currently harvests information from 56 Water Data Services. We collaborate on the catalog updates with two federal partners, USGS and US EPA: their data series are periodically reloaded into the HIS metadata catalog. We are pursuing two main development directions in the HIS project: cloud-based computing and further compliance with Open Geospatial Consortium (OGC) standards. The goal of moving to cloud computing is to provide a scalable collaborative system with simpler deployment and less dependence on hardware maintenance and staff. This move requires re-architecting the information models underlying the metadata catalog and Water Data Services to be independent of the underlying relational database model, allowing for implementation on both relational databases and cloud-based processing systems. Cloud-based HIS Central resources can be managed collaboratively; partners share responsibility for their metadata by publishing data series information into the centralized catalog. Publishing data series will use REST-based service interfaces, like OData, as the basis for ingesting data series information into a cloud-hosted catalog. Future HIS services will provide information via OGC standards that allow observational data access from commercial GIS applications. Use of standards will allow tools to access observational data from other projects using standards, such as the Ocean Observatories Initiative, and allow tools from such projects to be integrated into the HIS toolset. With international collaborators, we have been developing a water information exchange language called “WaterML 2.0” which will be used to deliver observations data over OGC Sensor Observation Services (SOS). A software stack of OGC standard services will provide access to HIS information. In addition to SOS, Web Mapping and Feature Services (WMS and WFS) will provide access to location information. Catalog Services for the Web (CSW) will provide a catalog for water information that is both centralized and distributed. We intend the OGC standards to supplement, rather than replace, the existing HIS service interfaces. The ultimate goal of this development is to expand access to hydrologic observations data and create an environment where these data can be seamlessly integrated with standards-compliant data resources.
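Consuming a WaterML-style time-series response amounts to walking an XML tree; the document below is a simplified stand-in for illustration, not the actual WaterML 1.x or 2.0 schema.

    # Sketch: pull site metadata and values out of a WaterML-like XML response.
    # Element and attribute names are simplified assumptions.
    import xml.etree.ElementTree as ET

    xml = """
    <timeSeriesResponse>
      <timeSeries siteCode="NWIS:01646500" variable="discharge">
        <value dateTime="2010-06-01T00:00:00">310.0</value>
        <value dateTime="2010-06-01T00:15:00">305.0</value>
      </timeSeries>
    </timeSeriesResponse>
    """

    root = ET.fromstring(xml)
    for series in root.iter("timeSeries"):
        print(series.get("siteCode"), series.get("variable"))
        for v in series.iter("value"):
            print(" ", v.get("dateTime"), float(v.text))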
Hine, A.C.; Brooks, G.R.; Davis, R.A.; Duncan, D.S.; Locker, S.D.; Twichell, D.C.; Gelfenbaum, G.
2003-01-01
This paper provides an overview for this special publication on the geologic framework of the inner shelf and coastal zone of west-central Florida. This is a significant geologic setting in that it lies at the center of an ancient carbonate platform facing an enormous ramp that has exerted large-scale control on coastal geomorphology, the availability of sediments, and the level of wave energy. In order to understand the Holocene geologic history of this depositional system, a regional study defined by natural boundaries (north end of a barrier island to the apex of a headland) was undertaken by a group of government and university coastal geologists using a wide variety of laboratory and field techniques. It is the purpose of this introductory paper to define the character of this coastal/inner shelf system, provide a historical geologic perspective and background of environmental information, define the overall database, present the collective objectives of this regional study, and very briefly present the main aspects of each contribution. Specific conclusions are presented at the end of each paper composing this volume. © 2003 Elsevier B.V. All rights reserved.
Research and development of a digital design system for hull structures
NASA Astrophysics Data System (ADS)
Zhan, Yi-Ting; Ji, Zhuo-Shang; Liu, Yin-Dong
2007-06-01
Methods used for digital ship design were studied and formed the basis of a proposed frame model suitable for ship construction modeling. Based on 3-D modeling software, a digital design system for hull structures was developed. Basic software systems for modeling, modifying, and assembly simulation were developed. The system has good compatibility: models created by it can be saved in different 3-D file formats, and 2-D engineering drawings can be output directly. The model can be modified dynamically, overcoming the need for repeated modifications during hull structural design. Through operations such as model construction, intervention inspection, and collision detection, problems can be identified and corrected during the hull structural design stage. Technologies for centralized control of the system, database management, and 3-D digital design are integrated into this digital model in the preliminary design stage of shipbuilding.
[Post-marketing surveillance systems for psychoactive prescription drug abuse].
Nordmann, Sandra; Frauger, Elisabeth; Pauly, Vanessa; Rouby, Frank; Mallaret, Michel; Micallef, Joëlle; Thirion, Xavier
2011-01-01
Drugs affecting the central nervous system form a unique group of products for surveillance because they can be misused, abused or diverted. Given the often concealed nature of this behaviour, specific post-marketing surveillance systems have been developed to monitor abuse of prescription drugs in some countries. The purpose of this review is to list and describe post-marketing surveillance systems, according to their methodology, in France and in other countries. These programs are based on adverse effect notifications, medical or legal consequences of abuse, general or specific population-based surveys, professional networks or medication databases. Some programs use several information sources simultaneously. In conclusion, the multifaceted nature, diversity and inventiveness of post-marketing surveillance systems reflect the complexity of the abuse issue. © 2011 Société Française de Pharmacologie et de Thérapeutique.
Carroll, A E; Saluja, S; Tarczy-Hornoch, P
2001-01-01
Personal Digital Assistants (PDAs) offer clinicians the ability to enter and manage critical information at the point of care. Although PDAs have always been designed to be intuitive and easy to use, recent advances in technology have made them even more accessible. The ability to link data on a PDA (client) to a central database (server) allows for near-unlimited potential in developing point of care applications and systems for patient data management. Although many stand-alone systems exist for PDAs, none are designed to work in an integrated client/server environment. This paper describes the design, software and hardware selection, and preliminary testing of a PDA based patient data and charting system for use in the University of Washington Neonatal Intensive Care Unit (NICU). This system will be the subject of a subsequent study to determine its impact on patient outcomes and clinician efficiency.
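The client/server link described here is essentially an upsert of hand-held records into the central store. Below is a minimal sketch with an invented schema; the abstract does not name the NICU system's actual software or tables.

    # Sketch: push hand-held (client) records into a central database (server).
    import sqlite3

    central = sqlite3.connect(":memory:")
    central.execute("""CREATE TABLE encounter (
        record_id TEXT PRIMARY KEY, resident TEXT, patient TEXT, note TEXT)""")

    def sync(central_db, pda_records):
        """Upsert records from a hand-held device; last write wins."""
        central_db.executemany(
            "INSERT OR REPLACE INTO encounter VALUES (?, ?, ?, ?)", pda_records)
        central_db.commit()

    sync(central, [("r1", "res42", "pt9", "ROP exam"),
                   ("r2", "res42", "pt12", "lumbar puncture")])
    print(central.execute("SELECT COUNT(*) FROM encounter").fetchone()[0], "records")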
Marenco, Luis N.; Wang, Rixin; Bandrowski, Anita E.; Grethe, Jeffrey S.; Shepherd, Gordon M.; Miller, Perry L.
2014-01-01
This paper describes how DISCO, the data aggregator that supports the Neuroscience Information Framework (NIF), has been extended to play a central role in automating the complex workflow required to support and coordinate the NIF’s data integration capabilities. The NIF is an NIH Neuroscience Blueprint initiative designed to help researchers access the wealth of data related to the neurosciences available via the Internet. A central component is the NIF Federation, a searchable database that currently contains data from 231 data and information resources regularly harvested, updated, and warehoused in the DISCO system. In the past several years, DISCO has greatly extended its functionality and has evolved to play a central role in automating the complex, ongoing process of harvesting, validating, integrating, and displaying neuroscience data from a growing set of participating resources. This paper provides an overview of DISCO’s current capabilities and discusses a number of the challenges and future directions related to the process of coordinating the integration of neuroscience data within the NIF Federation. PMID:25018728
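DISCO's harvest-validate-warehouse loop can be caricatured in a few lines; the resource list, validation rule, and warehouse structure below are placeholders, not NIF's actual implementation.

    # Schematic harvest -> validate -> warehouse workflow over registered resources.
    def harvest(resource):
        # in reality: fetch records over HTTP from the registered resource
        return resource["records"]

    def validate(record):
        return bool(record.get("id")) and bool(record.get("name"))

    warehouse = {}
    resources = [
        {"name": "ModelDB", "records": [{"id": "m1", "name": "HH neuron model"}]},
        {"name": "NeuroMorpho", "records": [{"id": "n1", "name": "pyramidal cell"},
                                            {"id": ""}]},
    ]

    for res in resources:
        good = [r for r in harvest(res) if validate(r)]
        warehouse[res["name"]] = good
        print(res["name"], ":", len(good), "records loaded")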
Karbalaei, Reza; Allahyari, Marzieh; Rezaei-Tavirani, Mostafa; Asadzadeh-Aghdaei, Hamid; Zali, Mohammad Reza
2018-01-01
Analysis of reconstructed networks of two diseases, NAFLD and Alzheimer's disease, and their relationship based on systems biology methods. NAFLD and Alzheimer's disease are two complex diseases with growing prevalence and high cost for countries. There are reports of a relationship and shared spreading pathways between these two diseases. In addition, they have some similar risk factors, particularly lifestyle factors such as diet and exercise. Therefore, a systems biology approach can help to discover their relationship. The DisGeNET and STRING databases were the sources of disease genes and network construction. Three Cytoscape plugins, ClusterONE, ClueGO and CluePedia, were used to analyze and cluster the networks and enrich pathways. An R package was used to determine the best centrality method. Finally, hubs and bottleneck nodes were defined based on degree and betweenness. There were 190 genes common to NAFLD and Alzheimer's disease, which were used to construct a network with the STRING database. The resulting network contained 182 nodes and 2591 edges and comprised four clusters. Enrichment of these clusters separately led to carbohydrate metabolism, long-chain fatty acid, and regulation of JAK-STAT and IL-17 signaling pathways, respectively. Seven genes were selected as hub-bottlenecks: IL6, AKT1, TP53, TNF, JUN, VEGFA and PPARG. Enrichment of these proteins and their first neighbors in the network with the OMIM database identified diabetes and obesity as ancestors of NAFLD and AD. Systems biology methods, specifically PPI networks, can be useful for analyzing complicated related diseases. Finding hub and bottleneck proteins should be the goal of drug design and of introducing disease markers.
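Selecting hub-bottleneck nodes as the overlap of high-degree (hub) and high-betweenness (bottleneck) nodes is straightforward with networkx; the toy edge list below merely stands in for the real 182-node network.

    # Sketch: hub-bottlenecks = top-k by degree intersected with top-k by betweenness.
    import networkx as nx

    G = nx.Graph([
        ("IL6", "TNF"), ("IL6", "AKT1"), ("IL6", "JUN"), ("TNF", "JUN"),
        ("AKT1", "TP53"), ("TP53", "JUN"), ("AKT1", "VEGFA"), ("VEGFA", "PPARG"),
    ])

    k = 3
    hubs = {n for n, _ in sorted(G.degree, key=lambda kv: -kv[1])[:k]}
    bc = nx.betweenness_centrality(G)
    bottlenecks = {n for n, _ in sorted(bc.items(), key=lambda kv: -kv[1])[:k]}
    print("hub-bottlenecks:", hubs & bottlenecks)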
Evolving Privatization in Eastern and Central European Higher Education
ERIC Educational Resources Information Center
Levy, Daniel
2014-01-01
With the fall of communism in 1989, Eastern and Central Europe would quickly become part of an already strong global tide of privatization in higher education. Nowhere else did private higher education rise so suddenly or strongly from virtual nonexistence to a major regional presence. A fresh database allows us to analyze the extent and…
ERIC Educational Resources Information Center
Miller, Sarah; Maguire, Lisa K.; Macdonald, Geraldine
2011-01-01
The purpose of this research is to determine the effects of home-based programmes aimed specifically at improving developmental outcomes for preschool children from socially disadvantaged families. The authors searched the following databases between 7 October and 12 October 2010: Cochrane Central Register of Controlled Trials (CENTRAL) (2010,…
Building Airport Surface HITL Simulation Capability
NASA Technical Reports Server (NTRS)
Chinn, Fay Cherie
2016-01-01
FutureFlight Central (FFC) is a high-fidelity, real-time simulator designed to study surface operations and automation. As an air traffic control tower simulator, FFC allows stakeholders such as the FAA, controllers, pilots, airports, and airlines to develop and test advanced surface and terminal area concepts and automation, including NextGen and beyond automation concepts and tools. These technologies will address the safety, capacity and environmental issues facing the National Airspace System. FFC also has extensive video streaming capabilities, which, combined with the 3-D database capability, make the facility ideal for any research needing an immersive virtual and/or video environment. FutureFlight Central allows human-in-the-loop testing, which accommodates human interactions and errors, giving a more complete picture than fast-time simulations. This presentation describes FFC's capabilities and the components necessary to build an airport surface human-in-the-loop simulation capability.
Nomenclature for the KIR of non-human species.
Robinson, James; Guethlein, Lisbeth A; Maccari, Giuseppe; Blokhuis, Jeroen; Bimber, Benjamin N; de Groot, Natasja G; Sanderson, Nicholas D; Abi-Rached, Laurent; Walter, Lutz; Bontrop, Ronald E; Hammond, John A; Marsh, Steven G E; Parham, Peter
2018-06-04
The increasing number of Killer Immunoglobulin-like Receptor (KIR) sequences available for non-human primate species and cattle has prompted development of a centralized database, guidelines for a standardized nomenclature, and minimum requirements for database submission. The guidelines and nomenclature are based on those used for human KIR and incorporate modifications made for inclusion of non-human species in the companion IPD-NHKIR database. Included in this first release are the rhesus macaque (Macaca mulatta), chimpanzee (Pan troglodytes), orangutan (Pongo abelii and Pongo pygmaeus), and cattle (Bos taurus).
Liljekvist, Mads Svane; Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob
2015-01-01
Background. Open access (OA) journals allow access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases' criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ's coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ was gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more databases, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received an impact factor for 2012, compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases. However, DOAJ only indexes articles for half of the biomedical journals listed, making it an incomplete source for biomedical research papers in general.
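The coverage percentages reported here reduce to set arithmetic over journal lists; the tiny identifier sets below are placeholders for the 3,236 journals studied.

    # Sketch: database coverage as set arithmetic over journal identifiers.
    doaj = {"J1", "J2", "J3", "J4", "J5", "J6", "J7"}
    conventional = {"J1", "J2", "J3", "J8"}  # union of MEDLINE, PMC, SCOPUS, EMBASE
    all_oa = doaj | conventional

    print("DOAJ coverage:", len(doaj) / len(all_oa))                  # 0.875
    print("conventional coverage:", len(conventional) / len(all_oa))  # 0.5
    print("exclusive to DOAJ:", doaj - conventional)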