Some Reliability Issues in Very Large Databases.
ERIC Educational Resources Information Center
Lynch, Clifford A.
1988-01-01
Describes the unique reliability problems of very large databases that necessitate specialized techniques for hardware problem management. The discussion covers the use of controlled partial redundancy to improve reliability, issues in operating systems and database management systems design, and the impact of disk technology on very large…
ERIC Educational Resources Information Center
Fitzgibbons, Megan; Meert, Deborah
2010-01-01
The use of bibliographic management software and its internal search interfaces is now pervasive among researchers. This study compares the results between searches conducted in academic databases' search interfaces versus the EndNote search interface. The results show mixed search reliability, depending on the database and type of search…
GEOMAGIA50: An archeointensity database with PHP and MySQL
NASA Astrophysics Data System (ADS)
Korhonen, K.; Donadini, F.; Riisager, P.; Pesonen, L. J.
2008-04-01
The GEOMAGIA50 database stores 3798 archeomagnetic and paleomagnetic intensity determinations dated to the past 50,000 years. It also stores details of the measurement setup for each determination, which are used for ranking the data according to prescribed reliability criteria. The ranking system aims to alleviate the data reliability problem inherent in this kind of data. GEOMAGIA50 is based on two popular open source technologies. The MySQL database management system is used for storing the data, whereas the functionality and user interface are provided by server-side PHP scripts. This technical brief gives a detailed description of GEOMAGIA50 from a technical viewpoint.
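As an illustrative sketch of the ranking idea this abstract describes (the schema, criteria, and values below are invented for illustration and do not reflect GEOMAGIA50's actual tables), a relational query can order intensity determinations by simple quality criteria derived from the measurement setup:

```python
import sqlite3

# Hypothetical schema: each row is one intensity determination plus the
# measurement-setup details used for reliability ranking.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE intensity (
        id INTEGER PRIMARY KEY,
        site TEXT,
        age_bp INTEGER,          -- age, years before present
        intensity_ut REAL,       -- paleointensity in microtesla
        n_specimens INTEGER,     -- specimens averaged per determination
        alteration_check INTEGER -- 1 if an alteration check was performed
    )""")
conn.executemany(
    "INSERT INTO intensity (site, age_bp, intensity_ut, n_specimens, alteration_check) "
    "VALUES (?, ?, ?, ?, ?)",
    [("A", 1200, 55.1, 8, 1), ("B", 1300, 61.0, 2, 0), ("C", 1250, 58.3, 5, 1)])

# Rank rows by toy quality criteria: more specimens and a passed
# alteration check score higher.
rows = conn.execute("""
    SELECT site,
           (CASE WHEN n_specimens >= 5 THEN 1 ELSE 0 END
            + alteration_check) AS quality
    FROM intensity
    ORDER BY quality DESC, site
""").fetchall()
print(rows)  # highest-quality determinations first
```

The same ranking could equally be computed in the PHP layer; pushing it into SQL keeps the criteria in one place for all clients.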
A major challenge for scientists and regulators is accounting for the metabolic activation of chemicals that may lead to increased toxicity. Reliable forecasting of chemical metabolism is a critical factor in estimating a chemical’s toxic potential. Research is underway to develo...
[Health management system in outpatient follow-up of kidney transplantation patients].
Zhang, Hong; Xie, Jinliang; Yao, Hui; Liu, Ling; Tan, Jianwen; Geng, Chunmi
2014-07-01
To develop a health management system for outpatient follow-up of kidney transplant patients. Access 2010 database software was used to establish the health management system for kidney transplantation patients in Windows XP operating system. Database management and post-operation follow-up of the kidney transplantation patients were realized through 6 function modules including data input, data query, data printing, questionnaire survey, data export, and follow-up management. The system worked stably and reliably, and the data input was easy and fast. The query, the counting and printing were convenient. Health management system for patients after kidney transplantation not only reduces the work pressure of the follow-up staff, but also improves the efficiency of outpatient follow-up.
Comparison of hospital databases on antibiotic consumption in France, for a single management tool.
Henard, S; Boussat, S; Demoré, B; Clément, S; Lecompte, T; May, T; Rabaud, C
2014-07-01
The surveillance of antibiotic use in hospitals and of data on resistance is an essential measure for antibiotic stewardship. There are 3 national systems in France to collect data on antibiotic use: DREES, ICATB, and ATB RAISIN. We compared these databases and drafted recommendations for the creation of an optimized database of information on antibiotic use, available to all concerned personnel: healthcare authorities, healthcare facilities, and healthcare professionals. We processed and analyzed the 3 databases (2008 data), and surveyed users. The qualitative analysis demonstrated major discrepancies in terms of objectives, healthcare facilities, participation rate, units of consumption, conditions for collection, consolidation, and control of data, and delay before availability of results. The quantitative analysis revealed that the consumption data for a given healthcare facility differed from one database to another, challenging the reliability of data collection. We specified user expectations: to compare consumption and resistance data, to carry out benchmarking, to obtain data on the prescribing habits in healthcare units, or to help understand results. The study results demonstrated the need for a reliable, single, and automated tool to manage data on antibiotic consumption compared with resistance data on several levels (national, regional, healthcare facility, healthcare units), providing rapid local feedback and educational benchmarking. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
A proposed group management scheme for XTP multicast
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
The purpose of a group management scheme is to enable its associated transfer layer protocol to be responsive to user determined reliability requirements for multicasting. Group management (GM) must assist the client process in coordinating multicast group membership, allow the user to express the subset of the multicast group that a particular multicast distribution must reach in order to be successful (reliable), and provide the transfer layer protocol with the group membership information necessary to guarantee delivery to this subset. GM provides services and mechanisms that respond to the need of the client process or process level management protocols to coordinate, modify, and determine attributes of the multicast group, especially membership. XTP GM provides a link between process groups and their multicast groups by maintaining a group membership database that identifies members in a name space understood by the underlying transfer layer protocol. Other attributes of the multicast group useful to both the client process and the data transfer protocol may be stored in the database. Examples include the relative dispersion, most recent update, and default delivery parameters of a group.
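A minimal sketch of the membership-database idea, assuming invented names and a toy acknowledgment model rather than the actual XTP mechanisms: a multicast is "reliable" exactly when the user-defined required subset of the group has received it.

```python
# Illustrative group-management membership database. The "required subset"
# expresses the user's reliability requirement from the abstract.
class GroupManager:
    def __init__(self):
        self.members = {}  # address -> attributes (e.g. most recent update)

    def join(self, address, **attrs):
        self.members[address] = attrs

    def leave(self, address):
        self.members.pop(address, None)

    def delivery_ok(self, acked, required):
        """A distribution succeeds iff every required member is a current
        group member and acknowledged receipt."""
        return set(required) <= set(self.members) and set(required) <= set(acked)

gm = GroupManager()
gm.join("host-a", role="primary")
gm.join("host-b", role="replica")
gm.join("host-c", role="replica")

# Only host-a and host-b must receive this update for success.
print(gm.delivery_ok(acked={"host-a", "host-b"}, required={"host-a", "host-b"}))  # True
print(gm.delivery_ok(acked={"host-a"}, required={"host-a", "host-b"}))            # False
```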
Corbellini, Carlo; Andreoni, Bruno; Ansaloni, Luca; Sgroi, Giovanni; Martinotti, Mario; Scandroglio, Ildo; Carzaniga, Pierluigi; Longoni, Mauro; Foschi, Diego; Dionigi, Paolo; Morandi, Eugenio; Agnello, Mauro
2018-01-01
Measurement and monitoring of the quality of care using a core set of quality measures are increasing in health service research. Although administrative databases include limited clinical data, they offer an attractive source for quality measurement. The purpose of this study, therefore, was to evaluate the completeness of different administrative data sources compared to a clinical survey in evaluating rectal cancer cases. Between May 2012 and November 2014, a clinical survey was done on 498 Lombardy patients who had rectal cancer and underwent surgical resection. These collected data were compared with the information extracted from administrative sources including Hospital Discharge Dataset, drug database, daycare activity data, fee-exemption database, and regional screening program database. The agreement evaluation was performed using a set of 12 quality indicators. Patient complexity was a difficult indicator to measure for lack of clinical data. Preoperative staging was another suboptimal indicator due to the frequent missing administrative registration of tests performed. The agreement between the 2 data sources regarding chemoradiotherapy treatments was high. Screening detection, minimally invasive techniques, length of stay, and unpreventable readmissions were detected as reliable quality indicators. Postoperative morbidity could be a useful indicator but its agreement was lower, as expected. Healthcare administrative databases are large and real-time collected repositories of data useful in measuring quality in a healthcare system. Our investigation reveals that the reliability of indicators varies between them. Ideally, a combination of data from both sources could be used in order to improve usefulness of less reliable indicators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buche, D. L.; Perry, S.
This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects.
NPTool: Towards Scalability and Reliability of Business Process Management
NASA Astrophysics Data System (ADS)
Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton
One important current challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more pronounced when the execution control must handle countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by the Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps for NPTool include reuse of control-flow patterns and support for data flow management.
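A toy illustration of storing a navigation plan relationally and deriving the next enabled activity with a query. The table layout and activity names are invented; NPDL's process-algebra operators are far richer than the simple sequencing shown here.

```python
import sqlite3

# Invented layout: each plan row is an activity with a sequencing position;
# the execution log records completed activities. The "next enabled activity"
# is then a pure SQL derivation, keeping control state in the database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE plan (activity TEXT PRIMARY KEY, position INTEGER);
    CREATE TABLE log (activity TEXT PRIMARY KEY);  -- completed activities
""")
conn.executemany("INSERT INTO plan VALUES (?, ?)",
                 [("receive_order", 1), ("check_stock", 2), ("ship", 3)])
conn.execute("INSERT INTO log VALUES ('receive_order')")

# Next activity = lowest-position activity not yet logged.
nxt = conn.execute("""
    SELECT activity FROM plan
    WHERE activity NOT IN (SELECT activity FROM log)
    ORDER BY position LIMIT 1
""").fetchone()[0]
print(nxt)
```

Keeping the execution state relational is what makes the control both recoverable (it survives restarts) and scalable across many concurrent process instances.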
DOT National Transportation Integrated Search
2005-02-01
This report presents STRUCTVIEW, a vehicle-based system for the measurement of roadway structure profiles, which uses a scanning laser rangefinder to measure various structure and roadway features while traveling at highway speed. Measurement capab...
Jaïdi, Faouzi; Labbene-Ayachi, Faten; Bouhoula, Adel
2016-12-01
Nowadays, e-healthcare is a major advancement and upcoming technology in the healthcare industry that contributes to setting up automated and efficient healthcare infrastructures. Unfortunately, several security aspects remain major challenges on the way to secure and privacy-preserving e-healthcare systems. From the access control perspective, e-healthcare systems face several issues due to the necessity of defining, at the same time, rigorous and flexible access control solutions. This delicate balance between flexibility and robustness has an immediate impact on the compliance of the deployed access control policy. To address this issue, the paper defines a general framework to organize thinking about verifying, validating and monitoring the compliance of access control policies in the context of e-healthcare databases. We study the problem of the conformity of low-level policies within relational databases and focus in particular on the case of a medical-records management database defined in the context of a Medical Information System. We propose an advanced solution for deploying reliable and efficient access control policies. Our solution extends the traditional lifecycle of an access control policy and mainly adds management of the policy's compliance. We refer to an example to illustrate the relevance of our proposal.
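One compliance check in this spirit can be sketched as comparing the low-level grants actually deployed in a database against the reference policy. The roles, objects, and privileges below are invented for illustration and are not from the paper.

```python
# Reference policy: (role, object) -> set of allowed privileges.
reference_policy = {
    ("nurse", "records"): {"SELECT"},
    ("doctor", "records"): {"SELECT", "UPDATE"},
}

# Grants as they actually exist in the deployed database.
deployed_grants = [
    ("nurse", "records", "SELECT"),
    ("nurse", "records", "UPDATE"),   # not covered by the reference policy
    ("doctor", "records", "UPDATE"),
]

# A grant is a violation if the reference policy does not allow that
# privilege for that (role, object) pair.
violations = [g for g in deployed_grants
              if g[2] not in reference_policy.get((g[0], g[1]), set())]
print(violations)  # the non-compliant grant(s)
```

In a real deployment the grants would be read from the DBMS catalog (e.g. privilege tables) rather than hard-coded, and the check would run as part of the policy's monitoring lifecycle.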
Application of China's National Forest Continuous Inventory database.
Xie, Xiaokui; Wang, Qingli; Dai, Limin; Su, Dongkai; Wang, Xinchuang; Qi, Guang; Ye, Yujing
2011-12-01
The maintenance of a timely, reliable and accurate spatial database on current forest ecosystem conditions and changes is essential to characterize and assess forest resources and support sustainable forest management. Information for such a database can be obtained only through a continuous forest inventory. The National Forest Continuous Inventory (NFCI) is the first level of China's three-tiered inventory system. The NFCI is administered by the State Forestry Administration; data are acquired by five inventory institutions around the country. Several important components of the database include land type, forest classification and age class/age group. The NFCI database in China is constructed based on 5-year inventory periods, resulting in some of the data not being timely when reports are issued. To address this problem, a forest growth simulation model has been developed to update the database for years between the periodic inventories. In order to aid in forest plan design and management, a three-dimensional virtual reality system of forest landscapes for selected units in the database (compartment or sub-compartment) has also been developed based on Virtual Reality Modeling Language. In addition, a transparent internet publishing system for a spatial database based on open source WebGIS (UMN Map Server) has been designed and utilized to enhance public understanding and encourage free participation of interested parties in the development, implementation, and planning of sustainable forest management.
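The growth-model update between periodic inventories can be sketched as follows. The constant annual growth rate is an assumption for illustration only; the NFCI's actual simulation model is far more detailed.

```python
# Toy between-inventory update: project stand volume forward from the last
# periodic measurement by compounding an assumed annual growth rate.
def update_volume(volume_m3, years, annual_growth_rate=0.03):
    """Project stand volume (m3/ha) forward by `years` of compound growth."""
    return volume_m3 * (1 + annual_growth_rate) ** years

# Inventory measured 120 m3/ha three years ago; estimate current volume.
estimate = update_volume(120.0, years=3)
print(round(estimate, 1))
```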
Jeannerat, Damien
2017-01-01
The introduction of a universal data format to report the correlation data of 2D NMR spectra such as COSY, HSQC and HMBC spectra will have a large impact on the reliability of structure determination of small organic molecules. These lists of assigned cross peaks will bridge signals found in NMR 1D and 2D spectra and the assigned chemical structure. The record could be very compact, human and computer readable, so that it can be included in the supplementary material of publications and easily transferred into databases of scientific literature and chemical compounds. The records will allow authors, reviewers and future users to test the consistency and, in favorable situations, the uniqueness of the assignment of the correlation data to the associated chemical structures. Ideally, the data format of the correlation data should include direct links to the NMR spectra to make it possible to validate their reliability and allow direct comparison of spectra. To take full advantage of their potential, the correlation data and the NMR spectra should therefore follow any manuscript in the review process and be stored in an open-access database after publication. Keeping all NMR spectra, correlation data and assigned structures together at all times will allow the future development of validation tools increasing the reliability of past and future NMR data. This will facilitate the development of artificial intelligence analysis of NMR spectra by providing a source of data that can be used efficiently because they have been validated or can be validated by future users. Copyright © 2016 John Wiley & Sons, Ltd.
The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity
NASA Astrophysics Data System (ADS)
Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Favaro, Matteo; Gregori, Daniele; Prosperini, Andrea; Pezzi, Michele; Sapunenko, Vladimir; Zizzi, Giovanni; Vagnoni, Vincenzo
2015-05-01
The consolidation of Mass Storage services at the INFN-CNAF Tier1 Storage department over the last 5 years has resulted in a reliable, high-performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based upon an integration between the IBM GPFS parallel filesystem and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided by XRootD and HTTP/WebDAV endpoints. Besides these services, an Oracle database facility is in production, characterized by an effective level of parallelism, redundancy and availability. This facility runs databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, like Real Application Cluster (RAC), Automatic Storage Manager (ASM) and Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management and downtime reduction. The aim of the present paper is to illustrate the state of the art of the INFN-CNAF Tier1 Storage department infrastructures and software services, and to give a brief outlook on forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buche, D. L.
This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects. This report is second in a series of reports detailing this effort.
Sharper Standards for Fund-Raising Integrity
ERIC Educational Resources Information Center
Peterson, Vance T.
2004-01-01
Recently, the Council for Advancement and Support of Education (CASE) revised its standards for annual gifts and campaigns. The purpose was to erase ambiguities, establish tighter definitions for certain types of gifts, and ensure greater clarity about best management practices. In addition, the effort created a reliable database of campaign…
NASA Astrophysics Data System (ADS)
Strotov, Valery V.; Taganov, Alexander I.; Konkin, Yuriy V.; Kolesenkov, Aleksandr N.
2017-10-01
Processing and analyzing Earth remote sensing data on board an ultra-small spacecraft is a relevant task, given the significant energy cost of data transfer and the low performance of onboard computers. This raises the issue of effective and reliable storage of the overall information flow from onboard data collection systems, including Earth remote sensing data, in a specialized database. The paper considers the peculiarities of database management system operation with a multilevel memory structure. A storage format has been developed that describes the database's physical structure and contains the parameters required for loading information. This structure reduces the memory occupied by the database because key values need not be stored separately. The paper presents the architecture of a relational database management system designed for embedding in onboard ultra-small spacecraft software. Databases for storing various information, including Earth remote sensing data, can be built with this database management system for subsequent processing. The suggested architecture places low demands on the computing power and memory resources available on board an ultra-small spacecraft. Data integrity is ensured during input and modification of the structured information.
Report on Wind Turbine Subsystem Reliability - A Survey of Various Databases (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, S.
2013-07-01
The wind industry has been challenged by premature subsystem/component failures. Various reliability data collection efforts have demonstrated their value in supporting wind turbine reliability and availability research & development and industrial activities. However, most information on these data collection efforts is scattered rather than available in a centralized place. With the objective of obtaining updated reliability statistics for wind turbines and/or subsystems so as to benefit future wind reliability and availability activities, this report was put together based on a survey of various reliability databases that are accessible directly or indirectly by NREL. For each database, whenever feasible, a brief description summarizing database population, life span, and data collected is given along with its features and status. Selected results generated from the database and deemed beneficial to the industry are then highlighted. This report concludes with several observations obtained throughout the survey and several reliability data collection opportunities for the future.
PRAIRIEMAP: A GIS database for prairie grassland management in western North America
2003-01-01
The USGS Forest and Rangeland Ecosystem Science Center, Snake River Field Station (SRFS) maintains a database of spatial information, called PRAIRIEMAP, which is needed to address the management of prairie grasslands in western North America. We identify and collect spatial data for the region encompassing the historical extent of prairie grasslands (Figure 1). State and federal agencies, the primary entities responsible for management of prairie grasslands, need this information to develop proactive management strategies to prevent prairie-grassland wildlife species from being listed as Endangered Species, or to develop appropriate responses if listing does occur. Spatial data are an important component in documenting current habitat and other environmental conditions, which can be used to identify areas that have undergone significant changes in land cover and to identify underlying causes. Spatial data will also be a critical component guiding the decision processes for restoration of habitat in the Great Plains. As such, the PRAIRIEMAP database will facilitate analyses of large-scale and range-wide factors that may be causing declines in grassland habitat and populations of species that depend on it for their survival. Therefore, development of a reliable spatial database carries multiple benefits for land and wildlife management. The project consists of 3 phases: (1) identify relevant spatial data, (2) assemble, document, and archive spatial data on a computer server, and (3) develop and maintain the web site (http://prairiemap.wr.usgs.gov) for query and transfer of GIS data to managers and researchers.
Creation of the NaSCoRD Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denman, Matthew R.; Jankovsky, Zachary Kyle; Stuart, William
This report was written as part of a United States Department of Energy (DOE), Office of Nuclear Energy, Advanced Reactor Technologies program funded project to re-create the capabilities of the legacy Centralized Reliability Database Organization (CREDO) database. The CREDO database provided a record of component design and performance documentation across various systems that used sodium as a working fluid. Regaining this capability will allow the DOE complex and the domestic sodium reactor industry to better understand how previous systems were designed and built, for use in improving the design and operations of future loops. The contents of this report include: an overview of the current state of domestic sodium reliability databases; a summary of the ongoing effort to improve, understand, and process the CREDO information; a summary of the initial efforts to develop a unified sodium reliability database called the Sodium System Component Reliability Database (NaSCoRD); and an explanation of how potential users can access the domestic sodium reliability databases and the type of information that can be accessed from them.
NASA Technical Reports Server (NTRS)
Kelley, Steve; Roussopoulos, Nick; Sellis, Timos
1992-01-01
The goal of the Universal Index System (UIS) is to provide an easy-to-use and reliable interface to many different kinds of database systems. The impetus for this system was to simplify database index management for users, thus encouraging the use of indexes. As the idea grew into an actual system design, the concept of increasing database performance by facilitating the use of time-saving techniques at the user level became a theme for the project. This final report describes the design and implementation of UIS and its language interfaces. It also includes the User's Guide and the Reference Manual.
Au, Lewis; Turner, Natalie; Wong, Hui-Li; Field, Kathryn; Lee, Belinda; Boadle, David; Cooray, Prasad; Karikios, Deme; Kosmider, Suzanne; Lipton, Lara; Nott, Louise; Parente, Phillip; Tie, Jeanne; Tran, Ben; Wong, Rachel; Yip, Desmond; Shapiro, Jeremy; Gibbs, Peter
2018-04-01
Current efforts to understand patient management in clinical practice are largely based on clinician surveys with uncertain reliability. The TRACC (Treatment of Recurrent and Advanced Colorectal Cancer) database is a multisite registry collecting comprehensive treatment and outcome data on consecutive metastatic colorectal cancer (mCRC) patients at multiple sites across Australia. This study aims to determine the accuracy of oncologists' impressions of real-world practice by comparing clinicians' estimates to data captured by TRACC. Nineteen medical oncologists from nine hospitals contributing data to TRACC completed a 34-question survey regarding their impression of the management and outcomes of mCRC at their own practice and other hospitals contributing to the database. Responses were then compared with TRACC data to determine how closely their impressions reflected actual practice. Data on 1300 patients with mCRC were available. Median clinician estimated frequency of KRAS testing within 6 months of diagnosis was 80% (range: 20-100%); the TRACC documented rate was 43%. Clinicians generally overestimated the rates of first-line treatment, particularly in patients over 75 years. Estimate for bevacizumab in first line was 60% (35-80%) versus 49% in TRACC. Estimated rate for liver resection varied substantially (5-35%), and the estimated median (27%) was inconsistent with the TRACC rate (12%). Oncologists generally felt their practice was similar to other hospitals. Oncologists' estimates of current clinical practice varied and were discordant with the TRACC database, often with a tendency to overestimate interventions. Clinician surveys alone do not reliably capture contemporary clinical practices in mCRC. © 2017 John Wiley & Sons Australia, Ltd.
Registered File Support for Critical Operations Files at (Space Infrared Telescope Facility) SIRTF
NASA Technical Reports Server (NTRS)
Turek, G.; Handley, Tom; Jacobson, J.; Rector, J.
2001-01-01
The SIRTF Science Center's (SSC) Science Operations System (SOS) has to contend with nearly one hundred critical operations files via comprehensive file management services. The management is accomplished via the registered file system (otherwise known as TFS), which manages these files in a registered file repository composed of a virtual file system accessible via a TFS server and a file registration database. The TFS server provides controlled, reliable, and secure file transfer and storage by registering all file transactions and metadata in the file registration database. An API is provided for application programs to communicate with TFS servers and the repository. A command-line client implementing this API has been developed as a client tool. This paper describes the architecture and current implementation and, more importantly, the evolution of these services based on evolving community use cases and emerging information system technology.
A dedicated database system for handling multi-level data in systems biology.
Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens
2014-01-01
Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that can serve and solve the vital issues in data management and thereby facilitate data integration, modeling and analysis in systems biology within a sole database. In addition, a yeast data repository was implemented as an integrated database environment which is operated by the database system. Two applications were implemented to demonstrate extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific for two sample cases: 1) Detecting the pheromone pathway in protein interaction networks; and 2) Finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of a database system that offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the yeast integrated data clearly demonstrate the value of a sole database environment for systems biology research.
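A minimal sketch of the kind of generic entity/relation layout such a system might use, with invented table and entity names (the Snf1 query loosely echoes the abstract's second sample case):

```python
import sqlite3

# Invented layout: one table for biological entities at any omics level,
# one for typed relations between them. This generality is what lets a
# single database capture multi-level data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entity (id INTEGER PRIMARY KEY, name TEXT, level TEXT);
    CREATE TABLE relation (src INTEGER, dst INTEGER, kind TEXT);
""")
conn.executemany("INSERT INTO entity VALUES (?, ?, ?)",
                 [(1, "SNF1", "protein"), (2, "ADH2", "gene"), (3, "STE2", "gene")])
conn.executemany("INSERT INTO relation VALUES (?, ?, ?)",
                 [(1, 2, "regulates"), (1, 3, "regulates")])

# Which genes does the SNF1 protein regulate?
targets = [row[0] for row in conn.execute("""
    SELECT e2.name FROM relation r
    JOIN entity e1 ON e1.id = r.src
    JOIN entity e2 ON e2.id = r.dst
    WHERE e1.name = 'SNF1' AND r.kind = 'regulates'
    ORDER BY e2.name
""")]
print(targets)
```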
Reliability database development for use with an object-oriented fault tree evaluation program
NASA Technical Reports Server (NTRS)
Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann
1989-01-01
A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed or are under development to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
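The object-oriented coupling of a fault tree to a reliability database can be sketched as follows, assuming independent basic events and invented failure probabilities standing in for the database lookup:

```python
# Stand-in for the reliability database: component -> failure probability.
failure_db = {"pump_a": 0.01, "pump_b": 0.02, "valve": 0.05}

class BasicEvent:
    """Leaf node whose probability comes from the reliability database."""
    def __init__(self, name):
        self.name = name
    def probability(self):
        return failure_db[self.name]

class AndGate:
    """Fails only if all children fail: P(and) = prod(p_i)."""
    def __init__(self, *children):
        self.children = children
    def probability(self):
        p = 1.0
        for c in self.children:
            p *= c.probability()
        return p

class OrGate:
    """Fails if any child fails: P(or) = 1 - prod(1 - p_i)."""
    def __init__(self, *children):
        self.children = children
    def probability(self):
        p = 1.0
        for c in self.children:
            p *= 1.0 - c.probability()
        return 1.0 - p

# Top event: both pumps fail, or the valve fails.
top = OrGate(AndGate(BasicEvent("pump_a"), BasicEvent("pump_b")),
             BasicEvent("valve"))
print(round(top.probability(), 6))
```

The object-oriented structure makes swapping the dict for a live database query a local change inside `BasicEvent.probability`, which is the connection the abstract describes.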
The Golosiiv on-line plate archive database, management and maintenance
NASA Astrophysics Data System (ADS)
Pakuliak, L.; Sergeeva, T.
2007-08-01
We intend to create an online version of the database of the MAO NASU plate archive as VO-compatible structures, in accordance with the principles developed by the International Virtual Observatory Alliance, in order to make it available to the world astronomical community. The online version of the log-book database is built with MySQL and PHP. The data management system provides a user interface that supports a detailed, traditional form-filling radial search of plates, auxiliary samplings, and listings of each collection, and allows browsing of detailed collection descriptions. An administrative tool allows the database administrator to correct data, add new data sets, and control the integrity and consistency of the database as a whole. The VO-compatible database is currently being constructed according to the requirements and principles of international data archives, and it must be strongly generalized so that data mining is possible through standard interfaces and so that it best fits the requirements of the WFPDB Group for databases of plate catalogues. Ongoing enhancement of the database toward the WFPDB brings the problem of data verification to the forefront, since it demands a high degree of data reliability. The process of data verification is practically endless and inseparable from data management, owing to the diversity in the nature of data errors and, consequently, in the ways they are identified and fixed. The current status of the MAO NASU glass archive forces activity in both directions simultaneously: enhancement of the log-book database with new sets of observational data, creation of the generalized database, and cross-identification between them. The VO-compatible version of the database is being supplied with digitized data of plates obtained with a MicroTek ScanMaker 9800 XL TMA scanner.
Scanning is not exhaustive; it is conducted selectively within the framework of special projects.
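The form-filling radial (cone) search of plates described above can be sketched as follows. This is a minimal illustration, not the archive's actual code: SQLite stands in for the MySQL backend, and the table and column names (`plates`, `ra_deg`, `dec_deg`) are assumptions.

```python
import math
import sqlite3

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation between two sky positions, in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def radial_search(conn, ra, dec, radius_deg):
    """Return plate ids whose centre lies within radius_deg of (ra, dec)."""
    rows = conn.execute("SELECT plate_id, ra_deg, dec_deg FROM plates").fetchall()
    return [pid for pid, pra, pdec in rows
            if angular_sep_deg(ra, dec, pra, pdec) <= radius_deg]

# Tiny in-memory catalogue of plate centres (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plates (plate_id TEXT, ra_deg REAL, dec_deg REAL)")
conn.executemany("INSERT INTO plates VALUES (?, ?, ?)",
                 [("A001", 10.0, 20.0), ("A002", 10.5, 20.2), ("B010", 180.0, -45.0)])
print(radial_search(conn, 10.0, 20.0, 1.0))  # ['A001', 'A002']
```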
Scientific information repository assisting reflectance spectrometry in legal medicine.
Belenki, Liudmila; Sterzik, Vera; Bohnert, Michael; Zimmermann, Klaus; Liehr, Andreas W
2012-06-01
Reflectance spectrometry is a fast and reliable method for the characterization of human skin if the spectra are analyzed with respect to a physical model describing the optical properties of human skin. For a field study performed at the Institute of Legal Medicine and the Freiburg Materials Research Center of the University of Freiburg, a scientific information repository has been developed, which is a variant of an electronic laboratory notebook and assists in the acquisition, management, and high-throughput analysis of reflectance spectra in heterogeneous research environments. At the core of the repository is a database management system hosting the master data. It is filled with primary data via a graphical user interface (GUI) programmed in Java, which also enables the user to browse the database and access the results of data analysis. The latter is carried out via Matlab, Python, and C programs, which retrieve the primary data from the scientific information repository, perform the analysis, and store the results in the database for further usage.
IceVal DatAssistant: An Interactive, Automated Icing Data Management System
NASA Technical Reports Server (NTRS)
Levinson, Laurie H.; Wright, William B.
2008-01-01
As with any scientific endeavor, the foundation of icing research at the NASA Glenn Research Center (GRC) is the data acquired during experimental testing. In the case of the GRC Icing Branch, an important part of this data consists of ice tracings taken following tests carried out in the GRC Icing Research Tunnel (IRT), as well as the associated operational and environmental conditions documented during these tests. Over the years, the large number of experimental runs completed has served to emphasize the need for a consistent strategy for managing this data. To address the situation, the Icing Branch has recently elected to implement the IceVal DatAssistant automated data management system. With the release of this system, all publicly available IRT-generated experimental ice shapes with complete and verifiable conditions have now been compiled into one electronically-searchable database. Simulation software results for the equivalent conditions, generated using the latest version of the LEWICE ice shape prediction code, are likewise included and are linked to the corresponding experimental runs. In addition to this comprehensive database, the IceVal system also includes a graphically-oriented database access utility, which provides reliable and easy access to all data contained in the database. In this paper, the issues surrounding historical icing data management practices are discussed, as well as the anticipated benefits to be achieved as a result of migrating to the new system. A detailed description of the software system features and database content is also provided; and, finally, known issues and plans for future work are presented.
Partnerships - Working Together to Build The National Map
2004-01-01
Through The National Map, the U.S. Geological Survey (USGS) is working with partners to ensure that current, accurate, and complete base geographic information is available for the Nation. Designed as a network of online digital databases, it provides a consistent geographic data framework for the country and serves as a foundation for integrating, sharing, and using data easily and reliably. It provides public access to high quality geospatial data and information from multiple partners to help inform decisionmaking by resource managers and the public, and to support intergovernmental homeland security and emergency management requirements.
The use of intelligent database systems in acute pancreatitis--a systematic review.
van den Heever, Marc; Mittal, Anubhav; Haydock, Matthew; Windsor, John
2014-01-01
Acute pancreatitis (AP) is a complex disease with multiple aetiological factors, wide-ranging severity, and multiple challenges to effective triage and management. Databases, data mining and machine learning algorithms (MLAs), including artificial neural networks (ANNs), may assist by storing and interpreting data from multiple sources, potentially improving clinical decision-making. The aims were to: 1) identify database technologies used to store AP data, 2) collate and categorise variables stored in AP databases, 3) identify the MLA technologies, including ANNs, used to analyse AP data, and 4) identify clinical and non-clinical benefits and obstacles in establishing a national or international AP database. A comprehensive systematic search of online reference databases was performed. The predetermined inclusion criteria were all papers discussing 1) databases, 2) data mining or 3) MLAs, pertaining to AP, independently assessed by two reviewers with conflicts resolved by a third author. Forty-three papers were included. Three data mining technologies and five ANN methodologies were reported in the literature, and 187 collected variables were identified. ANNs increase the accuracy of severity prediction: one study showed ANNs had a sensitivity of 0.89 and specificity of 0.96 six hours after admission, compared with 0.80 and 0.85, respectively, for APACHE II (cutoff score ≥8). Reported problems with databases were incomplete data, lack of clinical data, and diagnostic reliability. This is the first systematic review examining the use of databases, MLAs and ANNs in the management of AP. The clinical benefits these technologies have over current systems and other advantages to adopting them are identified. Copyright © 2013 IAP and EPC. Published by Elsevier B.V. All rights reserved.
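The sensitivity and specificity figures quoted above follow directly from a confusion matrix. A minimal sketch with invented labels (1 = severe AP), not data from the reviewed studies:

```python
def sensitivity_specificity(truth, pred):
    """Compute (sensitivity, specificity) for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Invented outcomes and predictions for ten illustrative patients.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, pred)
print(sens, spec)  # 0.75 and roughly 0.833
```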
Concepts and data model for a co-operative neurovascular database.
Mansmann, U; Taylor, W; Porter, P; Bernarding, J; Jäger, H R; Lasjaunias, P; Terbrugge, K; Meisel, J
2001-08-01
Problems of clinical management of neurovascular diseases are very complex. This is caused by the chronic character of the diseases, a long history of symptoms and diverse treatments. If patients are to benefit from treatment, then treatment decisions have to rely on reliable and accurate knowledge of the natural history of the disease and the various treatments. Recent developments in statistical methodology and experience from electronic patient records are used to establish an information infrastructure based on a centralized register. A protocol to collect data on neurovascular diseases with technical as well as logistical aspects of implementing a database for neurovascular diseases are described. The database is designed as a co-operative tool of audit and research available to co-operating centres. When a database is linked to a systematic patient follow-up, it can be used to study prognosis. Careful analysis of patient outcome is valuable for decision-making.
Space flight risk data collection and analysis project: Risk and reliability database
NASA Technical Reports Server (NTRS)
1994-01-01
The focus of the NASA 'Space Flight Risk Data Collection and Analysis' project was to acquire and evaluate space flight data with the express purpose of establishing a database containing measurements of specific risk assessment - reliability - availability - maintainability - supportability (RRAMS) parameters. The developed comprehensive RRAMS database will support the performance of future NASA and aerospace industry risk and reliability studies. One of the primary goals has been to acquire unprocessed information relating to the reliability and availability of launch vehicles and the subsystems and components thereof from the 45th Space Wing (formerly Eastern Space and Missile Command -ESMC) at Patrick Air Force Base. After evaluating and analyzing this information, it was encoded in terms of parameters pertinent to ascertaining reliability and availability statistics, and then assembled into an appropriate database structure.
Profile of a cell test database and a corresponding reliability database
NASA Technical Reports Server (NTRS)
Brearley, George R.; Klein, Glenn C.
1992-01-01
The development of computerized control and data retrieval for aerospace cell testing affords an excellent opportunity to incorporate three specific concepts both to manage the test area and to track product performance on a real-time basis. The adoption and incorporation of precepts fostered by this total quality management (TQM) initiative are critical to us for retaining control of our business while substantially reducing the separate quality control inspection activity. Test discrepancies are all 'equally bad' in cell acceptance testing because, for example, we presently do not discriminate between 1 and 25 mV for an overvoltage condition. We must take leadership in classifying such discrepancies in order to expedite their clearance and redirect our resources toward prevention activities. The development and use of engineering alerts (or guardbanding), which more closely match our product capabilities and are toleranced tighter than the required customer specification, is paramount to managing the test unit in order to remain both quality and cost effective.
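The guardbanding precept described above can be sketched as a simple three-way classification; the specification and guardband limits below are invented for illustration:

```python
def classify_overvoltage(measured_mv, spec_limit_mv=1500.0, guardband_mv=25.0):
    """Classify a cell voltage reading against a customer spec limit and a
    tighter engineering alert (guardband) limit.

    'fail'  : exceeds the customer specification
    'alert' : within spec but inside the guardband (engineering alert)
    'pass'  : comfortably within spec
    """
    if measured_mv > spec_limit_mv:
        return "fail"
    if measured_mv > spec_limit_mv - guardband_mv:
        return "alert"
    return "pass"

for mv in (1400.0, 1480.0, 1525.0):
    print(mv, classify_overvoltage(mv))
```

The guardband turns a binary pass/fail decision into a graded one, so a 1 mV excursion past the alert limit is no longer treated the same as a 25 mV one.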
Herlin, Christian; Doucet, Jean Charles; Bigorre, Michèle; Khelifa, Hatem Cheikh; Captier, Guillaume
2013-10-01
Treacher Collins syndrome (TCS) is a severe and complex craniofacial malformation affecting the facial skeleton and soft tissues. The palate as well as the external and middle ear are also affected, but his prognosis is mainly related to neonatal airway management. Methods of zygomatico-orbital reconstruction are numerous and currently use primarily autologous bone, lyophilized cartilage, alloplastic implants, or even free flaps. This work developed a reliable "customized" method of zygomatico-orbital bony reconstruction using a generic reference model tailored to each patient. From a standard computed tomography (CT) acquisition, we studied qualitatively and quantitatively the skeleton of four individuals with TCS whose age was between 6 and 20 years. In parallel, we studied 40 controls at the same age to obtain a morphometric database of reference. Surgical simulation was carried out using validated software used in craniofacial surgery. The zygomatic hypoplasia was very important quantitatively and morphologically in all TCS individuals. Orbital involvement was mainly morphological, with volumes comparable to the controls of the same age. The control database was used to create three-dimensional computer models to be used in the manufacture of cutting guides for autologous cranial bone grafts or alloplastic implants perfectly adapted to each patient's morphology. Presurgical simulation was also used to fabricate custom positioning guides permitting a simple and reliable surgical procedure. The use of a virtual database allowed us to design a reliable and reproducible skeletal reconstruction method for this rare and complex syndrome. The use of presurgical simulation tools seem essential in this type of craniofacial malformation to increase the reliability of these uncommon and complex surgical procedures, and to ensure stable results over time. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Disaster Response and Preparedness Application: Emergency Environmental Response Tool (EERT)
NASA Technical Reports Server (NTRS)
Smoot, James; Carr, Hugh; Jester, Keith
2003-01-01
In 2000, the National Aeronautics and Space Administration (NASA) Environmental Office at the John C. Stennis Space Center (SSC) developed an Environmental Geographic Information Systems (EGIS) database. NASA had previously developed a GIS database at SSC to assist in the NASA Environmental Office's management of the Center. This GIS became the basis for the NASA-wide EGIS project, which was proposed after the applicability of the SSC database was demonstrated. Since its completion, the SSC EGIS has aided the Environmental Office with noise pollution modeling, land cover assessment, wetlands delineation, environmental hazards mapping, and critical habitat delineation for protected species. At SSC, facility management and safety officers are responsible for ensuring the physical security of the facilities, staff, and equipment as well as for responding to environmental emergencies, such as accidental releases of hazardous materials. All phases of emergency management (planning, mitigation, preparedness, and response) depend on data reliability and system interoperability from a variety of sources to determine the size and scope of the emergency operation. Because geospatial data are now available for all NASA facilities, it was suggested that this data could be incorporated into a computerized management information program to assist facility managers. The idea was that the information system could improve both the effectiveness and the efficiency of managing and controlling actions associated with disaster, homeland security, and other activities. It was decided to use SSC as a pilot site to demonstrate the efficacy of having a baseline, computerized management information system that ultimately was referred to as the Emergency Environmental Response Tool (EERT).
A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network
NASA Astrophysics Data System (ADS)
Lussana, C.; Ranci, M.; Uboldi, F.
2012-04-01
In the operational context of a local weather service, data accessibility and quality-related issues must be managed by taking into account a wide set of user needs. This work describes the structure and the design choices made for the operational implementation of a database system storing data from highly automated observing stations, metadata, and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit in ARPA Lombardia, at the same time an important QAS component and an intensive data user, has developed a database specifically aimed at: 1) providing quick access to data for operational activities and 2) ensuring data quality for real-time applications, by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of precipitation amount, temperature, wind, relative humidity, pressure, and global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and cross-validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and PHP) system, constituting an open source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts directly interacting with the MySQL database. Users and network managers can access the database through a set of web-based PHP applications.
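The Spatial Consistency Test compares each observation with a value estimated from its neighbours. A simplified stand-in is sketched below, using leave-one-out inverse-distance weighting in place of the Optimal Interpolation actually used by the ADQC; the station data and threshold are invented:

```python
def loo_estimate(stations, i, power=2.0):
    """Leave-one-out inverse-distance estimate at station i.
    stations: list of (x, y, value) tuples."""
    xi, yi, _ = stations[i]
    wsum = vsum = 0.0
    for j, (x, y, v) in enumerate(stations):
        if j == i:
            continue
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        w = 1.0 / d2 ** (power / 2.0)
        wsum += w
        vsum += w * v
    return vsum / wsum

def spatial_consistency_flags(stations, threshold):
    """Flag stations whose observation deviates from the leave-one-out
    neighbour estimate by more than threshold."""
    return [i for i in range(len(stations))
            if abs(stations[i][2] - loo_estimate(stations, i)) > threshold]

# Six temperature stations on a grid; the last reports an implausible value.
obs = [(0.0, 0.0, 10.0), (1.0, 0.0, 10.4), (0.0, 1.0, 9.8),
       (1.0, 1.0, 10.2), (2.0, 0.0, 10.1), (2.0, 1.0, 30.0)]
print(spatial_consistency_flags(obs, threshold=10.0))  # [5]
```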
Bosma, Laine; Balen, Robert M; Davidson, Erin; Jewesson, Peter J
2003-01-01
The development and integration of a personal digital assistant (PDA)-based point-of-care database into an intravenous resource nurse (IVRN) consultation service for the purposes of consultation management and service characterization are described. The IVRN team provides a consultation service 7 days a week in this 1000-bed tertiary adult care teaching hospital. No simple, reliable method for documenting IVRN patient care activity and facilitating IVRN-initiated patient follow-up evaluation was available. Implementation of a PDA database with exportability of data to statistical analysis software was undertaken in July 2001. A Palm IIIXE PDA was purchased and a three-table, 13-field database was developed using HanDBase software. During the 7-month period of data collection, the IVRN team recorded 4868 consultations for 40 patient care areas. Full analysis of service characteristics was conducted using SPSS 10.0 software. Team members adopted the new technology with few problems, and the authors now can efficiently track and analyze the services provided by their IVRN team.
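A three-table consultation database of the kind described above might look as follows; all table and field names are invented, and SQLite stands in for the HanDBase software actually used:

```python
import sqlite3

# Hypothetical sketch of a small three-table consultation database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient_area (area_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE consult_type (type_id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE consultation (
    consult_id   INTEGER PRIMARY KEY,
    area_id      INTEGER REFERENCES patient_area(area_id),
    type_id      INTEGER REFERENCES consult_type(type_id),
    consult_date TEXT,
    follow_up    INTEGER   -- 1 if nurse-initiated follow-up is needed
);
""")
conn.execute("INSERT INTO patient_area VALUES (1, 'ICU')")
conn.execute("INSERT INTO consult_type VALUES (1, 'PICC assessment')")
conn.execute("INSERT INTO consultation VALUES (1, 1, 1, '2001-07-15', 1)")

# Export-style query: consultations per patient care area.
rows = conn.execute("""
    SELECT a.name, COUNT(*) FROM consultation c
    JOIN patient_area a ON a.area_id = c.area_id GROUP BY a.name
""").fetchall()
print(rows)  # [('ICU', 1)]
```

Exporting the results of such queries to statistical software is what enabled the service characterization described in the abstract.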
Integrated management of thesis using clustering method
NASA Astrophysics Data System (ADS)
Astuti, Indah Fitri; Cahyadi, Dedy
2017-02-01
The thesis is one of the major requirements for students pursuing their bachelor degree. Finishing a thesis involves a long process, including consultation, writing the manuscript, applying the chosen method, seminar scheduling, searching for references, and appraisal by the board of mentors and examiners. Unfortunately, most students find it hard to match all the lecturers' free time so that they can sit together in a seminar room to examine the thesis. Therefore, the seminar scheduling process should be a top priority to solve. A manual mechanism for this task no longer fulfills the need. People on campus, including students, staff, and lecturers, demand a system in which all the stakeholders can interact with each other and manage the thesis process without conflicts in their timetables. A branch of computer science named Management Information Systems (MIS) could be a breakthrough in dealing with thesis management. This research applies a clustering method to distinguish categories using mathematical formulas. A system is then developed along with the method to create a well-managed tool providing the main facilities, such as seminar scheduling, consultation and review, thesis approval, assessment, and a reliable thesis database. The database plays an important role for present and future purposes.
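The abstract does not specify which clustering algorithm is used, so as an illustrative assumption the sketch below applies plain k-means to hypothetical thesis feature vectors (e.g. topic-similarity scores):

```python
def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points; initial centroids are the first k points."""
    centroids = [list(points[i]) for i in range(k)]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, (x, y) in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: (x - centroids[c][0]) ** 2
                                        + (y - centroids[c][1]) ** 2)
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members)]
    return assign

# Hypothetical feature vectors for four theses (two topic-similarity scores).
theses = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.85), (0.95, 0.9)]
print(kmeans(theses, k=2))  # [0, 0, 1, 1]
```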
Wind turbine reliability database update.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.
2009-03-01
This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a fivestep process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, Dave; Brunett, Acacia J.; Bucknor, Matthew
GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory are currently engaged in a joint effort to modernize and develop probabilistic risk assessment (PRA) techniques for advanced non-light water reactors. At a high level, the primary outcome of this project will be the development of next-generation PRA methodologies that will enable risk-informed prioritization of safety- and reliability-focused research and development, while also identifying gaps that may be resolved through additional research. A subset of this effort is the development of a reliability database (RDB) methodology to determine applicable reliability data for inclusion in the quantification of the PRA. The RDB method developed during this project seeks to satisfy the requirements of the Data Analysis element of the ASME/ANS Non-LWR PRA standard. The RDB methodology utilizes a relevancy test to examine reliability data and determine whether it is appropriate to include as part of the reliability database for the PRA. The relevancy test compares three component properties to establish the level of similarity to components examined as part of the PRA: the component function, the component failure modes, and the environment/boundary conditions of the component. The relevancy test is used to gauge the quality of data found in a variety of sources, such as advanced reactor-specific databases, non-advanced reactor nuclear databases, and non-nuclear databases. The RDB also establishes the integration of expert judgment or separate reliability analysis with past reliability data. This paper provides details on the RDB methodology and includes an example application for determining the reliability of the intermediate heat exchanger of a sodium fast reactor. The example explores a variety of reliability data sources and assesses their applicability for the PRA of interest through the use of the relevancy test.
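The relevancy test's comparison of the three component properties can be sketched as follows; the scoring scheme and all component attributes are invented for illustration, not taken from the RDB methodology itself:

```python
def relevancy_test(candidate, target):
    """Score a candidate data source against the PRA target component on the
    three properties named above: function, failure modes, and environment.
    Returns (score, fully_relevant), where score counts matching properties."""
    score = 0
    if candidate["function"] == target["function"]:
        score += 1
    # Failure-mode coverage: the candidate must report every mode of interest.
    if set(target["failure_modes"]) <= set(candidate["failure_modes"]):
        score += 1
    if candidate["environment"] == target["environment"]:
        score += 1
    return score, score == 3

# Invented example: intermediate heat exchanger of a sodium fast reactor,
# compared against a light-water-reactor heat exchanger data source.
target = {"function": "heat transfer",
          "failure_modes": ["tube leak", "blockage"],
          "environment": "liquid sodium"}
lwr_source = {"function": "heat transfer",
              "failure_modes": ["tube leak", "blockage", "fouling"],
              "environment": "pressurized water"}
print(relevancy_test(lwr_source, target))  # (2, False)
```

A partial score like this is where, per the abstract, expert judgment or separate reliability analysis would be integrated with the past data.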
Langley, Shaun A.; Messina, Joseph P.
2011-01-01
The past decade has seen an explosion in the availability of spatial data not only for researchers, but the public alike. As the quantity of data increases, the ability to effectively navigate and understand the data becomes more challenging. Here we detail a conceptual model for a spatially explicit database management system that addresses the issues raised with the growing data management problem. We demonstrate utility with a case study in disease ecology: to develop a multi-scale predictive model of African Trypanosomiasis in Kenya. International collaborations and varying technical expertise necessitate a modular open-source software solution. Finally, we address three recurring problems with data management: scalability, reliability, and security. PMID:21686072
Li, Honglan; Joh, Yoon Sung; Kim, Hyunwoo; Paek, Eunok; Lee, Sang-Won; Hwang, Kyu-Baek
2016-12-22
Proteogenomics is a promising approach for various tasks ranging from gene annotation to cancer research. Databases for proteogenomic searches are often constructed by adding peptide sequences inferred from genomic or transcriptomic evidence to reference protein sequences. Such inflation of databases has the potential to identify novel peptides. However, it also raises concerns about sensitive and reliable peptide identification. Spurious peptides included in target databases may result in underestimated false discovery rates (FDR). On the other hand, inflation of decoy databases could decrease the sensitivity of peptide identification due to the increased number of high-scoring random hits. Although several studies have addressed these issues, widely applicable guidelines for sensitive and reliable proteogenomic searches have hardly been available. To systematically evaluate the effect of database inflation in proteogenomic searches, we constructed a variety of real and simulated proteogenomic databases for yeast and human tandem mass spectrometry (MS/MS) data, respectively. Against these databases, we tested two popular database search tools with various approaches to search result validation: the target-decoy search strategy (with and without a refined scoring metric) and a mixture model-based method. The effect of separate filtering of known and novel peptides was also examined. The results from real and simulated proteogenomic searches confirmed that separate filtering increases the sensitivity and reliability of proteogenomic searches. However, no single method consistently identified the largest (or the smallest) number of novel peptides from real proteogenomic searches. We propose using a set of search result validation methods with separate filtering for sensitive and reliable identification of peptides in proteogenomic searches.
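Separate filtering applies target-decoy FDR control independently to the known and novel peptide classes. A minimal sketch of this idea, with invented scores and a simplified FDR estimate (#decoys / #targets) rather than the refined metrics evaluated in the study:

```python
def fdr_filter(psms, threshold=0.01):
    """Target-decoy FDR filtering on a list of (score, is_decoy) PSMs.
    Scans down the score-ranked list and keeps the deepest cutoff at which
    #decoys / #targets <= threshold; returns the accepted target PSMs."""
    ranked = sorted(psms, key=lambda p: -p[0])
    best_cut = None
    targets = decoys = 0
    for idx, (score, is_decoy) in enumerate(ranked):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets <= threshold:
            best_cut = idx
    if best_cut is None:
        return []
    return [p for p in ranked[:best_cut + 1] if not p[1]]

def separate_filter(known, novel, threshold=0.01):
    """Filter known and novel peptide PSMs independently, as recommended."""
    return fdr_filter(known, threshold), fdr_filter(novel, threshold)

# Invented PSM scores; True marks a decoy hit.
known = [(10, False), (9, False), (8, False), (7, True), (6, False)]
novel = [(10, False), (9, True), (8, False)]
print(separate_filter(known, novel, threshold=0.25))
```

Filtering the two classes together would let the abundant high-scoring known peptides absorb the decoy budget and pull unreliable novel peptides past the cutoff; separate filtering holds each class to the threshold on its own.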
2000-01-01
To address important problems and needed changes in online and retrospective drug utilization review (DUR) programs. Emphasis is placed on reliability of DUR criteria and the shift of traditional retrospective DUR programs toward disease management and health care outcomes. Published literature evaluating the role of online and retrospective DUR programs. Particular attention was given to studies assessing DUR criteria reliability and new interventions with retrospective DUR programs. A literature review was conducted along with an expert summary from the U.S. Pharmacopeia Drug Utilization Review Advisory Panel. Studies have revealed variations in DUR criteria that could be affecting clinical practice and patient care. Appropriate formal methodologies and use of consistent procedures in developing online prospective DUR programs and systems could help resolve these problems. Traditional retrospective DUR is also shifting to incorporate disease management and methodologies from health outcomes and pharmacoeconomics studies. Refinements are needed to improve the reliability and validity of online DUR criteria and to minimize false positive messages. Databases created as a result of DUR efforts have been used in new and innovative ways to incorporate health outcomes data and disease management interventions. Additional outcomes data, combined with quality assurance efforts, should increase the utility of DUR/disease management efforts in evaluating health systems while improving the effectiveness and efficiency of pharmacists' health care interventions.
NASA Astrophysics Data System (ADS)
Cavallari, Francesca; de Gruttola, Michele; Di Guida, Salvatore; Govi, Giacomo; Innocente, Vincenzo; Pfeiffer, Andreas; Pierro, Antonio
2011-12-01
Automatic, synchronous and reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. In this complex infrastructure, monitoring and fast detection of errors is a very challenging task. In this paper, we describe the CMS experiment's system to process and populate the condition databases and make condition data promptly available both online for the high-level trigger and offline for reconstruction. The data are automatically collected using centralized jobs or are "dropped" by the users into dedicated services (offline and online drop-box), which synchronize them and take care of writing them into the online database. They are then automatically streamed to the offline database and thus become immediately accessible offline worldwide. The condition data are managed by different users using a wide range of applications. In normal operation, the database monitor provides simple timing information and the history of all transactions for all database accounts; in the case of faults, it returns simple error messages and more complete debugging information.
Development of the updated system of city underground pipelines based on Visual Studio
NASA Astrophysics Data System (ADS)
Zhang, Jianxiong; Zhu, Yun; Li, Xiangdong
2009-10-01
Our city operates an integrated pipeline network management system built on ArcGIS Engine 9.1 as the underlying development platform, with Oracle9i as the basic database for data storage. In this system, ArcGIS SDE 9.1 serves as the spatial data engine, and the system is a synthetic management application developed with Visual Studio visual development tools. Because the pipeline update function of the original system suffered from slow updates and occasional data loss, and to ensure that the underground pipeline data can be updated conveniently and frequently in real time while remaining current and complete, we developed and added a new update module to the system. The module has powerful data update functions and supports data input, data output, and rapid bulk updates. The new module was developed with Visual Studio visual development tools and uses Access as the basic database for data storage. Graphics can be edited in AutoCAD, and the database is updated through the link between the graphics and the system. Practice shows that the update module is well compatible with the original system and that database updates are reliable and efficient.
Thermodynamic properties for arsenic minerals and aqueous species
Nordstrom, D. Kirk; Majzlan, Juraj; Königsberger, Erich; Bowell, Robert J.; Alpers, Charles N.; Jamieson, Heather E.; Nordstrom, D. Kirk; Majzlan, Juraj
2014-01-01
Quantitative geochemical calculations are not possible without thermodynamic databases and considerable advances in the quantity and quality of these databases have been made since the early days of Lewis and Randall (1923), Latimer (1952), and Rossini et al. (1952). Oelkers et al. (2009) wrote, “The creation of thermodynamic databases may be one of the greatest advances in the field of geochemistry of the last century.” Thermodynamic data have been used for basic research needs and for a countless variety of applications in hazardous waste management and policy making (Zhu and Anderson 2002; Nordstrom and Archer 2003; Bethke 2008; Oelkers and Schott 2009). The challenge today is to evaluate thermodynamic data for internal consistency, to reach a better consensus of the most reliable properties, to determine the degree of certainty needed for geochemical modeling, and to agree on priorities for further measurements and evaluations.
Franch Nadal, Josep; Mata Cases, Manel; Mauricio Puente, Dídac
2016-11-01
Type 2 diabetes mellitus is currently the most frequent chronic metabolic disease. In Spain, according to the di@bet.es study, its prevalence is 13.8% in the adult population (although it is undiagnosed in 6%). The main risk factor for type 2 diabetes mellitus is obesity. The severity of type 2 diabetes mellitus is determined not only by the presence of hyperglycaemia, but also by the coexistence of other risk factors such as hypertension or dyslipidaemia, which are often associated with the disease. Their impact on the presence of chronic diabetic complications varies: while hyperglycaemia mainly influences the presence of microvascular complications, hypertension, dyslipidaemia and smoking play a greater role in macrovascular atherosclerotic disease. One of the most powerful ways to study the epidemiology of the disease is through the use of large databases that analyse the situation in the routine clinical management of huge numbers of patients. Recently, the data provided by the e-Management Project, based on the SIDIAP database, have allowed updating of many data on the health care of diabetic persons in Catalonia. This not only allows determination of the epidemiology of the disease but is also a magnificent starting point for the design of future studies that will provide answers to more questions. However, the use of large databases is not free of certain problems, especially those concerning the reliability of registries. This article analyses some of the data obtained by the e-Management study and other Spanish epidemiological studies of equal importance. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.
NASA's computer science research program
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1983-01-01
Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Settlemyer, S.R.
1991-09-01
The Nuclear Weapons Management System combines the strengths of an expert system with the flexibility of a database management system to assist the Weapons Officer, Security Officer, and the Personnel Reliability Program Officer in the performance of administrative duties associated with the nuclear weapons programs in the United States Navy. This thesis examines the need for, and ultimately the design of, a system that will assist the Security Officer in administrative duties associated with the Shipboard Self Defense Force. This system, designed and coded utilizing dBASE IV, can be implemented as a stand-alone system. Furthermore, it interfaces with the expert system submodule that handles the PRP screening process.
Smith, W Brad; Cuenca Lara, Rubí Angélica; Delgado Caballero, Carina Edith; Godínez Valdivia, Carlos Isaías; Kapron, Joseph S; Leyva Reyes, Juan Carlos; Meneses Tovar, Carmen Lourdes; Miles, Patrick D; Oswalt, Sonja N; Ramírez Salgado, Mayra; Song, Xilong Alex; Stinson, Graham; Villela Gaytán, Sergio Armando
2018-05-21
Forests cannot be managed sustainably without reliable data to inform decisions. National Forest Inventories (NFI) tend to report national statistics, with sub-national stratification based on domestic ecological classification systems. It is becoming increasingly important to be able to report statistics on ecosystems that span international borders, as global change and globalization expand stakeholders' spheres of concern. The state of a transnational ecosystem can only be properly assessed by examining the entire ecosystem. In global forest resource assessments, it may be useful to break national statistics down by ecosystem, especially for large countries. The Inventory and Monitoring Working Group (IMWG) of the North American Forest Commission (NAFC) has begun developing a harmonized North American Forest Database (NAFD) for managing forest inventory data, enabling consistent, continental-scale forest assessment supporting ecosystem-level reporting and relational queries. The first iteration of the database contains data describing 1.9 billion ha, including 677.5 million ha of forest. Data harmonization is made challenging by the existence of definitions and methodologies tailored to suit national circumstances, emerging from each country's professional forestry development. This paper reports the methods used to synchronize three national forest inventories, starting with a small suite of variables and attributes.
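The harmonization step described above can be sketched minimally in Python as a crosswalk that maps each country's national attribute codes onto a small shared vocabulary before loading. The country keys, codes, and categories below are invented for illustration and are not taken from the NAFD itself.

```python
# Hypothetical attribute crosswalk: national land-cover codes are mapped
# onto a shared vocabulary before the records enter the common database.
CROSSWALK = {
    "CAN": {"T": "forest", "N": "non_forest"},
    "MEX": {"BOSQUE": "forest", "OTRO": "non_forest"},
    "USA": {"1": "forest", "2": "non_forest"},
}

def harmonize(country, national_code):
    """Translate a national attribute code into the shared vocabulary;
    unmapped codes are flagged rather than silently passed through."""
    try:
        return CROSSWALK[country][national_code]
    except KeyError:
        return "unresolved"

print(harmonize("MEX", "BOSQUE"))  # forest
```

Flagging unmapped codes instead of dropping them makes definitional gaps between the three inventories visible, which is the hard part of harmonization the abstract alludes to.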
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong Hoon Shin; Young Wook Lee; Young Ho Cho
2006-07-01
In the nuclear energy field there are many topics, such as dose evaluation and dose management, with which even practitioners are not fully familiar, and considerable effort has gone into acquiring the knowledge and data needed to understand them. Even where such data have been obtained, applying them to practical cases has proven more difficult. Moreover, dose evaluation programs to date have been console-type applications that are not easy for beginners to use. To overcome these difficulties, a Windows-based integrated program with database management was developed in our research lab. The program, called INSREC, consists of four subprograms: INSREC-NOM, INSREC-ACT, INSREC-MED, and INSREC-EXI. At the ICONE 11 conference, the INSREC program (ICONE-36203), which evaluates on/off-site doses of a nuclear power plant in normal operation, was introduced. The upgraded INSREC program presented at the ICONE 14 conference has three additional codes compared with the earlier version. These subprograms evaluate on/off-site doses of a nuclear power plant in accident cases, provide dose evaluation and management functions for hospitals, and offer an expert system based on knowledge of the nuclear energy and radiation fields. INSREC-NOM comprises a source term evaluation program, an atmospheric diffusion factor evaluation program, an off-site dose evaluation program, and an on-site database program. INSREC-ACT comprises an on/off-site dose evaluation program and a result analysis program, and INSREC-MED comprises a workers/patients dose database program and a dose evaluation program for treatment rooms. The final subprogram, INSREC-EXI, comprises a database searching program based on artificial intelligence, an instruction program, and FAQ/Q&A boards.
Each program was developed mainly with Visual C++ and Microsoft Access. To verify reliability, suitable reference programs were selected for comparison: the AZAP program for on/off-site dose evaluation during normal reactor operation and the Stardose program for on/off-site dose evaluation in accidents. The MCNP code was used for dose evaluation and management in the hospital. Each comparison result was acceptable in the error analysis. Based on these verification results, it was concluded that the INSREC program has acceptable reliability for dose calculation and can provide useful data for the sites. To serve INSREC to users, a server system was constructed so that users could access INSREC over the network, and user reactions were largely positive. Future work will focus on improving the user interface and providing more data to more people through database supplementation and management. (authors)
Development and construct validity of the Classroom Strategies Scale-Observer Form.
Reddy, Linda A; Fabiano, Gregory; Dudek, Christopher M; Hsu, Louis
2013-12-01
Research on progress monitoring has almost exclusively focused on student behavior and not on teacher practices. This article presents the development and validation of a new teacher observational assessment (Classroom Strategies Scale) of classroom instructional and behavioral management practices. The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented. The Classroom Strategies Scale (CSS) evidenced overall good reliability estimates, including internal consistency, interrater reliability, test-retest reliability, and freedom from item bias on important teacher demographics (age, educational degree, years of teaching experience). Confirmatory factor analyses (CFAs) of CSS data from 317 classrooms were carried out to assess the level of empirical support for (a) a four-factor (first-order) theory of teachers' instructional practices and (b) a four-factor (first-order) theory of teachers' behavior management practices. Several fit indices indicated acceptable fit of the (a) and (b) CFA models to the data, as well as acceptable fit of less parsimonious alternative CFA models that included 1 or 2 second-order factors. Information-theory-based indices generally suggested that the (a) and (b) CFA models fit better than some more parsimonious alternative CFA models that included constraints on relations of first-order factors. Overall, CFA first-order and higher order factor results support the CSS-Observer Total, Composite, and subscales. Suggestions for future measurement development efforts are outlined. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Reliability of the Defense Commissary Agency Personnel Property Database.
2000-02-18
Departments’ personal property databases. The tests were designed to validate the personal property databases. This report is the second in a series of...with the completeness of its data, and key data elements were not reliable for estimating the historical costs of real property for the Military...values of greater than $100,000. However, some of the Military Departments had problems with the completeness of their data, and key data elements
Intelligent databases assist transparent and sound economic valuation of ecosystem services.
Villa, Ferdinando; Ceroni, Marta; Krivov, Sergey
2007-06-01
Assessment and economic valuation of services provided by ecosystems to humans has become a crucial phase in environmental management and policy-making. As primary valuation studies are out of the reach of many institutions, secondary valuation or benefit transfer, where the results of previous studies are transferred to the geographical, environmental, social, and economic context of interest, is becoming increasingly common. This has brought to light the importance of environmental valuation databases, which provide reliable valuation data to inform secondary valuation with enough detail to enable the transfer of values across contexts. This paper describes the role of next-generation, intelligent databases (IDBs) in assisting the activity of valuation. Such databases employ artificial intelligence to inform the transfer of values across contexts, enforcing comparability of values and allowing users to generate custom valuation portfolios that synthesize previous studies and provide aggregated value estimates to use as a base for secondary valuation. After a general introduction, we introduce the Ecosystem Services Database, the first IDB for environmental valuation to be made available to the public, describe its functionalities and the lessons learned from its usage, and outline the remaining needs and expected future developments in the field.
NASA Astrophysics Data System (ADS)
Perry, William G.
2006-04-01
One goal of database mining is to draw unique and valid perspectives from multiple data sources. Insights that are fashioned from closely-held data stores are likely to possess a high degree of reliability. The degree of information assurance comes into question, however, when external databases are accessed, combined and analyzed to form new perspectives. ISO/IEC 17799, Information technology-Security techniques-Code of practice for information security management, can be used to establish a higher level of information assurance among disparate entities using data mining in the defense, homeland security, commercial and other civilian/commercial domains. Organizations that meet ISO/IEC information security standards have identified and assessed risks, threats and vulnerabilities and have taken significant proactive steps to meet their unique security requirements. The ISO standards address twelve domains: risk assessment and treatment, security policy, organization of information security, asset management, human resources security, physical and environmental security, communications and operations management, access control, information systems acquisition, development and maintenance, information security incident management, business continuity management, and compliance. Analysts can be relatively confident that if organizations are ISO 17799 compliant, a high degree of information assurance is likely to be a characteristic of the data sets being used. The reverse may also be true: extracting, fusing and drawing conclusions based upon databases with a low degree of information assurance may be fraught with all the hazards that come from knowingly using bad data to make decisions. Using ISO/IEC 17799 as a baseline for information assurance can help mitigate these risks.
Data management in clinical research: An overview
Krishnankutty, Binny; Bellary, Shantala; Kumar, Naveen B.R.; Moodahadu, Latha S.
2012-01-01
Clinical Data Management (CDM) is a critical phase in clinical research, which leads to generation of high-quality, reliable, and statistically sound data from clinical trials. This helps to drastically reduce the time from drug development to marketing. Team members of CDM are actively involved in all stages of a clinical trial, from inception to completion. They should have adequate process knowledge that helps maintain the quality standards of CDM processes. Various procedures in CDM, including Case Report Form (CRF) designing, CRF annotation, database designing, data entry, data validation, discrepancy management, medical coding, data extraction, and database locking, are assessed for quality at regular intervals during a trial. In the present scenario, there is an increased demand to improve CDM standards to meet regulatory requirements and stay ahead of the competition by means of faster commercialization of the product. With the implementation of regulatory-compliant data management tools, the CDM team can meet these demands. Additionally, it is becoming mandatory for companies to submit the data electronically. CDM professionals should meet appropriate expectations and set standards for data quality and also have a drive to adapt to rapidly changing technology. This article highlights the processes involved and provides the reader an overview of the tools and standards adopted as well as the roles and responsibilities in CDM. PMID:22529469
Patient handover in orthopaedics, improving safety using Information Technology.
Pearkes, Tim
2015-01-01
Good inpatient handover ensures patient safety and continuity of care. An adjunct to this is the patient list, which is routinely managed by junior doctors. These lists are routinely created and managed within Microsoft Excel or Word. Following the merger of two orthopaedic departments into a single service in a new hospital, it was felt that a number of safety issues within the handover process needed to be addressed. This quality improvement project addressed these issues through the creation and implementation of a new patient database which spanned the department, allowing trouble-free, safe, and comprehensive handover. Feedback demonstrated an improved user experience, greater reliability, continuity within the lists, and a subsequent improvement in patient safety.
The Design of Integrated Information System for High Voltage Metering Lab
NASA Astrophysics Data System (ADS)
Ma, Yan; Yang, Yi; Xu, Guangke; Gu, Chao; Zou, Lida; Yang, Feng
2018-01-01
With the development of the smart grid, intelligent and information-based management of the high-voltage metering lab has become increasingly urgent. In this paper we design an integrated information system that automates the entire workflow, from accepting instruments and performing experiments to generating reports, signing reports, and handling instrument claims. By creating a database of all calibrated instruments, using two-dimensional codes, integrating report templates in advance, establishing bookmarks, and transmitting electronic signatures online, manual procedures are greatly reduced. These techniques simplify the complex processes of account management and report transmission. After more than a year of operation, work efficiency has improved by about forty percent on average, and accuracy and data reliability are much higher as well.
NASA Astrophysics Data System (ADS)
Knosp, B.; Gangl, M.; Hristova-Veleva, S. M.; Kim, R. M.; Li, P.; Turk, J.; Vu, Q. A.
2015-12-01
The JPL Tropical Cyclone Information System (TCIS) brings together satellite, aircraft, and model forecast data from several NASA, NOAA, and other data centers to assist researchers in comparing and analyzing data and model forecasts related to tropical cyclones. The TCIS has run a near-real-time (NRT) data portal during the North Atlantic hurricane season (typically June through October) each year since 2010. Data collected by the TCIS vary by type, format, contents, and frequency and are served to the user in two ways: (1) as image overlays on a virtual globe and (2) as derived output from a suite of analysis tools. In order to support these two functions, the data must be collected and then made searchable by criteria such as date, mission, product, pressure level, and geospatial region. Creating a database architecture that is flexible enough to manage, intelligently interrogate, and ultimately present this disparate data to the user in a meaningful way has been the primary challenge. The database solution for the TCIS has been a hybrid MySQL + Solr implementation. After testing other relational database and NoSQL solutions, such as PostgreSQL and MongoDB respectively, this solution has given the TCIS the best offerings in terms of query speed and result reliability. This database solution also supports the challenging (and memory-intensive) geospatial queries that are necessary to support the analysis tools requested by users. Though hardly new technologies on their own, our implementation of MySQL + Solr had to be customized and tuned to accurately store, index, and search the TCIS data holdings. In this presentation, we will discuss how we arrived at our MySQL + Solr database architecture, why it offers us the most consistently fast and reliable results, and how it supports our front end so that we can offer users a look into our "big data" holdings.
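The hybrid routing described above can be sketched minimally: structured criteria become a parameterized MySQL WHERE clause, while geospatial and free-text criteria become a Solr query string. The field names (`mission`, `obs_date`, `location`) and query shapes here are assumptions for illustration, not the actual TCIS schema.

```python
# Hypothetical query router for a MySQL + Solr hybrid store: structured
# metadata goes to SQL, text/geospatial search goes to Solr.
def build_mysql_where(criteria):
    """Build a parameterized WHERE clause from structured criteria."""
    clauses, params = [], []
    for field in ("mission", "product", "pressure_level"):
        if field in criteria:
            clauses.append(f"{field} = %s")
            params.append(criteria[field])
    if "date_range" in criteria:
        clauses.append("obs_date BETWEEN %s AND %s")
        params.extend(criteria["date_range"])
    return " AND ".join(clauses), params

def build_solr_query(criteria):
    """Build a Solr query string for text and bounding-box search."""
    parts = []
    if "text" in criteria:
        parts.append(f'description:"{criteria["text"]}"')
    if "bbox" in criteria:  # (min_lon, min_lat, max_lon, max_lat)
        w, s, e, n = criteria["bbox"]
        parts.append(f"location:[{s},{w} TO {n},{e}]")
    return " AND ".join(parts) or "*:*"

criteria = {"mission": "GPM", "date_range": ("2015-08-01", "2015-08-31"),
            "bbox": (-80.0, 20.0, -60.0, 35.0)}
where, params = build_mysql_where(criteria)
print(where)  # mission = %s AND obs_date BETWEEN %s AND %s
print(build_solr_query(criteria))
```

Keeping the parameters separate from the WHERE string (for a prepared statement) avoids SQL injection, and sending only the geospatial part to Solr keeps the memory-heavy work out of the relational store.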
Surviving the Glut: The Management of Event Streams in Cyberphysical Systems
NASA Astrophysics Data System (ADS)
Buchmann, Alejandro
Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects imply collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de
A plan for the North American Bat Monitoring Program (NABat)
Loeb, Susan C.; Rodhouse, Thomas J.; Ellison, Laura E.; Lausen, Cori L.; Reichard, Jonathan D.; Irvine, Kathryn M.; Ingersoll, Thomas E.; Coleman, Jeremy; Thogmartin, Wayne E.; Sauer, John R.; Francis, Charles M.; Bayless, Mylea L.; Stanley, Thomas R.; Johnson, Douglas H.
2015-01-01
The purpose of the North American Bat Monitoring Program (NABat) is to create a continent-wide program to monitor bats at local to rangewide scales that will provide reliable data to promote effective conservation decisionmaking and the long-term viability of bat populations across the continent. This is an international, multiagency program. Four approaches will be used to gather monitoring data to assess changes in bat distributions and abundances: winter hibernaculum counts, maternity colony counts, mobile acoustic surveys along road transects, and acoustic surveys at stationary points. These monitoring approaches are described along with methods for identifying species recorded by acoustic detectors. Other chapters describe the sampling design, the database management system (Bat Population Database), and statistical approaches that can be used to analyze data collected through this program.
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
Reliability of intracerebral hemorrhage classification systems: A systematic review.
Rannikmäe, Kristiina; Woodfield, Rebecca; Anderson, Craig S; Charidimou, Andreas; Chiewvit, Pipat; Greenberg, Steven M; Jeng, Jiann-Shing; Meretoja, Atte; Palm, Frederic; Putaala, Jukka; Rinkel, Gabriel Je; Rosand, Jonathan; Rost, Natalia S; Strbian, Daniel; Tatlisumak, Turgut; Tsai, Chung-Fen; Wermer, Marieke Jh; Werring, David; Yeh, Shin-Joe; Al-Shahi Salman, Rustam; Sudlow, Cathie Lm
2016-08-01
Accurately distinguishing non-traumatic intracerebral hemorrhage (ICH) subtypes is important since they may have different risk factors, causal pathways, management, and prognosis. We systematically assessed the inter- and intra-rater reliability of ICH classification systems. We sought all available reliability assessments of anatomical and mechanistic ICH classification systems from electronic databases and personal contacts until October 2014. We assessed included studies' characteristics, reporting quality and potential for bias; summarized reliability with kappa value forest plots; and performed meta-analyses of the proportion of cases classified into each subtype. We included 8 of 2152 studies identified. Inter- and intra-rater reliabilities were substantial to perfect for anatomical and mechanistic systems (inter-rater kappa values: anatomical 0.78-0.97 [six studies, 518 cases], mechanistic 0.89-0.93 [three studies, 510 cases]; intra-rater kappas: anatomical 0.80-1 [three studies, 137 cases], mechanistic 0.92-0.93 [two studies, 368 cases]). Reporting quality varied but no study fulfilled all criteria and none was free from potential bias. All reliability studies were performed with experienced raters in specialist centers. Proportions of ICH subtypes were largely consistent with previous reports suggesting that included studies are appropriately representative. Reliability of existing classification systems appears excellent but is unknown outside specialist centers with experienced raters. Future reliability comparisons should be facilitated by studies following recently published reporting guidelines. © 2016 World Stroke Organization.
Installed Base Registration of Decentralised Solar Panels with Applications in Crisis Management
NASA Astrophysics Data System (ADS)
Aarsen, R.; Janssen, M.; Ramkisoen, M.; Biljecki, F.; Quak, W.; Verbree, E.
2015-08-01
In case of a calamity in the Netherlands - e.g. a dike breach - parts of the nationwide electricity network can fail. On these occasions it would be useful if decentralised energy sources in the Smart Grid could contribute to balancing out fluctuations in the energy network. Decentralised energy sources include solar energy, wind energy, combined heat and power, and biogas. In this manner, parts of the built environment that need a continuous power supply - e.g. hospitals - could be assured of that power. When a calamity happens, information about the Smart Grid is necessary to control the crisis and to ensure a shared view of the energy networks for both crisis managers and network operators. The current practice of publishing, storing, and sharing solar energy data shows a lack of reliable information about the current number, physical location, and capacity of installed decentralised photovoltaic (PV) panels in the Netherlands. This study focuses on decentralised solar energy in the form of electricity via PV panels in the Netherlands and addresses this challenge by proposing a new, reliable, and up-to-date database. The study sets out the requirements for a registration of the installed base of PV panels in the Netherlands. This new database should complement the current national voluntary registration, the Production Installation Register of Energy Data Services Netherlands (EDSN-PIR), of installed decentralised PV panel installations in the Smart Grid, and provide important information in case of a calamity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Vivek; Lybeck, Nancy J.; Pham, Binh
Research and development efforts are required to address aging and reliability concerns of the existing fleet of nuclear power plants. As most plants continue to operate beyond their license life (i.e., towards 60 or 80 years), plant components are more likely to incur age-related degradation mechanisms. To assess and manage the health of aging plant assets across the nuclear industry, the Electric Power Research Institute has developed a web-based Fleet-Wide Prognostic and Health Management (FW-PHM) Suite for diagnosis and prognosis. FW-PHM is a set of web-based diagnostic and prognostic tools and databases, comprising the Diagnostic Advisor, the Asset Fault Signature Database, the Remaining Useful Life Advisor, and the Remaining Useful Life Database, that serves as an integrated health monitoring architecture. The main focus of this paper is the implementation of prognostic models for generator step-up transformers in the FW-PHM Suite. One prognostic model discussed is based on the functional relationship between degree of polymerization (the most commonly used metric to assess the health of the winding insulation in a transformer) and furfural concentration in the insulating oil. The other model is based on thermally induced degradation of the transformer insulation. By utilizing transformer loading information, established thermal models are used to estimate the hot-spot temperature inside the transformer winding. Both models are implemented in the Remaining Useful Life Database of the FW-PHM Suite. The Remaining Useful Life Advisor uses the implemented prognostic models to estimate the remaining useful life of the paper winding insulation in the transformer based on actual oil testing and operational data.
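The DP-furfural relationship mentioned above is often expressed through published correlations such as the Chendong equation. The sketch below assumes that correlation and generic new/end-of-life DP thresholds, which may well differ from the models actually implemented in the FW-PHM Suite.

```python
import math

# Assumed here: the Chendong correlation, one widely cited option:
#     log10([2-FAL in ppm]) = 1.51 - 0.0035 * DP
def dp_from_furfural(furfural_ppm):
    """Estimate winding-paper degree of polymerization (DP) from the
    2-furfuraldehyde concentration measured in the insulating oil."""
    return (1.51 - math.log10(furfural_ppm)) / 0.0035

def remaining_life_fraction(dp, dp_new=1000.0, dp_end=200.0):
    """Crude remaining-life fraction: new kraft paper is often taken as
    DP ~1000 and end-of-life as DP ~200 (assumed thresholds)."""
    return max(0.0, min(1.0, (dp - dp_end) / (dp_new - dp_end)))

dp = dp_from_furfural(0.5)          # 0.5 ppm 2-FAL -> DP in the ~500s
frac = remaining_life_fraction(dp)  # roughly 0.4 of insulation life left
```

The appeal of such a model for a prognostics database is that it converts a routine oil test into a health index without opening the transformer.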
United States Army Medical Materiel Development Activity: 1997 Annual Report.
1997-01-01
business planning and execution information management system (Project Management Division Database (PMDD) and Product Management Database System (PMDS...MANAGEMENT • Project Management Division Database (PMDD), Product Management Database System (PMDS), and Special Users Database System: The existing...System (FMS), were investigated. New Product Managers and Project Managers were added into PMDS and PMDD. A separate division, Support, was
Whitman, Richard L.; Nevers, Meredith B.
2004-01-01
Monitoring beaches for recreational water quality is becoming more common, but few sampling designs or policy approaches have evaluated the efficacy of monitoring programs. The authors intensively sampled water for E. coli (N=1770) at 63rd Street Beach, Chicago for 6 months in 2000 in order to (1) characterize spatial-temporal trends, (2) determine between- and within-transect variation, and (3) estimate sample size requirements and determine sampling reliability. E. coli counts were highly variable within and between sampling sites but spatially and diurnally autocorrelated. Variation in counts decreased with water depth and time of day. The required number of samples was high for 70% precision around the critical closure level (i.e., 6 within-transect or 24 between-transect replicates). Since spatial replication may be cost prohibitive, composite sampling is an alternative once sources of error have been well defined. The results suggest that beach monitoring programs may be requiring too few samples to fulfill the management objectives desired. As the recreational water quality national database is developed, it is important that sampling strategies are empirically derived from a thorough understanding of the sources of variation and the reliability of collected data. Greater monitoring efficacy will yield better policy decisions, risk assessments, programmatic goals, and future usefulness of the information.
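The sample-size question above — how many replicates are needed for a target relative precision when counts are highly variable — has a common back-of-envelope form, sketched below. This is a generic formula with invented inputs, not the authors' exact calculation.

```python
import math

def samples_for_precision(cv, rel_precision, z=1.0):
    """Approximate number of replicates so that the standard error of
    the mean is within rel_precision of the mean, where cv = sd/mean.
    n = (z * cv / rel_precision)^2, rounded up. A rule-of-thumb sketch,
    not the study's method."""
    return math.ceil((z * cv / rel_precision) ** 2)

# e.g. highly variable counts (CV ~ 1.5) and 30% relative precision:
print(samples_for_precision(1.5, 0.30))  # 25
```

The quadratic dependence on CV is why bacterial counts, which commonly have CVs above 1, drive replicate requirements into the dozens, consistent with the order of magnitude the abstract reports.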
López, Yosvany; Nakai, Kenta; Patil, Ashwini
2015-01-01
HitPredict is a consolidated resource of experimentally identified, physical protein-protein interactions with confidence scores to indicate their reliability. The study of genes and their inter-relationships using methods such as network and pathway analysis requires high quality protein-protein interaction information. Extracting reliable interactions from most of the existing databases is challenging because they either contain only a subset of the available interactions, or a mixture of physical, genetic and predicted interactions. Automated integration of interactions is further complicated by varying levels of accuracy of database content and lack of adherence to standard formats. To address these issues, the latest version of HitPredict provides a manually curated dataset of 398 696 physical associations between 70 808 proteins from 105 species. Manual confirmation was used to resolve all issues encountered during data integration. For improved reliability assessment, this version combines a new score derived from the experimental information of the interactions with the original score based on the features of the interacting proteins. The combined interaction score performs better than either of the individual scores in HitPredict as well as the reliability score of another similar database. HitPredict provides a web interface to search proteins and visualize their interactions, and the data can be downloaded for offline analysis. Data usability has been enhanced by mapping protein identifiers across multiple reference databases. Thus, the latest version of HitPredict provides a significantly larger, more reliable and usable dataset of protein-protein interactions from several species for the study of gene groups. Database URL: http://hintdb.hgc.jp/htp. © The Author(s) 2015. Published by Oxford University Press.
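One simple way to combine two independent reliability scores, shown here as a noisy-OR, can be sketched as follows. This is purely an illustration of the idea of score combination; HitPredict's actual combined score is more elaborate.

```python
def combined_score(annotation_score, evidence_score):
    """Noisy-OR combination of two independent reliability scores in
    [0, 1]: the interaction is trusted if either line of evidence
    supports it. Illustrative only, not HitPredict's real formula."""
    return 1.0 - (1.0 - annotation_score) * (1.0 - evidence_score)

print(combined_score(0.6, 0.7))  # ≈ 0.88
```

A noisy-OR can only raise confidence when a second, independent line of evidence is added, which mirrors the abstract's observation that the combined score outperforms either individual score.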
Lauricella, Leticia L; Costa, Priscila B; Salati, Michele; Pego-Fernandes, Paulo M; Terra, Ricardo M
2018-06-01
Database quality measurement should be considered a mandatory step to ensure an adequate level of confidence in data used for research and quality improvement. Several metrics have been described in the literature, but no standardized approach has been established. We aimed to describe a methodological approach applied to measure the quality and inter-rater reliability of a regional multicentric thoracic surgical database (Paulista Lung Cancer Registry). Data from the first 3 years of the Paulista Lung Cancer Registry underwent an audit process with 3 metrics: completeness, consistency, and inter-rater reliability. The first 2 methods were applied to the whole data set, and the last method was calculated using 100 cases randomized for direct auditing. Inter-rater reliability was evaluated using percentage of agreement between the data collector and auditor and through calculation of Cohen's κ and intraclass correlation. The overall completeness per section ranged from 0.88 to 1.00, and the overall consistency was 0.96. Inter-rater reliability showed many variables with high disagreement (>10%). For numerical variables, intraclass correlation was a better metric than inter-rater reliability. Cohen's κ showed that most variables had moderate to substantial agreement. The methodological approach applied to the Paulista Lung Cancer Registry showed that completeness and consistency metrics did not sufficiently reflect the real quality status of a database. The inter-rater reliability associated with κ and intraclass correlation was a better quality metric than completeness and consistency metrics because it could determine the reliability of specific variables used in research or benchmark reports. This report can be a paradigm for future studies of data quality measurement. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
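Two of the metrics above, completeness and Cohen's κ for inter-rater agreement, are straightforward to compute. A minimal sketch with invented records and ratings:

```python
def completeness(records, field):
    """Fraction of records with a non-missing value for `field`."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal frequencies."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    cats = set(rater_a) | set(rater_b)
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

# Invented audit data: a registry field and collector-vs-auditor codings.
records = [{"histology": "adeno"}, {"histology": ""},
           {"histology": "squamous"}, {}]
collector = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
auditor   = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(completeness(records, "histology"))   # 0.5
print(cohens_kappa(collector, auditor))     # 0.5
```

The example makes the abstract's point concrete: the two raters agree 75% of the time, yet κ is only 0.5 once chance agreement is subtracted, which is why raw agreement percentages overstate reliability.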
Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno
2012-01-01
Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179
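For context, the Bayesian information criterion that gives BICEPS its name has, in its standard textbook form (not necessarily the paper's exact spectrum-level formulation):

```latex
\mathrm{BIC} = k \ln n - 2 \ln \hat{L}
```

where $k$ is the number of free model parameters, $n$ the number of observations, and $\hat{L}$ the maximized likelihood; among candidate explanations, lower BIC is preferred. A criterion of this kind can trade extra sequence-deviation parameters against improved spectral fit on a per-spectrum basis.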
The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi
2018-03-19
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as the central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop ecosystem to provide the necessary flexibility to reliably process, store, and aggregate $\mathcal{O}$(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
Rhinoplasty perioperative database using a personal digital assistant.
Kotler, Howard S
2004-01-01
To construct a reliable, accurate, and easy-to-use handheld computer database that facilitates the point-of-care acquisition of perioperative text and image data specific to rhinoplasty. A user-modified database (Pendragon Forms [v.3.2]; Pendragon Software Corporation, Libertyville, Ill) and graphic image program (Tealpaint [v.4.87]; Tealpaint Software, San Rafael, Calif) were used to capture text and image data, respectively, on a Palm OS (v.4.11) handheld with 8 megabytes of memory. The handheld and desktop databases were kept secure using PDASecure (v.2.0) and GoldSecure (v.3.0) (Trust Digital LLC, Fairfax, Va). The handheld data were then uploaded to a desktop database of either FileMaker Pro 5.0 (v.1) (FileMaker Inc, Santa Clara, Calif) or Microsoft Access 2000 (Microsoft Corp, Redmond, Wash). Patient data were collected from 15 patients undergoing rhinoplasty in a private practice outpatient ambulatory setting. Data integrity was assessed after 6 months' disk and hard drive storage. The handheld database was able to facilitate data collection and accurately record, transfer, and reliably maintain perioperative rhinoplasty data. Query capability allowed rapid search using a multitude of keyword search terms specific to the operative maneuvers performed in rhinoplasty. Handheld computer technology provides a method of reliably recording and storing perioperative rhinoplasty information. The handheld computer facilitates the reliable and accurate storage and query of perioperative data, assisting the retrospective review of one's own results and enhancement of surgical skills.
NASA Technical Reports Server (NTRS)
Sobue, Shin-ichi; Yoshida, Fumiyoshi; Ochiai, Osamu
1996-01-01
NASDA's new Advanced Earth Observing Satellite (ADEOS) is scheduled for launch in August, 1996. ADEOS carries 8 sensors to observe earth environmental phenomena and sends their data to NASDA, NASA, and other foreign ground stations around the world. The downlink data bit rate for ADEOS is 126 MB/s and the total volume of data is about 100 GB per day. To archive and manage such a large quantity of data with high reliability and easy accessibility it was necessary to develop a new mass storage system with a catalogue information database using advanced database management technology. The data will be archived and maintained in the Master Data Storage Subsystem (MDSS) which is one subsystem in NASDA's new Earth Observation data and Information System (EOIS). The MDSS is based on a SONY ID1 digital tape robotics system. This paper provides an overview of the EOIS system, with a focus on the Master Data Storage Subsystem and the NASDA Earth Observation Center (EOC) archive policy for earth observation satellite data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garnier, Ch.; Mailhe, P.; Sontheimer, F.
2007-07-01
Fuel performance is a key factor for minimizing operating costs in nuclear plants. One important aspect of fuel performance is fuel rod design, based upon reliable tools able to verify the safety of current fuel solutions, prevent potential issues in new core management schemes, and guide the invention of tomorrow's fuels. AREVA is developing its future global fuel rod code COPERNIC3, which is able to calculate the thermal-mechanical behavior of advanced fuel rods in nuclear plants. Some of the best practices to achieve this goal are described by reviewing the three pillars of a fuel rod code: the database, the modelling, and the computer and numerical aspects. First, the COPERNIC3 database content is described, accompanied by the tools developed to effectively exploit the data. Then an overview of the main modelling aspects is given, with emphasis on the thermal, fission gas release, and mechanical sub-models. In the last part, numerical solutions for increasing the computational performance of the code are detailed, with a presentation of software configuration management solutions. (authors)
An Evaluation Method of Equipment Reliability Configuration Management
NASA Astrophysics Data System (ADS)
Wang, Wei; Feng, Weijia; Zhang, Wei; Li, Yuan
2018-01-01
At present, many equipment development companies are aware of the great significance of reliability in equipment development. However, for lack of an effective management evaluation method, it is very difficult for an equipment development company to manage its own reliability work. An evaluation method for equipment reliability configuration management determines the reliability management capability of an equipment development company. Reliability is not only designed in; it must also be managed into a product. This paper evaluates reliability management capability using a reliability configuration capability maturity model (RCM-CMM) evaluation method.
Crane Cell Testing Support of NASA/Goddard Space Flight Center: An Update
NASA Technical Reports Server (NTRS)
Strawn, Mike; David, Jerry; Rao, Gopalakrishna M.
2001-01-01
The objectives presented in this viewgraph presentation include: 1) verify the quality and reliability of aerospace battery cells and batteries for NASA flight programs; 2) disseminate the data to develop a plan for in-orbit battery management and to design cells/batteries for future NASA spacecraft; and 3) establish a cell test database for rechargeable cells/batteries. In summary: quality EPT Ni-H2, EPT Super NiCd, and SAFT NiCd cells have been demonstrated for aerospace applications; the data have been provided to NASA Centers and other agencies for their use and application; a plan was developed and used in NASA in-orbit battery management; and a database on rechargeable cells/batteries is now available for customer use.
Mass and Reliability Source (MaRS) Database
NASA Technical Reports Server (NTRS)
Valdenegro, Wladimir
2017-01-01
The Mass and Reliability Source (MaRS) Database consolidates component mass and reliability data for all Orbital Replacement Units (ORU) on the International Space Station (ISS) into a single database. It was created to help engineers develop a parametric model that relates hardware mass and reliability. MaRS supplies relevant failure data at the lowest possible component level while providing support for risk, reliability, and logistics analysis. Random-failure data is usually linked to the ORU assembly. MaRS uses this data to identify and display the lowest possible component failure level. As seen in Figure 1, the failure point is identified to the lowest level: Component 2.1. This is useful for efficient planning of spare supplies, supporting long-duration crewed missions, allowing quicker trade studies, and streamlining diagnostic processes. MaRS is composed of information from various databases: MADS (operating hours), VMDB (indentured part lists), and ISS PART (failure data). This information is organized in Microsoft Excel and accessed through a program made in Microsoft Access (Figure 2). The focus of the Fall 2017 internship tour was to identify the components that were the root cause of failure from the given random-failure data, develop a taxonomy for the database, and attach material headings to the component list. Secondary objectives included verifying the integrity of the data in MaRS, eliminating any part discrepancies, and generating documentation for future reference. Due to the nature of the random-failure data, data mining had to be done manually, without the assistance of an automated program, to ensure positive identification.
NASA Astrophysics Data System (ADS)
Bashev, A.
2012-04-01
Currently there is an enormous number of geoscience databases, but the only users of most of them are their own developers. There are several reasons for this: incompatibility, specificity of tasks and objects, and so on. The main obstacles to wide use of geoscience databases, however, are complexity for developers and complication for users. Architectural complexity leads to high costs that block public access, and complication prevents users from understanding when and how to use a database. Only databases associated with Google Maps avoid these drawbacks, but they can hardly be called "geoscience" databases. Nevertheless, an open and simple geoscience database is needed, at least for educational purposes (see our abstract for ESSI20/EOS12). We developed such a database and a web interface for working with it, now accessible at maps.sch192.ru. In this database, a result is the value of a parameter (of any kind) at a station with a certain position, associated with metadata: the date the result was obtained, the type of station (lake, soil, etc.), and the contributor who submitted it. Each contributor has a profile, which helps users estimate the reliability of the data. Results can be shown on a Google Maps satellite image as points at their positions, colored according to the parameter value; there are default color scales, and each registered user can create their own. Results can also be extracted to a *.csv file. For both forms of output, the data can be filtered by date, object type, parameter type, area, and contributor. Data are uploaded in *.csv format: station name; latitude (dd.dddddd); longitude (ddd.dddddd); station type; parameter type; parameter value; date (yyyy-mm-dd). The contributor is recognized at login. This is the minimal set of features required to connect a parameter value with a position and view the results.
Any complicated data treatment can be carried out in other programs after extracting the filtered data to a *.csv file, which keeps the database understandable for non-experts. The database uses an open data format (*.csv) and widespread tools: PHP as the programming language, MySQL as the database management system, JavaScript for interaction with Google Maps, and jQuery UI for the user interface. The database is multilingual: association tables connect translations with the elements of the database. In total, development took about 150 hours. Several problems remain. The main one is data reliability: a proper solution would be an expert system for estimating reliability, but building such a system would take more resources than the database itself. The second is stream selection: how to select stations that are connected to one another (for example, belonging to the same water stream) and indicate their sequence. The interface is currently in English and Russian, but it can easily be translated into other languages; contact us and we will send you the list of terms and phrases for translation. Some problems have already been solved. The "same station" problem (sometimes the distance between stations is smaller than the positional error) is handled when adding a new station: the application automatically finds any existing station near that location. The problem of object and parameter types (how to treat "EC" and "electrical conductivity" as the same parameter) has also been solved using association tables. The main advantage of the database is that it is completely open: anyone can view and extract data and use it for non-commercial purposes free of charge, and registered users can contribute without payment.
We hope it will be widely used, first of all for educational purposes, but professional scientists could use it as well.
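The upload format quoted above maps naturally onto a semicolon-separated parser. The sketch below is an assumption-laden illustration (the station name, parameter, and values are invented) of validating one such row with the Python standard library:

```python
# Hedged sketch of parsing one upload row in the semicolon-separated format
# the abstract describes; field names and validation are assumptions.
import csv
import io
from datetime import date

row_text = "Lake Station 1;60.123456;031.654321;lake;EC;245.0;2011-07-15\n"

reader = csv.reader(io.StringIO(row_text), delimiter=";")
name, lat, lon, station_type, param, value, obs_date = next(reader)

record = {
    "station": name,
    "lat": float(lat),            # dd.dddddd
    "lon": float(lon),            # ddd.dddddd
    "station_type": station_type, # e.g. lake, soil
    "parameter": param,           # an alias table could map "EC" to
                                  # "electrical conductivity"
    "value": float(value),
    "date": date.fromisoformat(obs_date),
}

# Basic sanity checks before accepting the row into the database.
assert -90 <= record["lat"] <= 90 and -180 <= record["lon"] <= 180
```

Keeping the interchange format this simple is what lets contributors prepare uploads in any spreadsheet program and lets non-experts post-process extracted data elsewhere.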
First year progress report on the development of the Texas flexible pavement database.
DOT National Transportation Integrated Search
2008-01-01
Comprehensive and reliable databases are essential for the development, validation, and calibration of any pavement : design and rehabilitation system. These databases should include material properties, pavement structural : characteristics, highway...
NASA Technical Reports Server (NTRS)
Singh, M.
1999-01-01
Ceramic matrix composite (CMC) components are being designed, fabricated, and tested for a number of high temperature, high performance applications in aerospace and ground based systems. The critical need for and the role of reliable and robust databases for the design and manufacturing of ceramic matrix composites are presented. A number of issues related to engineering design, manufacturing technologies, and joining and attachment technologies are also discussed. Examples of various ongoing activities in the areas of composite databases, designing to codes and standards, and design for manufacturing are given.
Keith B. Aubry; Catherine M. Raley; Kevin S. McKelvey
2017-01-01
The availability of spatially referenced environmental data and species occurrence records in online databases enables practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated...
TAMEE: data management and analysis for tissue microarrays.
Thallinger, Gerhard G; Baumgartner, Kerstin; Pirklbauer, Martin; Uray, Martina; Pauritsch, Elke; Mehes, Gabor; Buck, Charles R; Zatloukal, Kurt; Trajanoski, Zlatko
2007-03-07
With the introduction of tissue microarrays (TMAs) researchers can investigate gene and protein expression in tissues on a high-throughput scale. TMAs generate a wealth of data calling for extended, high level data management. Enhanced data analysis and systematic data management are required for traceability and reproducibility of experiments and provision of results in a timely and reliable fashion. Robust and scalable applications have to be utilized, which allow secure data access, manipulation and evaluation for researchers from different laboratories. TAMEE (Tissue Array Management and Evaluation Environment) is a web-based database application for the management and analysis of data resulting from the production and application of TMAs. It facilitates storage of production and experimental parameters, of images generated throughout the TMA workflow, and of results from core evaluation. Database content consistency is achieved using structured classifications of parameters. This allows the extraction of high quality results for subsequent biologically-relevant data analyses. Tissue cores in the images of stained tissue sections are automatically located and extracted and can be evaluated using a set of predefined analysis algorithms. Additional evaluation algorithms can be easily integrated into the application via a plug-in interface. Downstream analysis of results is facilitated via a flexible query generator. We have developed an integrated system tailored to the specific needs of research projects using high density TMAs. It covers the complete workflow of TMA production, experimental use and subsequent analysis. The system is freely available for academic and non-profit institutions from http://genome.tugraz.at/Software/TAMEE.
An Introduction to Database Structure and Database Machines.
ERIC Educational Resources Information Center
Detweiler, Karen
1984-01-01
Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…
Big data and high-performance analytics in structural health monitoring for bridge management
NASA Astrophysics Data System (ADS)
Alampalli, Sharada; Alampalli, Sandeep; Ettouney, Mohammed
2016-04-01
Structural Health Monitoring (SHM) can be a vital tool for effective bridge management. Combining large data sets from multiple sources to create a data-driven decision-making framework is crucial for the success of SHM. This paper presents a big data analytics framework that combines multiple data sets correlated with functional relatedness to convert data into actionable information that empowers risk-based decision-making. The integrated data environment incorporates near real-time streams of semi-structured data from remote sensors, historical visual inspection data, and observations from structural analysis models to monitor, assess, and manage risks associated with aging bridge inventories. Accelerated processing of the data sets is made possible by four technologies: cloud computing, relational database processing, support from a NoSQL database, and in-memory analytics. The framework is being validated on a railroad corridor that can be subjected to multiple hazards. The framework enables computation of reliability indices for critical bridge components and individual bridge spans. In addition, the framework includes a risk-based decision-making process that enumerates the costs and consequences of poor bridge performance at span and network levels when rail networks are exposed to natural hazard events such as floods and earthquakes. Big data and high-performance analytics enable insights that assist bridge owners in addressing problems faster.
Vanhoorne, Bart; Decock, Wim; Vranken, Sofie; Lanssens, Thomas; Dekeyzer, Stefanie; Verfaille, Kevin; Horton, Tammy; Kroh, Andreas; Hernandez, Francisco; Mees, Jan
2018-01-01
The World Register of Marine Species (WoRMS) celebrated its 10th anniversary in 2017. WoRMS is a unique database: there is no comparable global database for marine species, which is driven by a large, global expert community, is supported by a Data Management Team and can rely on a permanent host institute, dedicated to keeping WoRMS online. Over the past ten years, the content of WoRMS has grown steadily, and the system currently contains more than 242,000 accepted marine species. WoRMS has not yet reached completeness: approximately 2,000 newly described species per year are added, and editors also enter the remaining missing older names–both accepted and unaccepted–an effort amounting to approximately 20,000 taxon name additions per year. WoRMS is used extensively, through different channels, indicating that it is recognized as a high-quality database on marine species information. It is updated on a daily basis by its Editorial Board, which currently consists of 490 taxonomic and thematic experts located around the world. Owing to its unique qualities, WoRMS has become a partner in many large-scale initiatives including OBIS, LifeWatch and the Catalogue of Life, where it is recognized as a high-quality and reliable source of information for marine taxonomy. PMID:29624577
Mobile Location-Based Services for Trusted Information in Disaster Management
NASA Astrophysics Data System (ADS)
Ragia, Lemonia; Deriaz, Michel; Seigneur, Jean-Marc
The goal of the present chapter is to provide location-based services for disaster management. The application involves services related to the safety of people during an unexpected event. The current prototype is implemented for a specific disaster-management issue: road traffic control. Users can submit requests to the system via cell phone or the Internet and receive an answer on a display or in textual form. The data are held in a central database, and every user can input data via virtual tags. The system is based on spatial messages, which can be sent from any user to any other within a certain distance. In this way all the users, rather than a single separate source, provide the necessary information about a dangerous situation. To avoid contamination problems, we use trust security to check input to the system and a trust engine model to provide information of considerable reliability.
Applications of GIS and database technologies to manage a Karst Feature Database
Gao, Y.; Tipping, R.G.; Alexander, E.C.
2006-01-01
This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to develop GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.
Intermediate Palomar Transient Factory: Realtime Image Subtraction Pipeline
Cao, Yi; Nugent, Peter E.; Kasliwal, Mansi M.
2016-09-28
A fast-turnaround pipeline for realtime data reduction plays an essential role in discovering and permitting follow-up observations of young supernovae and fast-evolving transients in modern time-domain surveys. In this paper, we present the realtime image subtraction pipeline in the intermediate Palomar Transient Factory. By using high-performance computing, efficient databases, and machine-learning algorithms, this pipeline manages to reliably deliver transient candidates within 10 minutes of images being taken. Our experience in using high-performance computing resources to process big data in astronomy serves as a trailblazer for dealing with data from large-scale time-domain facilities in the near future.
Development and application of basis database for materials life cycle assessment in china
NASA Astrophysics Data System (ADS)
Li, Xiaoqing; Gong, Xianzheng; Liu, Yu
2017-03-01
Since materials life cycle assessment (MLCA) is a data-intensive method, high-quality environmental burden data is an important prerequisite for carrying it out, and the reliability of the data directly influences the reliability of the assessment results and their usefulness in application. Building a Chinese MLCA database therefore provides the basic data and technical support for carrying out and improving LCA practice. This paper first introduces recent progress on databases related to materials life cycle assessment research and development. Second, following the requirements of the ISO 14040 series of standards, the framework and main datasets of the materials life cycle assessment database are studied. Third, an MLCA data platform based on big data is developed. Finally, future research directions are proposed and discussed.
Designing Reliable Cohorts of Cardiac Patients across MIMIC and eICU
Chronaki, Catherine; Shahin, Abdullah; Mark, Roger
2016-01-01
The design of the patient cohort is an essential and fundamental part of any clinical patient study. Knowledge of the Electronic Health Records, underlying Database Management System, and the relevant clinical workflows are central to an effective cohort design. However, with technical, semantic, and organizational interoperability limitations, the database queries associated with a patient cohort may need to be reconfigured in every participating site. i2b2 and SHRINE advance the notion of patient cohorts as first class objects to be shared, aggregated, and recruited for research purposes across clinical sites. This paper reports on initial efforts to assess the integration of Medical Information Mart for Intensive Care (MIMIC) and Philips eICU, two large-scale anonymized intensive care unit (ICU) databases, using standard terminologies, i.e. LOINC, ICD9-CM and SNOMED-CT. Focus of this work is lab and microbiology observations and key demographics for patients with a primary cardiovascular ICD9-CM diagnosis. Results and discussion reflecting on reference core terminology standards, offer insights on efforts to combine detailed intensive care data from multiple ICUs worldwide. PMID:27774488
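A cohort definition like "patients with a primary cardiovascular ICD9-CM diagnosis" reduces to a code-range test once the records are accessible. The sketch below is an illustration under invented record layouts (MIMIC and eICU store diagnoses differently), using the standard ICD-9-CM circulatory-system chapter range 390-459:

```python
# Hedged sketch: selecting admissions whose primary diagnosis falls in the
# ICD-9-CM circulatory-system chapter (codes 390-459). Record layout is
# invented for illustration; real MIMIC/eICU tables differ.

def is_cardiovascular(icd9_code: str) -> bool:
    """True if a numeric ICD-9-CM code is in the circulatory chapter."""
    head = icd9_code.split(".")[0]          # chapter test uses the 3-digit root
    return head.isdigit() and 390 <= int(head) <= 459

admissions = [
    {"id": 1, "primary_dx": "410.71"},      # acute myocardial infarction
    {"id": 2, "primary_dx": "486"},         # pneumonia (excluded)
    {"id": 3, "primary_dx": "428.0"},       # congestive heart failure
]

cohort = [a["id"] for a in admissions if is_cardiovascular(a["primary_dx"])]
```

Keeping the inclusion rule as one small pure function is what makes a cohort definition portable across sites, which is the interoperability point the abstract is making.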
Negative Effects of Learning Spreadsheet Management on Learning Database Management
ERIC Educational Resources Information Center
Vágner, Anikó; Zsakó, László
2015-01-01
A lot of students learn spreadsheet management before database management. Their similarities can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…
Khorrami, F; Ahmadi, M; Alizadeh, A; Roozbeh, N; Mohseni, S
2015-01-01
Introduction: Given the ever-increasing importance and value of information, providing management with a reliable information system, which can facilitate decision-making regarding planning, organization, and control, is vitally important. This study aimed to analyze and evaluate the information needs of medical equipment offices. Methods: This descriptive, applied, cross-sectional study was carried out in 2010. The population of the study included the managers of statistics and medical records at the offices of the vice-chancellor for treatment in 39 medical universities in Iran. Data were collected using structured questionnaires. With regard to the different approaches to designing information systems, sampling was done by two methods: BSP (based on processes of the job description) and CSF (based on critical success factors). The data were analyzed with SPSS-16. Results: Our study showed that 41% of information needs were critical success factors of office managers. The managers' first priority was "the number of beds and bed occupancy in hospitals". Of 29 identified information needs, 62% were initial information needs of managers (from the managers' viewpoint). Of all needs, 4% were met through forms, 14% through both forms and databases, 11% through the web site, and 71% had no source (forms, databases, or web site). Conclusion: Since 71% of the information needs of medical equipment office managers had no information source, the development of an information system in these offices seems necessary. Despite the important role of users in designing information systems (identifying 62% of information needs), other scientific methods also need to be utilized in designing such systems. PMID:28255389
Thermal Management and Packaging Reliability (Text Version) |
Transportation Research | NREL. Learn how NREL's thermal management and packaging reliability research is "Boosting Thermal Management & Reliability of Vehicle Power Electronics" to deliver better power electronics.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-23
... Services, Inc., Corporate/EIT/CTO Database Management Division, Hartford, CT; Notice of Negative... Services, Inc., Corporate/EIT/CTO Database Management Division, Hartford, Connecticut (The Hartford, Corporate/EIT/CTO Database Management Division). The negative determination was issued on August 19, 2011...
CEO Sites Mission Management System (SMMS)
NASA Technical Reports Server (NTRS)
Trenchard, Mike
2014-01-01
Late in fiscal year 2011, the Crew Earth Observations (CEO) team was tasked to upgrade its science site database management tool, which at the time was integrated with the Automated Mission Planning System (AMPS) originally developed for Earth Observations mission planning in the 1980s. Although AMPS had been adapted and was reliably used by CEO for International Space Station (ISS) payload operations support, the database structure was dated, and the compiler required for modifications would not be supported in the Windows 7 64-bit operating system scheduled for implementation the following year. The Sites Mission Management System (SMMS) is now the tool used by CEO to manage a heritage Structured Query Language (SQL) database of more than 2,000 records for Earth science sites. SMMS is a carefully designed and crafted in-house software package with complete and detailed help files available for the user and meticulous internal documentation for future modifications. It was delivered in February 2012 for test and evaluation. Following acceptance, it was implemented for CEO mission operations support in April 2012. The database spans the period from the earliest systematic requests for astronaut photography during the shuttle era to current ISS mission support of the CEO science payload. Besides logging basic image information (site names, locations, broad application categories, and mission requests), the upgraded database management tool now tracks dates of creation, modification, and activation; imagery acquired in response to requests; the status and location of ancillary site information; and affiliations with studies, their sponsors, and collaborators. SMMS was designed to facilitate overall mission planning in terms of site selection and activation and provide the necessary site parameters for the Satellite Tool Kit (STK) Integrated Message Production List Editor (SIMPLE), which is used by CEO operations to perform daily ISS mission planning. 
The CEO team uses the SMMS for three general functions - database queries of content and status, individual site creation and updates, and mission planning. The CEO administrator of the science site database is able to create or modify the content of sites and activate or deactivate them based on the requirements of the sponsors. The administrator supports and implements ISS mission planning by assembling, reporting, and activating mission-specific site selections for management; deactivating sites as requirements are met; and creating new sites, such as International Charter sites for disasters, as circumstances warrant. In addition to the above CEO internal uses, when site planning for a specific ISS mission is complete and approved, the SMMS can produce and export those essential site database elements for the mission into XML format for use by onboard Earth-location systems, such as Worldmap. The design, development, and implementation of the SMMS resulted in a superior database management system for CEO science sites by focusing on the functions and applications of the database alone instead of integrating the database with the multipurpose configuration of the AMPS. Unlike the AMPS, it can function and be modified within the existing Windows 7 environment. The functions and applications of the SMMS were expanded to accommodate more database elements, report products, and a streamlined interface for data entry and review. A particularly elegant enhancement in data entry was the integration of the Google Earth application for the visual display and definition of site coordinates for site areas defined by multiple coordinates. Transfer between the SMMS and Google Earth is accomplished with a Keyhole Markup Language (KML) expression of geographic data (see figures 3 and 4). 
Site coordinates may be entered into the SMMS panel directly for display in Google Earth, or the coordinates may be defined on the Google Earth display as a mouse-controlled polygonal definition and transferred back into the SMMS as KML input. This significantly reduces the possibility of errors in coordinate entries and provides visualization of the scale of the site being defined. CEO now has a powerful tool for managing and defining sites on the Earth's surface as targets for both astronaut photography and other onboard remote sensing systems. It can also record and track results by sponsor, collaborator, or type of study.
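The SMMS-to-Google-Earth round trip described above hinges on expressing a site polygon as KML. The sketch below is a minimal, hypothetical illustration of that encoding (the site name and coordinates are invented; the actual SMMS KML fields are not documented here):

```python
# Build a minimal KML Placemark for a site polygon, the kind of payload
# exchanged between a site database and Google Earth.
def site_to_kml(name, coords):
    """coords: list of (lon, lat) pairs; a KML LinearRing must be
    closed, so the first vertex is repeated at the end."""
    ring = coords + [coords[0]]
    coord_str = " ".join(f"{lon},{lat},0" for lon, lat in ring)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Placemark>'
        f"<name>{name}</name><Polygon><outerBoundaryIs><LinearRing>"
        f"<coordinates>{coord_str}</coordinates>"
        "</LinearRing></outerBoundaryIs></Polygon></Placemark></kml>"
    )

kml = site_to_kml("Example Site", [(36.8, -1.3), (36.9, -1.3), (36.9, -1.2)])
print(kml)
```

The same string, saved with a `.kml` extension, opens directly in Google Earth, which is what makes this format convenient as a two-way transfer medium.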
Barbara, Angela M; Dobbins, Maureen; Brian Haynes, R; Iorio, Alfonso; Lavis, John N; Raina, Parminder; Levinson, Anthony J
2017-07-11
The objective of this work was to provide easy access to reliable health information based on good quality research that will help health care professionals to learn what works best for seniors to stay as healthy as possible, manage health conditions and build supportive health systems. This will help meet the demands of our aging population that clinicians provide high quality care for older adults, that public health professionals deliver disease prevention and health promotion strategies across the life span, and that policymakers address the economic and social need to create a robust health system and a healthy society for all ages. The McMaster Optimal Aging Portal's (Portal) professional bibliographic database contains high quality scientific evidence about optimal aging specifically targeted to clinicians, public health professionals and policymakers. The database content comes from three information services: McMaster Premium LiteratUre Service (MacPLUS™), Health Evidence™ and Health Systems Evidence. The Portal is continually updated, freely accessible online, easily searchable, and provides email-based alerts when new records are added. The database is being continually assessed for value, usability and use. A number of improvements are planned, including French language translation of content, increased linkages between related records within the Portal database, and inclusion of additional types of content. While this article focuses on the professional database, the Portal also houses resources for patients, caregivers and the general public, which may also be of interest to geriatric practitioners and researchers.
2014-01-01
Protein biomarkers offer major benefits for diagnosis and monitoring of disease processes. Recent advances in protein mass spectrometry make it feasible to use this very sensitive technology to detect and quantify proteins in blood. To explore the potential of blood biomarkers, we conducted a thorough review to evaluate the reliability of data in the literature and to determine the spectrum of proteins reported to exist in blood with a goal of creating a Federated Database of Blood Proteins (FDBP). A unique feature of our approach is the use of a SQL database for all of the peptide data; the power of the SQL database combined with standard informatic algorithms such as BLAST and the statistical analysis system (SAS) allowed the rapid annotation and analysis of the database without the need to create special programs to manage the data. Our mathematical analysis and review shows that in addition to the usual secreted proteins found in blood, there are many reports of intracellular proteins and good agreement on transcription factors, DNA remodelling factors in addition to cellular receptors and their signal transduction enzymes. Overall, we have catalogued about 12,130 proteins identified by at least one unique peptide, and of these 3858 have 3 or more peptide correlations. The FDBP with annotations should facilitate testing blood for specific disease biomarkers. PMID:24476026
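The review's ≥3-unique-peptide threshold is a natural SQL aggregation. A minimal sketch with a hypothetical schema (table and column names are invented for illustration, using SQLite rather than the authors' server):

```python
import sqlite3

# Store protein-peptide matches and count proteins supported by three or
# more distinct peptides, the reliability cut used in the review.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE peptide_match (protein TEXT, peptide TEXT)")
con.executemany(
    "INSERT INTO peptide_match VALUES (?, ?)",
    [("ALB", "PEP1"), ("ALB", "PEP2"), ("ALB", "PEP3"),
     ("TF", "PEP4"), ("TF", "PEP5"),
     ("APOA1", "PEP6")],
)
rows = con.execute(
    """SELECT protein, COUNT(DISTINCT peptide) AS n
       FROM peptide_match
       GROUP BY protein
       HAVING n >= 3"""
).fetchall()
print(rows)  # [('ALB', 3)]
```

Keeping all peptide evidence in one relational table is what lets standard tools (BLAST annotation, SAS analysis) be joined in without bespoke data-management code, which is the point the abstract makes.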
Supporting reputation based trust management enhancing security layer for cloud service models
NASA Astrophysics Data System (ADS)
Karthiga, R.; Vanitha, M.; Sumaiya Thaseen, I.; Mangaiyarkarasi, R.
2017-11-01
In the existing system, trust between cloud providers and consumers is inadequate for establishing the service level agreement, even though consumer feedback is a good basis for assessing the overall reliability of cloud services. Investigators have recognized that trust can be managed and security provided based on feedback collected from participants. This work presents a face recognition system that identifies the user effectively: an image-comparison algorithm captures the user's face at registration time and stores it in a database; at login, the captured face is compared with the sample image already stored, and if the two images match, the user is authenticated. When confidential data are outsourced to the cloud, data owners become concerned about its confidentiality, and encrypting data before outsourcing is regarded as the principal means of keeping user data private from the cloud server. The data are therefore secured with the AES algorithm. Symmetric-key algorithms use a shared key, so keeping the data secret requires keeping this key secret; only a user holding the key can decrypt the data.
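The symmetric-key point above, that encryption and decryption share one key, so the key itself is the secret, can be illustrated without a third-party AES library. The sketch below uses a SHA-256 counter-mode keystream purely as a runnable stand-in for AES; it is not AES, and production code should use a vetted AES implementation:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Derive a keystream from the shared key with SHA-256 in counter
    # mode, then XOR it with the data. As in AES-CTR, encryption and
    # decryption are the same operation with the same key.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = b"shared-secret-key"           # hypothetical shared key
ct = keystream_xor(key, b"face template #42")
assert keystream_xor(key, ct) == b"face template #42"       # same key decrypts
assert keystream_xor(b"wrong key", ct) != b"face template #42"
```

The two assertions capture the abstract's claim: the holder of the shared key recovers the plaintext, and anyone without it does not.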
Monitoring outcomes with relational databases: does it improve quality of care?
Clemmer, Terry P
2004-12-01
There are 3 key ingredients in improving the quality of medical care: 1) using a scientific process of improvement, 2) executing the process at the lowest possible level in the organization, and 3) measuring the results of any change reliably. Relational databases, when used within these guidelines, are of great value in these efforts if they contain reliable information that is pertinent to the project and are used in a scientific process of quality improvement by a front-line team. Unfortunately, the data are frequently unreliable and/or not pertinent to the local process and are used by persons at very high levels in the organization without a scientific process and without reliable measurement of the outcome. Under these circumstances the effectiveness of relational databases in improving care is marginal at best, frequently wasteful, and potentially harmful. This article explores examples of these concepts.
TWRS technical baseline database manager definition document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acree, C.D.
1997-08-13
This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager.
Gething, Peter W; Noor, Abdisalan M; Gikandi, Priscilla W; Ogara, Esther A A; Hay, Simon I; Nixon, Mark S; Snow, Robert W; Atkinson, Peter M
2006-06-01
Reliable and timely information on disease-specific treatment burdens within a health system is critical for the planning and monitoring of service provision. Health management information systems (HMIS) exist to address this need at national scales across Africa but are failing to deliver adequate data because of widespread underreporting by health facilities. Faced with this inadequacy, vital public health decisions often rely on crudely adjusted regional and national estimates of treatment burdens. This study has taken the example of presumed malaria in outpatients within the largely incomplete Kenyan HMIS database and has defined a geostatistical modelling framework that can predict values for all data that are missing through space and time. The resulting complete set can then be used to define treatment burdens for presumed malaria at any level of spatial and temporal aggregation. Validation of the model has shown that these burdens are quantified to an acceptable level of accuracy at the district, provincial, and national scale. The modelling framework presented here provides, to our knowledge for the first time, reliable information from imperfect HMIS data to support evidence-based decision-making at national and sub-national levels.
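The paper's geostatistical framework models correlation through space and time; a far simpler toy version of the underlying idea, predicting a facility's missing monthly counts from the nearest reported months on either side, can be sketched as follows (the case counts are invented, and this linear fill is an illustration, not the authors' model):

```python
# Toy gap-filling (not the paper's geostatistical model): replace each
# missing monthly count with the mean of the nearest reported values
# before and after it in the original series.
def fill_missing(series):
    out = list(series)
    for i, v in enumerate(series):
        if v is None:
            left = next((series[j] for j in range(i - 1, -1, -1)
                         if series[j] is not None), None)
            right = next((series[j] for j in range(i + 1, len(series))
                          if series[j] is not None), None)
            vals = [x for x in (left, right) if x is not None]
            out[i] = sum(vals) / len(vals) if vals else None
    return out

monthly = [120, None, 160, 150, None, None, 90]  # hypothetical case counts
print(fill_missing(monthly))
```

A real implementation would borrow strength across neighbouring facilities as well as months and attach uncertainty to each prediction, which is what makes the geostatistical approach suitable for decision support.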
Gething, Peter W; Noor, Abdisalan M; Gikandi, Priscilla W; Ogara, Esther A. A; Hay, Simon I; Nixon, Mark S; Snow, Robert W; Atkinson, Peter M
2006-01-01
Background Reliable and timely information on disease-specific treatment burdens within a health system is critical for the planning and monitoring of service provision. Health management information systems (HMIS) exist to address this need at national scales across Africa but are failing to deliver adequate data because of widespread underreporting by health facilities. Faced with this inadequacy, vital public health decisions often rely on crudely adjusted regional and national estimates of treatment burdens. Methods and Findings This study has taken the example of presumed malaria in outpatients within the largely incomplete Kenyan HMIS database and has defined a geostatistical modelling framework that can predict values for all data that are missing through space and time. The resulting complete set can then be used to define treatment burdens for presumed malaria at any level of spatial and temporal aggregation. Validation of the model has shown that these burdens are quantified to an acceptable level of accuracy at the district, provincial, and national scale. Conclusions The modelling framework presented here provides, to our knowledge for the first time, reliable information from imperfect HMIS data to support evidence-based decision-making at national and sub-national levels. PMID:16719557
[Quality management and participation into clinical database].
Okubo, Suguru; Miyata, Hiroaki; Tomotaki, Ai; Motomura, Noboru; Murakami, Arata; Ono, Minoru; Iwanaka, Tadashi
2013-07-01
Quality management is necessary for establishing useful clinical database in cooperation with healthcare professionals and facilities. The ways of management are 1) progress management of data entry, 2) liaison with database participants (healthcare professionals), and 3) modification of data collection form. In addition, healthcare facilities are supposed to consider ethical issues and information security for joining clinical databases. Database participants should check ethical review boards and consultation service for patients.
Tsourougiannis, Dimitrios
2017-01-01
Background : Cost-containment initiatives are re-shaping the pharmaceutical business environment and affecting market access as well as pricing and reimbursement decisions. Effective price management procedures are too complex to accomplish manually. Prior to February 2013, price management within Astellas Pharma Europe Ltd was done manually using an Excel database. The system was labour intensive, slow to update, and prone to error. An innovative web-based pricing information management system was developed to address the shortcomings of the previous system. Development : A secure web-based system for submitting, reviewing and approving pricing requests was designed to: track all pricing applications and approval status; update approved pricing information automatically; provide fixed and customizable reports of pricing information; collect pricing and reimbursement rules from each country; validate pricing and reimbursement rules monthly. Several sequential phases of development emphasized planning, time schedules, target dates, budgets and implementation of the entire system. A test system was used to pilot the electronic (e)-pricing system with three affiliates (four users) in February 2013. Outcomes : The web-based system was introduced in March 2013, currently has about 227 active users globally and comprises more than 1000 presentations of 150 products. The overall benefits of switching from a manual to an e-pricing system were immediate and highly visible in terms of efficiency, transparency, reliability and compliance. Conclusions : The e-pricing system has improved the efficiency, reliability, compliance, transparency and ease of access to multinational drug pricing and approval information.
Tsourougiannis, Dimitrios
2017-01-01
ABSTRACT Background: Cost-containment initiatives are re-shaping the pharmaceutical business environment and affecting market access as well as pricing and reimbursement decisions. Effective price management procedures are too complex to accomplish manually. Prior to February 2013, price management within Astellas Pharma Europe Ltd was done manually using an Excel database. The system was labour intensive, slow to update, and prone to error. An innovative web-based pricing information management system was developed to address the shortcomings of the previous system. Development: A secure web-based system for submitting, reviewing and approving pricing requests was designed to: track all pricing applications and approval status; update approved pricing information automatically; provide fixed and customizable reports of pricing information; collect pricing and reimbursement rules from each country; validate pricing and reimbursement rules monthly. Several sequential phases of development emphasized planning, time schedules, target dates, budgets and implementation of the entire system. A test system was used to pilot the electronic (e)-pricing system with three affiliates (four users) in February 2013. Outcomes: The web-based system was introduced in March 2013, currently has about 227 active users globally and comprises more than 1000 presentations of 150 products. The overall benefits of switching from a manual to an e-pricing system were immediate and highly visible in terms of efficiency, transparency, reliability and compliance. Conclusions: The e-pricing system has improved the efficiency, reliability, compliance, transparency and ease of access to multinational drug pricing and approval information. PMID:28740622
Defining and assessing professional competence.
Epstein, Ronald M; Hundert, Edward M
2002-01-09
Current assessment formats for physicians and trainees reliably test core knowledge and basic skills. However, they may underemphasize some important domains of professional medical practice, including interpersonal skills, lifelong learning, professionalism, and integration of core knowledge into clinical practice. To propose a definition of professional competence, to review current means for assessing it, and to suggest new approaches to assessment. We searched the MEDLINE database from 1966 to 2001 and reference lists of relevant articles for English-language studies of reliability or validity of measures of competence of physicians, medical students, and residents. We excluded articles of a purely descriptive nature, duplicate reports, reviews, and opinions and position statements, which yielded 195 relevant citations. Data were abstracted by 1 of us (R.M.E.). Quality criteria for inclusion were broad, given the heterogeneity of interventions, complexity of outcome measures, and paucity of randomized or longitudinal study designs. We generated an inclusive definition of competence: the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served. Aside from protecting the public and limiting access to advanced training, assessments should foster habits of learning and self-reflection and drive institutional change. Subjective, multiple-choice, and standardized patient assessments, although reliable, underemphasize important domains of professional competence: integration of knowledge and skills, context of care, information management, teamwork, health systems, and patient-physician relationships. Few assessments observe trainees in real-life situations, incorporate the perspectives of peers and patients, or use measures that predict clinical outcomes. 
In addition to assessments of basic skills, new formats that assess clinical reasoning, expert judgment, management of ambiguity, professionalism, time management, learning strategies, and teamwork promise a multidimensional assessment while maintaining adequate reliability and validity. Institutional support, reflection, and mentoring must accompany the development of assessment programs.
A manual for a laboratory information management system (LIMS) for light stable isotopes
Coplen, Tyler B.
1997-01-01
The reliability and accuracy of isotopic data can be improved by utilizing database software to (i) store information about samples, (ii) store the results of mass spectrometric isotope-ratio analyses of samples, (iii) calculate analytical results using standardized algorithms stored in a database, (iv) normalize stable isotopic data to international scales using isotopic reference materials, and (v) generate multi-sheet paper templates for convenient sample loading of automated mass-spectrometer sample preparation manifolds. Such a database program is presented herein. Major benefits of this system include (i) an increase in laboratory efficiency, (ii) reduction in the use of paper, (iii) reduction in workload due to the elimination or reduction of retyping of data by laboratory personnel, and (iv) decreased errors in data reported to sample submitters. Such a database provides a complete record of when and how often laboratory reference materials have been analyzed and provides a record of what correction factors have been used through time. It provides an audit trail for stable isotope laboratories. Since the original publication of the manual for LIMS for Light Stable Isotopes, the isotopes ³H, ³He, and ¹⁴C, and the chlorofluorocarbons (CFCs), CFC-11, CFC-12, and CFC-113, have been added to this program.
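Step (iv) above, normalizing raw delta values to an international scale using isotopic reference materials, is in essence a two-point linear rescaling. A sketch, using the accepted VSMOW (0‰) and SLAP (-428‰) values on the δ²H scale with invented raw measurements:

```python
# Two-point normalization: fit the line that maps the measured delta
# values of two reference materials onto their accepted values, then
# rescale every sample with that same line. All values in per mil.
def normalize(measured, ref1_meas, ref1_true, ref2_meas, ref2_true):
    slope = (ref2_true - ref1_true) / (ref2_meas - ref1_meas)
    return ref1_true + slope * (measured - ref1_meas)

# Hypothetical raw measurements of VSMOW (true 0.0) and SLAP (true -428.0):
vsmow_meas, slap_meas = 1.2, -421.5
sample = normalize(-100.0, vsmow_meas, 0.0, slap_meas, -428.0)
print(round(sample, 1))
```

Storing the correction line alongside every batch, rather than applying it by hand, is what gives the LIMS its audit trail: one can always recover which factors produced a reported value.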
A manual for a Laboratory Information Management System (LIMS) for light stable isotopes
Coplen, Tyler B.
1998-01-01
The reliability and accuracy of isotopic data can be improved by utilizing database software to (i) store information about samples, (ii) store the results of mass spectrometric isotope-ratio analyses of samples, (iii) calculate analytical results using standardized algorithms stored in a database, (iv) normalize stable isotopic data to international scales using isotopic reference materials, and (v) generate multi-sheet paper templates for convenient sample loading of automated mass-spectrometer sample preparation manifolds. Such a database program is presented herein. Major benefits of this system include (i) an increase in laboratory efficiency, (ii) reduction in the use of paper, (iii) reduction in workload due to the elimination or reduction of retyping of data by laboratory personnel, and (iv) decreased errors in data reported to sample submitters. Such a database provides a complete record of when and how often laboratory reference materials have been analyzed and provides a record of what correction factors have been used through time. It provides an audit trail for stable isotope laboratories. Since the original publication of the manual for LIMS for Light Stable Isotopes, the isotopes ³H, ³He, and ¹⁴C, and the chlorofluorocarbons (CFCs), CFC-11, CFC-12, and CFC-113, have been added to this program.
Short Fiction on Film: A Relational DataBase.
ERIC Educational Resources Information Center
May, Charles
Short Fiction on Film is a database that was created and will run on DataRelator, a relational database manager created by Bill Finzer for the California State Department of Education in 1986. DataRelator was designed for use in teaching students database management skills and to provide teachers with examples of how a database manager might be…
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2012 CFR
2012-10-01
... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2013 CFR
2013-10-01
... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2011 CFR
2011-10-01
... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...
Xia, Jun; Tashpolat, Tiyip; Zhang, Fei; Ji, Hong-jiang
2011-07-01
The characteristics of object spectra are not only the basis of quantitative analysis in remote sensing, but also a main subject of basic remote sensing research. A spectral database of typical surface objects in arid-area oases is of great significance for applied remote sensing research on soil salinization. In the present paper, the authors took the Ugan-Kuqa River Delta Oasis as an example, combined the .NET and SuperMap platforms with data stored in a SQL Server database, used the B/S pattern and the C# language to design and develop a typical surface object spectral information system, and established a typical surface object spectral database suited to the characteristics of arid-area oases. The system implements classified storage and management of the typical surface object spectral information and related attribute data of the study areas; it also implements visual two-way queries between maps and attribute data, the drawing of surface object spectral response curves, and the processing and plotting of derivative spectral data. In addition, the system initially possesses simple spectral data mining and analysis capabilities, providing an efficient, reliable and convenient data management and application platform for follow-up soil salinization studies in the Ugan-Kuqa River Delta Oasis. Finally, the system is easy to maintain, convenient for secondary development, and operates reliably in practice.
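The derivative-spectrum processing mentioned above is, at its simplest, a first difference of reflectance with respect to wavelength, which sharpens absorption features for salinization analysis. A minimal sketch with invented reflectance values:

```python
# First-derivative spectrum by finite differences:
# d(reflectance)/d(wavelength) between consecutive bands.
def first_derivative(wavelengths, reflectance):
    return [
        (reflectance[i + 1] - reflectance[i]) / (wavelengths[i + 1] - wavelengths[i])
        for i in range(len(wavelengths) - 1)
    ]

wl = [400, 410, 420, 430]          # band centres, nm (hypothetical)
refl = [0.10, 0.12, 0.18, 0.20]    # unitless reflectance (hypothetical)
print(first_derivative(wl, refl))
```

In a system like the one described, such derived curves would be computed server-side from the stored spectra and drawn alongside the raw spectral response curves.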
Geodata Modeling and Query in Geographic Information Systems
NASA Technical Reports Server (NTRS)
Adam, Nabil
1996-01-01
Geographic information systems (GIS) deal with collecting, modeling, managing, analyzing, and integrating spatial (locational) and non-spatial (attribute) data required for geographic applications. Examples of spatial data are digital maps, administrative boundaries, and road networks; examples of non-spatial data are census counts, land elevations, and soil characteristics. GIS shares common ground with a number of other disciplines such as computer-aided design, computer cartography, database management, and remote sensing. None of these disciplines, however, can by itself fully meet the requirements of a GIS application. Examples of such requirements include: the ability to use locational data to produce high-quality plots; performing complex operations such as network analysis; enabling spatial searching and overlay operations; supporting spatial analysis and modeling; and providing data management functions such as efficient storage, retrieval, and modification of large datasets; independence, integrity, and security of data; and concurrent access by multiple users. It is to the data management issues that we devote our discussion in this monograph. Traditionally, database management technology has been developed for business applications. Such applications require, among other things, capturing the data requirements of high-level business functions and developing machine-level implementations; supporting multiple views of data while providing integration that minimizes redundancy and maintains data integrity and security; providing a high-level language for data definition and manipulation; allowing concurrent access by multiple users; and processing user transactions efficiently. The demands on database management systems have been for speed, reliability, efficiency, cost effectiveness, and user-friendliness.
Significant progress has been made in all of these areas over the last two decades, to the point that many generalized database platforms are now available for developing data-intensive applications that run in real time. While improvement continues at a fast and competitive pace, new application areas such as computer-aided design, image processing, VLSI design, and GIS have been identified by many as the next generation of database applications. These new application areas pose serious challenges to currently available database technology. At the core of these challenges is the nature of the data being manipulated. In traditional database applications, database objects do not have any spatial dimension and can therefore be thought of as point data in a multi-dimensional space. For example, each instance of an entity EMPLOYEE will have a unique value for every attribute, such as employee id, employee name, employee address, and so on. Thus, every EMPLOYEE instance can be thought of as a point in a multi-dimensional space where each dimension is represented by an attribute. Furthermore, all operations on such data are one-dimensional: users may retrieve all entities satisfying one or more constraints, such as employees with addresses in a certain area code or salaries within a certain range. Even though constraints can be specified on multiple attributes (dimensions), the search for such data is essentially orthogonal across these dimensions.
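The contrast drawn above, attribute records as points versus spatial objects with extent, shows up directly in querying: a map window query must test overlap of regions, not equality of attribute values. A minimal sketch with invented parcels:

```python
# Window query over objects with spatial extent: an object qualifies if
# its bounding box (xmin, ymin, xmax, ymax) intersects the query window,
# unlike point/attribute data, where a predicate tests a single value.
def intersects(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

parcels = {
    "road": (0, 0, 10, 1),
    "park": (2, 2, 5, 5),
    "lake": (8, 8, 12, 12),
}
window = (4, 0, 9, 4)
hits = sorted(name for name, box in parcels.items() if intersects(box, window))
print(hits)
```

The linear scan shown here is exactly what spatial access methods (R-trees and their relatives) exist to avoid; a one-dimensional index on any single coordinate cannot answer this overlap test efficiently.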
Operational modes, health, and status monitoring
NASA Astrophysics Data System (ADS)
Taljaard, Corrie
2016-08-01
System Engineers must fully understand the system, its support system and operational environment to optimise the design. Operations and Support Managers must also identify the correct metrics to measure the performance and to manage the operations and support organisation. Reliability Engineering and Support Analysis provide methods to design a Support System and to optimise the Availability of a complex system. Availability modelling and Failure Analysis during the design is intended to influence the design and to develop an optimum maintenance plan for a system. The remote site locations of the SKA Telescopes place emphasis on availability, failure identification and fault isolation. This paper discusses the use of Failure Analysis and a Support Database to design a Support and Maintenance plan for the SKA Telescopes. It also describes the use of modelling to develop an availability dashboard and performance metrics.
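The availability figure such modelling tracks is commonly the steady-state ratio of mean time between failures to total failure-plus-repair time; the long logistics delays of a remote site enter through the repair term. A sketch with illustrative numbers (not SKA values):

```python
# Steady-state (inherent) availability: A = MTBF / (MTBF + MTTR).
# For a remote telescope site, MTTR includes travel and logistics delay,
# which is why fast failure identification and fault isolation matter.
def availability(mtbf_h, mttr_h):
    return mtbf_h / (mtbf_h + mttr_h)

print(round(availability(2000, 8), 4))    # repair at a local workshop
print(round(availability(2000, 120), 4))  # remote-site repair with travel
```

Even with identical hardware reliability, the remote case loses availability purely through the longer repair cycle, which motivates designing the support system, spares placement, and fault-isolation tooling together with the telescope itself.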
Microcomputer Database Management Systems for Bibliographic Data.
ERIC Educational Resources Information Center
Pollard, Richard
1986-01-01
Discusses criteria for evaluating microcomputer database management systems (DBMS) used for storage and retrieval of bibliographic data. Two popular types of microcomputer DBMS--file management systems and relational database management systems--are evaluated with respect to these criteria. (Author/MBR)
The Data Base and Decision Making in Public Schools.
ERIC Educational Resources Information Center
Hedges, William D.
1984-01-01
Describes generic types of databases--file management systems, relational database management systems, and network/hierarchical database management systems--with their respective strengths and weaknesses; discusses factors to be considered in determining whether a database is desirable; and provides evaluative criteria for use in choosing…
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Preheim, Larry E.
1990-01-01
Data systems requirements in the Earth Observing System (EOS) Space Station Freedom (SSF) eras indicate increasing data volume, increased discipline interplay, higher complexity and broader data integration and interpretation. A response to the needs of the interdisciplinary investigator is proposed, considering the increasing complexity and rising costs of scientific investigation. The EOS Data Information System, conceived to be a widely distributed system with reliable communication links between central processing and the science user community, is described. Details are provided on information architecture, system models, intelligent data management of large complex databases, and standards for archiving ancillary data, using a research library, a laboratory and collaboration services.
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2014 CFR
2014-10-01
... individual database managers; and to perform other functions as needed for the administration of the TV bands... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers...
An Evaluation of Online Business Databases.
ERIC Educational Resources Information Center
van der Heyde, Angela J.
The purpose of this study was to evaluate the credibility and timeliness of online business databases. The areas of evaluation were the currency, reliability, and extent of financial information in the databases. These were measured by performing an online search for financial information on five U.S. companies. The method of selection for the…
Bridging the Gap between the Data Base and User in a Distributed Environment.
ERIC Educational Resources Information Center
Howard, Richard D.; And Others
1989-01-01
The distribution of databases physically separates users from those who administer the database and perform database administration. By drawing on the work of social scientists in reliability and validity, a set of concepts and a list of questions to ensure data quality were developed. (Author/MLW)
Database Management Systems: New Homes for Migrating Bibliographic Records.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Bierbaum, Esther G.
1987-01-01
Assesses bibliographic databases as part of visionary text systems such as hypertext and scholars' workstations. Downloading is discussed in terms of the capability to search records and to maintain unique bibliographic descriptions, and relational database management systems, file managers, and text databases are reviewed as possible hosts for…
Network-based statistical comparison of citation topology of bibliographic databases
Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko
2014-01-01
Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions of their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or as a scientific evaluation guideline for governments and research agencies. PMID:25263231
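The per-database comparison over local graph statistics described above can be sketched with toy edge lists (these stand in for the real citation networks; the chosen statistics are illustrative, not the study's full set):

```python
from collections import Counter

def graph_stats(edges):
    """A few local statistics of a directed citation network given as
    (citing, cited) pairs: node count, edge count, degree summaries."""
    nodes = {n for e in edges for n in e}
    out_deg = Counter(src for src, _ in edges)
    return {
        "nodes": len(nodes),
        "edges": len(edges),
        "mean_out_degree": len(edges) / len(nodes),
        "max_out_degree": max(out_deg.values()),
    }

# Toy stand-ins for two bibliographic databases' citation networks.
db_a = [(1, 2), (1, 3), (2, 3), (4, 3)]
db_b = [(1, 2), (2, 3)]
stats_a, stats_b = graph_stats(db_a), graph_stats(db_b)
```

Comparing such per-database statistic vectors (and, in the paper, ranking them with a critical difference diagram) is what reveals which database's citation topology is the outlier.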
Expediting topology data gathering for the TOPDB database.
Dobson, László; Langó, Tamás; Reményi, István; Tusnády, Gábor E
2015-01-01
The Topology Data Bank of Transmembrane Proteins (TOPDB, http://topdb.enzim.ttk.mta.hu) contains experimentally determined topology data of transmembrane proteins. Recently, we have updated TOPDB from several sources and utilized a newly developed topology prediction algorithm to determine the most reliable topology using the results of experiments as constraints. In addition to collecting the experimentally determined topology data published in the last couple of years, we gathered topographies defined by the TMDET algorithm using 3D structures from the PDBTM. Results of global topology analysis of various organisms as well as topology data generated by high throughput techniques, like the sequential positions of N- or O-glycosylations were incorporated into the TOPDB database. Moreover, a new algorithm was developed to integrate scattered topology data from various publicly available databases and a new method was introduced to measure the reliability of predicted topologies. We show that reliability values highly correlate with the per protein topology accuracy of the utilized prediction method. Altogether, more than 52,000 new topology data and more than 2600 new transmembrane proteins have been collected since the last public release of the TOPDB database. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
Selecting Data-Base Management Software for Microcomputers in Libraries and Information Units.
ERIC Educational Resources Information Center
Pieska, K. A. O.
1986-01-01
Presents a model for the evaluation of database management systems software from the viewpoint of librarians and information specialists. The properties of data management systems, database management systems, and text retrieval systems are outlined and compared. (10 references) (CLB)
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
Construction of databases: advances and significance in clinical research.
Long, Erping; Huang, Bingjie; Wang, Liming; Lin, Xiaoyu; Lin, Haotian
2015-12-01
Widely used in clinical research, the database is a new type of data management automation technology and the most efficient tool for data management. In this article, we first explain some basic concepts, such as the definition, classification, and establishment of databases. Afterward, the workflow for establishing databases, inputting data, verifying data, and managing databases is presented. Meanwhile, by discussing the application of databases in clinical research, we illuminate the important role of databases in clinical research practice. Lastly, we introduce the reanalysis of randomized controlled trials (RCTs) and cloud computing techniques, showing the most recent advancements of databases in clinical research.
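The workflow outlined above (establishing a database, inputting data, verifying data) can be sketched with Python's built-in sqlite3 module; the table, fields, and verification rule below are illustrative, not taken from any particular clinical system.

```python
import sqlite3

# Establish: define a schema whose constraints enforce a basic
# data-verification rule at input time.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE patient (
        patient_id INTEGER PRIMARY KEY,
        age INTEGER NOT NULL CHECK (age BETWEEN 0 AND 120),
        enrolled DATE NOT NULL
    )
""")

# Input: a valid row is accepted...
conn.execute("INSERT INTO patient VALUES (1, 34, '2015-06-01')")

# Verify: ...while an out-of-range value is rejected by the CHECK rule.
try:
    conn.execute("INSERT INTO patient VALUES (2, 250, '2015-06-02')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

n_rows = conn.execute("SELECT COUNT(*) FROM patient").fetchone()[0]
```

Pushing verification into the schema, rather than into manual review, is one way databases deliver the "data management automation" the abstract describes.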
VerSeDa: vertebrate secretome database
Cortazar, Ana R.; Oguiza, José A.
2017-01-01
Based on current tools, de novo secretome (the full set of proteins secreted by an organism) prediction is a time-consuming bioinformatic task that requires a multifactorial analysis in order to obtain reliable in silico predictions. Hence, to accelerate this process and offer researchers a reliable repository where secretome information can be obtained for vertebrates and model organisms, we have developed VerSeDa (Vertebrate Secretome Database). This freely available database stores information about proteins that are predicted to be secreted through the classical and non-classical mechanisms, for the wide range of vertebrate species deposited at the NCBI, UCSC and ENSEMBL sites. To our knowledge, VerSeDa is the only state-of-the-art database designed to store secretome data from multiple vertebrate genomes, saving a substantial amount of time otherwise spent predicting protein features, which can instead be retrieved directly from this repository. Database URL: VerSeDa is freely available at http://genomics.cicbiogune.es/VerSeDa/index.php PMID:28365718
[The future of clinical laboratory database management system].
Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y
1999-09-01
To assess the present status of clinical laboratory database management systems, this study first explains the difference between a Clinical Laboratory Information System and a Clinical Laboratory System. Of the three kinds of database management systems (DBMS) considered (the relational, tree, and network models), the relational model was found to be the best DBMS for the clinical laboratory database, based on our experience and on the development of several clinical laboratory expert systems. As future clinical laboratory database management systems, an IC card system connected to an automatic chemical analyzer is proposed for personal health data management, and a microscope/video system is proposed for dynamic data management of leukocytes and bacteria.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Edward J., Jr.; Henry, Karen Lynne
Sandia National Laboratories develops technologies to: (1) sustain, modernize, and protect our nuclear arsenal; (2) prevent the spread of weapons of mass destruction; (3) provide new capabilities to our armed forces; (4) protect our national infrastructure; (5) ensure the stability of our nation's energy and water supplies; and (6) defend our nation against terrorist threats. We identified the need for a single overarching Integrated Workplace Management System (IWMS) that would enable us to focus on customer missions and improve FMOC processes. Our team selected highly configurable commercial-off-the-shelf (COTS) software with out-of-the-box workflow processes that integrate strategic planning, project management, facility assessments, and space management, and that can interface with existing systems such as Oracle, PeopleSoft, Maximo, Bentley, and FileNet. We selected the Integrated Workplace Management System (IWMS) from Tririga, Inc. Facility Management System (FMS) benefits are: (1) create a single reliable source for facility data; (2) improve transparency with oversight organizations; (3) streamline FMOC business processes with a single, integrated facility-management tool; (4) give customers simple tools and real-time information; (5) reduce indirect costs; (6) replace approximately 30 FMOC systems and 60 homegrown tools (such as Microsoft Access databases); and (7) integrate with FIMS.
Rimland, Joseph M; Abraha, Iosief; Luchetta, Maria Laura; Cozzolino, Francesco; Orso, Massimiliano; Cherubini, Antonio; Dell'Aquila, Giuseppina; Chiatti, Carlos; Ambrosio, Giuseppe; Montedori, Alessandro
2016-06-01
Healthcare databases are useful sources to investigate the epidemiology of chronic obstructive pulmonary disease (COPD), to assess longitudinal outcomes in patients with COPD, and to develop disease management strategies. However, in order to constitute a reliable source for research, healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of codes related to COPD diagnoses in healthcare databases. MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched using appropriate search strategies. Studies that evaluated the validity of COPD codes (such as the International Classification of Diseases 9th Revision and 10th Revision system, the Read Codes system or the International Classification of Primary Care) in healthcare databases will be included. Inclusion criteria will be: (1) the presence of a reference standard case definition for COPD; (2) the presence of at least one test measure (eg, sensitivity, positive predictive values, etc); and (3) the use of a healthcare database (including administrative claims databases, electronic healthcare databases or COPD registries) as a data source. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic Accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 statement. Ethics approval is not required. Results of this study will be submitted to a peer-reviewed journal for publication. The results from this systematic review will be used for outcome research on COPD and will serve as a guide to identify appropriate case definitions of COPD, and reference standards, for researchers involved in validating healthcare databases. PROSPERO registration number: CRD42015029204.
Waugh, Shirley Moore; Bergquist-Beringer, Sandra
2016-06-01
In this descriptive multi-site study, we examined inter-rater agreement on 11 National Database of Nursing Quality Indicators(®) (NDNQI(®)) pressure ulcer (PrU) risk and prevention measures. One hundred twenty raters at 36 hospitals captured data from 1,637 patient records. At each hospital, agreement between the most experienced rater and each other team rater was calculated for each measure. In the ratings studied, 528 patients were rated as "at risk" for PrU and, therefore, were included in calculations of agreement for the prevention measures. Prevalence-adjusted kappa (PAK) was used to interpret inter-rater agreement because prevalence of single responses was high. The PAK values for eight measures indicated "substantial" to "near perfect" agreement between most experienced and other team raters: Skin assessment on admission (.977, 95% CI [.966-.989]), PrU risk assessment on admission (.978, 95% CI [.964-.993]), Time since last risk assessment (.790, 95% CI [.729-.852]), Risk assessment method (.997, 95% CI [.991-1.0]), Risk status (.877, 95% CI [.838-.917]), Any prevention (.856, 95% CI [.76-.943]), Skin assessment (.956, 95% CI [.904-1.0]), and Pressure-redistribution surface use (.839, 95% CI [.763-.916]). For three intervention measures, PAK values fell below the recommended value of ≥.610: Routine repositioning (.577, 95% CI [.494-.661]), Nutritional support (.500, 95% CI [.418-.581]), and Moisture management (.556, 95% CI [.469-.643]). Areas of disagreement were identified. Findings provide support for the reliability of 8 of the 11 measures. Further clarification of data collection procedures is needed to improve reliability for the less reliable measures. © 2016 Wiley Periodicals, Inc.
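Prevalence-adjusted kappa is commonly computed as the prevalence-adjusted (and bias-adjusted) kappa, PABAK, which for two raters on a binary measure reduces to 2·p_o − 1, where p_o is the observed proportion of agreement; whether the study used exactly this variant is an assumption. A minimal sketch with toy ratings (not NDNQI data):

```python
def prevalence_adjusted_kappa(ratings_a, ratings_b):
    """PABAK for two raters on a binary measure: 2 * p_o - 1, where
    p_o is the observed proportion of agreement between the raters."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
    p_o = agree / len(ratings_a)
    return 2 * p_o - 1

# Toy example: raters agree on 9 of 10 records, so p_o = 0.9.
pak = prevalence_adjusted_kappa([1] * 9 + [0], [1] * 10)
```

Unlike Cohen's kappa, PABAK does not collapse when nearly all responses fall in one category, which is why it suits the high-prevalence responses described above.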
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
Oliveira, S R M; Almeida, G V; Souza, K R R; Rodrigues, D N; Kuser-Falcão, P R; Yamagishi, M E B; Santos, E H; Vieira, F D; Jardine, J G; Neshich, G
2007-10-05
An effective strategy for managing protein databases is to provide mechanisms that transform raw data into consistent, accurate and reliable information. Such mechanisms will greatly reduce operational inefficiencies and improve one's ability to better handle scientific objectives and interpret research results. To achieve this challenging goal for the STING project, we introduce Sting_RDB, a relational database of structural parameters for protein analysis with support for data warehousing and data mining. In this article, we highlight the main features of Sting_RDB and show how a user can explore it with efficient and biologically relevant queries. Considering its importance for molecular biologists, effort has been made to advance Sting_RDB toward data quality assessment. To the best of our knowledge, Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator. This paper differs from our previous study in many aspects. First, we introduce Sting_RDB, a relational database with mechanisms for efficient and relevant queries using SQL. Sting_RDB evolved from the earlier, text (flat file)-based database, in which data consistency and integrity were not guaranteed. Second, we provide support for data warehousing and mining. Third, a data quality indicator was introduced. Finally, and probably most importantly, complex queries that could not be posed on a text-based database are now easily implemented. Further details are accessible at the Sting_RDB demo web page: http://www.cbi.cnptia.embrapa.br/StingRDB.
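The kind of relational query the authors describe, which a flat text file cannot answer directly, can be illustrated with a join plus aggregate; the tables and columns below are hypothetical stand-ins, not Sting_RDB's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE protein (pdb_id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE residue_param (
        pdb_id TEXT, residue INTEGER, accessibility REAL
    );
    INSERT INTO protein VALUES ('1ABC', 'example kinase');
    INSERT INTO residue_param VALUES
        ('1ABC', 1, 12.5), ('1ABC', 2, 80.0), ('1ABC', 3, 75.5);
""")

# A join and aggregate in one SQL statement: per-protein count of
# highly accessible residues. Over flat files this would require
# custom parsing and cross-referencing code.
row = conn.execute("""
    SELECT p.name, COUNT(*) FROM protein p
    JOIN residue_param r ON r.pdb_id = p.pdb_id
    WHERE r.accessibility > 50
    GROUP BY p.pdb_id
""").fetchone()
```

The relational engine also enforces the consistency (typed columns, keys) that the earlier flat-file version could not guarantee.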
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Louise F.; Harmon, Anna C.
2015-04-01
Thermal and moisture problems in existing basements create a unique challenge because the exterior face of the wall is not easily or inexpensively accessible. This approach addresses thermal and moisture management from the interior face of the wall, without disturbing the exterior soil and landscaping, and has the potential to improve durability, comfort, and indoor air quality. This project was funded jointly by the National Renewable Energy Laboratory (NREL) and Oak Ridge National Laboratory (ORNL). ORNL focused on developing a full basement wall system experimental database to enable others to validate hygrothermal simulation codes. NREL focused on testing the moisture durability of practical basement wall interior insulation retrofit solutions for cold climates. The project has produced a physically credible and reliable long-term hygrothermal performance database for retrofit foundation wall insulation systems in zone 6 and 7 climates that are fully compliant with the performance criteria in the 2009 Minnesota Energy Code. The experimental data were configured into a standard format that can be published online and that is compatible with standard commercially available spreadsheet and database software.
GIS Application System Design Applied to Information Monitoring
NASA Astrophysics Data System (ADS)
Qun, Zhou; Yujin, Yuan; Yuena, Kang
A natural-environment information management system involves on-line instrument monitoring, data communications, database establishment, information management software development, and so on. Its core lies in collecting effective and reliable environmental information, increasing the utilization and sharing of that information through advanced information technology, and providing a timely, scientific foundation for environmental monitoring and management. This paper adopts C# plug-in application development and uses a set of embedded GIS component and tool libraries provided by GIS Engine to build the core of a plug-in GIS application framework: the design and implementation of the framework host program and of each functional plug-in, as well as the design and implementation of the plug-in GIS application framework platform. By exploiting dynamic plug-in loading and configuration, the approach quickly establishes a GIS application through visual, component-based collaborative modeling and realizes GIS application integration. The developed platform is applicable to any integration task involving GIS applications (on the ESRI platform) and can serve as a base platform for GIS application development.
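The dynamic plug-in loading pattern this framework relies on (implemented there in C#) can be sketched in Python with the standard importlib module; the register() hook convention below is an illustrative assumption, not the paper's API.

```python
import importlib

def load_plugins(module_names, registry=None):
    """Configuration-driven plug-in loading: dynamically import each
    named module and, if it exposes a register() hook, let it add its
    functionality to the shared registry."""
    registry = {} if registry is None else registry
    for name in module_names:
        mod = importlib.import_module(name)
        if hasattr(mod, "register"):
            mod.register(registry)
    return registry

# Illustrative call: the stdlib 'json' module stands in for a plug-in.
# It has no register() hook, so it is imported but contributes nothing.
plugins = load_plugins(["json"])
```

The host program stays generic: adding a capability means shipping a new module and listing it in the configuration, with no change to the framework itself.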
ERIC Educational Resources Information Center
Freeman, Carla; And Others
In order to understand how the database software or online database functioned in the overall curricula, the use of database management (DBMs) systems was studied at eight elementary and middle schools through classroom observation and interviews with teachers and administrators, librarians, and students. Three overall areas were addressed:…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... Excluded Parties Listing System (EPLS) databases into the System for Award Management (SAM) database. DATES... combined the functional capabilities of the CCR, ORCA, and EPLS procurement systems into the SAM database... identification number and the type of organization from the System for Award Management database. 0 3. Revise the...
An Examination of Selected Software Testing Tools: 1992
1992-12-01
For the historical test database, the test management and problem reporting tools were examined using the sample test database provided by each supplier. Metrics Manager is supported by an industry database that allows users to track the impact of new methods, organizational structures, and technologies.
Drozdovitch, Vladimir; Zhukova, Olga; Germenchuk, Maria; Khrutchinsky, Arkady; Kukhta, Tatiana; Luckyanov, Nickolas; Minenko, Victor; Podgaiskaya, Marina; Savkin, Mikhail; Vakulovsky, Sergey; Voillequé, Paul; Bouville, André
2012-01-01
Results of all available meteorological and radiation measurements that were performed in Belarus during the first three months after the Chernobyl accident were collected from various sources and incorporated into a single database. Meteorological information such as precipitation, wind speed and direction, and temperature in localities were obtained from meteorological station facilities. Radiation measurements include gamma-exposure rate in air, daily fallout, concentration of different radionuclides in soil, grass, cow’s milk and water as well as total beta-activity in cow’s milk. Considerable efforts were made to evaluate the reliability of the measurements that were collected. The electronic database can be searched according to type of measurement, date, and location. The main purpose of the database is to provide reliable data that can be used in the reconstruction of thyroid doses resulting from the Chernobyl accident. PMID:23103580
On the reliable use of satellite-derived surface water products for global flood monitoring
NASA Astrophysics Data System (ADS)
Hirpa, F. A.; Revilla-Romero, B.; Thielen, J.; Salamon, P.; Brakenridge, R.; Pappenberger, F.; de Groeve, T.
2015-12-01
Early flood warning and real-time monitoring systems play a key role in flood risk reduction and disaster response management. To this end, real-time flood forecasting and satellite-based detection systems have been developed at global scale. However, due to the limited availability of up-to-date ground observations, the reliability of these systems for real-time applications has not been assessed in large parts of the globe. In this study, we performed comparative evaluations of commonly used satellite-based global flood detection systems and an operational flood forecasting system using 10 major flood cases reported over three years (2012-2014). Specifically, we assessed the flood detection capabilities of the near real-time global flood maps from the Global Flood Detection System (GFDS) and from the Moderate Resolution Imaging Spectroradiometer (MODIS), and the operational forecasts from the Global Flood Awareness System (GloFAS), for the major flood events recorded in global flood databases. We present the evaluation results of the global flood detection and forecasting systems in terms of correctly indicating the reported flood events and highlight the existing limitations of each system. Finally, we propose possible ways forward to improve the reliability of large-scale flood monitoring tools.
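Evaluating a detection system against reported events, as described above, reduces to hit/miss counting. The sketch below uses toy event sets and two standard skill metrics (probability of detection and false alarm ratio), which are illustrative choices rather than the study's exact scoring.

```python
def detection_scores(reported, detected):
    """Probability of detection (POD) and false alarm ratio (FAR)
    from sets of reported flood events and system-detected events."""
    hits = len(reported & detected)
    misses = len(reported - detected)
    false_alarms = len(detected - reported)
    pod = hits / (hits + misses) if reported else 0.0
    far = false_alarms / len(detected) if detected else 0.0
    return pod, far

# Toy events: 10 reported floods; the system flags 8 events,
# of which 6 correspond to real reported floods.
reported = set(range(10))
detected = {0, 1, 2, 3, 4, 5, 20, 21}
pod, far = detection_scores(reported, detected)
```

Reporting both metrics matters: a system can trivially raise POD by flagging everything, at the cost of a FAR near one.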
Database Searching by Managers.
ERIC Educational Resources Information Center
Arnold, Stephen E.
Managers and executives need the easy and quick access to business and management information that online databases can provide, but many have difficulty articulating their search needs to an intermediary. One possible solution would be to encourage managers and their immediate support staff members to search textual databases directly as they now…
Patterson, P Daniel; Weaver, Matthew D; Fabio, Anthony; Teasley, Ellen M; Renn, Megan L; Curtis, Brett R; Matthews, Margaret E; Kroemer, Andrew J; Xun, Xiaoshuang; Bizhanova, Zhadyra; Weiss, Patricia M; Sequeira, Denisse J; Coppler, Patrick J; Lang, Eddy S; Higgins, J Stephen
2018-02-15
This study sought to systematically search the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. A systematic review design was used to search six databases, including one website. The research question guiding the search was developed a priori and registered with the PROSPERO database of systematic reviews: "Are there reliable and valid instruments for measuring fatigue among EMS personnel?" (2016:CRD42016040097). The primary outcome of interest was criterion-related validity. Important outcomes of interest included reliability (e.g., internal consistency) and indicators of sensitivity and specificity. Members of the research team independently screened records from the databases. Full-text articles were evaluated by adapting the Bolster and Rourke system for categorizing findings of systematic reviews, and the data abstracted from the body of literature were rated as favorable, unfavorable, mixed/inconclusive, or no impact. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) methodology was used to evaluate the quality of evidence. The search strategy yielded 1,257 unique records. Thirty-four unique experimental and non-experimental studies were determined relevant following full-text review. Nineteen studies reported on the reliability and/or validity of ten different fatigue survey instruments. Eighteen different studies evaluated the reliability and/or validity of four different sleepiness survey instruments. None of the retained studies reported sensitivity or specificity. Evidence quality was rated as very low across all outcomes. In this systematic review, limited evidence was identified for the reliability and validity of 14 different survey instruments to assess the fatigue and/or sleepiness status of EMS personnel and related shift-worker groups.
Experiences with Two Reliability Data Collection Efforts (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, S.; Lantz, E.
2013-08-01
This presentation, given by NREL at the Wind Reliability Experts Meeting in Albuquerque, New Mexico, outlines the causes of wind plant operational expenditures and gearbox failures and describes NREL's efforts to create a gearbox failure database.
NASA Astrophysics Data System (ADS)
García-Mayordomo, Julián; Martín-Banda, Raquel; Insua-Arévalo, Juan M.; Álvarez-Gómez, José A.; Martínez-Díaz, José J.; Cabral, João
2017-08-01
Active fault databases are a very powerful and useful tool in seismic hazard assessment, particularly when singular faults are considered seismogenic sources. Active fault databases are also a very relevant source of information for earth scientists, earthquake engineers and even teachers or journalists. Hence, active fault databases should be updated and thoroughly reviewed on a regular basis in order to maintain a standard of quality and uniform criteria. Ideally, active fault databases should also indicate the quality of the geological data and, particularly, the reliability attributed to crucial fault-seismic parameters, such as maximum magnitude and recurrence interval. In this paper we explain how we tackled these issues during the process of updating and reviewing the Quaternary Active Fault Database of Iberia (QAFI) to its current version 3. We devote particular attention to describing the scheme devised for classifying the quality and representativeness of the geological evidence of Quaternary activity and the accuracy of the slip rate estimation in the database. Subsequently, we use this information as input for a straightforward rating of the level of reliability of the maximum magnitude and recurrence interval fault seismic parameters. We conclude that QAFI v.3 is a much better database than version 2, both for proper use in seismic hazard applications and as an informative source for non-specialized users. However, we already envision new improvements for a future update.
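The rating idea described above can be sketched in a few lines. This is a hypothetical illustration only: the grades, scores and thresholds below are assumptions made for the sketch, not the scheme actually used in QAFI v.3.

```python
# Hypothetical sketch: combine two quality grades (geological evidence and
# slip-rate accuracy) into a reliability rating for fault seismic parameters.
# The grades, scores and thresholds are assumptions, not QAFI's real scheme.
GRADE_SCORES = {"A": 3, "B": 2, "C": 1}  # A = best documented

def reliability_rating(evidence_grade: str, slip_rate_grade: str) -> str:
    """Rate a fault parameter (e.g. maximum magnitude) as High/Medium/Low."""
    total = GRADE_SCORES[evidence_grade] + GRADE_SCORES[slip_rate_grade]
    if total >= 5:
        return "High"
    if total >= 3:
        return "Medium"
    return "Low"

print(reliability_rating("A", "B"))  # a well-constrained fault
```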
Generalized Database Management System Support for Numeric Database Environments.
ERIC Educational Resources Information Center
Dominick, Wayne D.; Weathers, Peggy G.
1982-01-01
This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…
A Note on the Score Reliability for the Satisfaction with Life Scale: An RG Study
ERIC Educational Resources Information Center
Vassar, Matt
2008-01-01
The purpose of the present study was to meta-analytically investigate the score reliability of the Satisfaction With Life Scale. Four hundred and sixteen articles using the measure were located through electronic database searches and then screened to identify studies that had calculated reliability estimates from their own data. Sixty-two…
High-Performance, Reliable Multicasting: Foundations for Future Internet Groupware Applications
NASA Technical Reports Server (NTRS)
Callahan, John; Montgomery, Todd; Whetten, Brian
1997-01-01
Network protocols that provide efficient, reliable, and totally-ordered message delivery to large numbers of users will be needed to support many future Internet applications. The Reliable Multicast Protocol (RMP) is implemented on top of IP multicast to facilitate reliable transfer of data for replicated databases and groupware applications that will emerge on the Internet over the next decade. This paper explores some of the basic questions and applications of reliable multicasting in the context of the development and analysis of RMP.
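One ingredient of such protocols, totally-ordered delivery, can be illustrated with a toy sequencer: every message receives the next global sequence number, so all receivers deliver in the same order. This is a deliberate simplification for illustration, not RMP's actual design; the class and names are assumptions.

```python
import itertools

# Toy sketch of total ordering: a single sequencer stamps every multicast
# message with a global sequence number; receivers deliver in stamp order.
# Not RMP's actual mechanism - a simplified illustration only.
class Sequencer:
    def __init__(self):
        self._counter = itertools.count(1)

    def stamp(self, sender, payload):
        # Whoever sent the message, it gets the next global number.
        return (next(self._counter), sender, payload)

seq = Sequencer()
m1 = seq.stamp("nodeA", "update x")
m2 = seq.stamp("nodeB", "update y")
print(m1[0], m2[0])  # -> 1 2
```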
NASA Astrophysics Data System (ADS)
Manukalo, V.
2012-12-01
Defining issue: River inundations are the most common and destructive natural hazards in Ukraine. Among non-structural flood management and protection measures, the creation of an Early Flood Warning System is extremely important for the timely recognition of dangerous situations in flood-prone areas. Hydrometeorological information and forecasts are of core importance in this system. The primary factors affecting the reliability and lead time of forecasts include the accuracy, speed and reliability with which real-time data are collected. The existing fragmented approach to monitoring and forecasting created a need to reconsider it as an integrated monitoring and forecasting approach - from "sensors to database and forecasters". Result presentation: The project "Development of Flood Monitoring and Forecasting in the Ukrainian Part of the Dniester River Basin" is presented. The project is developed by the Ukrainian Hydrometeorological Service in conjunction with the Water Management Agency and the Energy Company "Ukrhydroenergo". The implementation of the project is funded by the Ukrainian Government and the World Bank. The author is the person responsible for coordinating the activity of the organizations involved in the project. The term of the project implementation is 2012-2014.
The principal objectives of the project are: a) designing an integrated automatic hydrometeorological measurement network (including remote sensing technologies); b) constructing a hydrometeorological GIS database coupled with electronic maps for flood risk assessment; c) building an interface between the classic numerical database, the GIS, satellite images, and radar data collection; d) providing real-time data dissemination from observation points to forecasting centers; e) developing hydrometeorological forecasting methods; f) providing flood hazard risk assessment at different temporal and spatial scales; g) providing automatic dissemination of current information, forecasts and warnings to consumers. Besides scientific and technical issues, the implementation of these objectives requires the solution of a number of organizational issues. Thus, as a result of the increased complexity of the types of hydrometeorological data, and in order to develop forecasting methods, a reconsideration of the meteorological and hydrological measurement networks should be carried out. An "optimal density of measuring networks" is proposed, taking into account two principal terms: a) minimizing the uncertainty in characterizing the spatial distribution of hydrometeorological parameters; b) minimizing the Total Life Cycle Cost of creating and maintaining the measurement networks. Much attention will be given to training Ukrainian disaster management authorities from the Ministry of Emergencies and the Water Management Agency to identify the flood hazard risk level and to indicate the best protection measures on the basis of continuous monitoring and forecasts of the evolution of meteorological and hydrological conditions in the river basin.
Angelow, Aniela; Schmidt, Matthias; Weitmann, Kerstin; Schwedler, Susanne; Vogt, Hannes; Havemann, Christoph; Hoffmann, Wolfgang
2008-07-01
In our report we describe the concept, strategies and implementation of a central biosample and data management (CSDM) system in the three-centre clinical study of the Transregional Collaborative Research Centre "Inflammatory Cardiomyopathy - Molecular Pathogenesis and Therapy" (SFB/TR 19), Germany. Following the requirements of high system resource availability, data security, privacy protection and quality assurance, a web-based CSDM was developed based on Java 2 Enterprise Edition using an Oracle database. An efficient and reliable sample documentation system using bar-code labelling, a partitioning storage algorithm and online documentation software was implemented. An online electronic case report form is used to acquire patient-related data. Strict rules for access to the online applications and secure connections account for privacy protection and data security. Challenges for the implementation of the CSDM arose at the project, technical and organisational levels as well as at the staff level.
CRAB3: Establishing a new generation of services for distributed analysis at CMS
NASA Astrophysics Data System (ADS)
Cinquilli, M.; Spiga, D.; Grandi, C.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Riahi, H.; Vaandering, E.
2012-12-01
In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results, as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier-0, Tier-1, production, analysis) share a common core with long-term maintainability, as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves workflow automation and simplifies maintainability. In particular, we highlight the impact of the new design on daily operations.
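The central-queue model described above (work injected centrally, consumed by a pool of agents) can be sketched with a thread pool standing in for geographically distributed agents. The names and structure here are illustrative assumptions, not CRAB3's real interfaces.

```python
import queue
import threading

# Minimal stand-in for the CRAB3 model: user tasks are injected into one
# central queue, and a pool of agents consumes work from it. Illustrative
# only; names and structure are assumptions, not CRAB3's real interfaces.
central_queue: "queue.Queue[str]" = queue.Queue()
results = []
lock = threading.Lock()

def agent(agent_id: int) -> None:
    """Pull tasks from the central queue until it is drained."""
    while True:
        try:
            task = central_queue.get_nowait()
        except queue.Empty:
            return
        with lock:
            results.append((agent_id, task))
        central_queue.task_done()

# Central injection of user tasks.
for t in ("taskA", "taskB", "taskC", "taskD"):
    central_queue.put(t)

agents = [threading.Thread(target=agent, args=(i,)) for i in range(2)]
for a in agents:
    a.start()
for a in agents:
    a.join()

print(sorted(task for _, task in results))
```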
Building Databases for Education. ERIC Digest.
ERIC Educational Resources Information Center
Klausmeier, Jane A.
This digest provides a brief explanation of what a database is; explains how a database can be used; identifies important factors that should be considered when choosing database management system software; and provides citations to sources for finding reviews and evaluations of database management software. The digest is concerned primarily with…
ERIC Educational Resources Information Center
American Society for Information Science, Washington, DC.
This document contains abstracts of papers on database design and management which were presented at the 1986 mid-year meeting of the American Society for Information Science (ASIS). Topics considered include: knowledge representation in a bilingual art history database; proprietary database design; relational database design; in-house databases;…
How to: identify non-tuberculous Mycobacterium species using MALDI-TOF mass spectrometry.
Alcaide, F; Amlerová, J; Bou, G; Ceyssens, P J; Coll, P; Corcoran, D; Fangous, M-S; González-Álvarez, I; Gorton, R; Greub, G; Hery-Arnaud, G; Hrábak, J; Ingebretsen, A; Lucey, B; Marekoviċ, I; Mediavilla-Gradolph, C; Monté, M R; O'Connor, J; O'Mahony, J; Opota, O; O'Reilly, B; Orth-Höller, D; Oviaño, M; Palacios, J J; Palop, B; Pranada, A B; Quiroga, L; Rodríguez-Temporal, D; Ruiz-Serrano, M J; Tudó, G; Van den Bossche, A; van Ingen, J; Rodriguez-Sanchez, B
2018-06-01
The implementation of MALDI-TOF MS for microorganism identification has changed the routine of microbiology laboratories as we knew it. Most microorganisms can now be reliably identified within minutes using this inexpensive, user-friendly methodology. However, its application to the identification of mycobacterial isolates has been hampered by the structure of their cell wall. Improvements in the sample processing method and in the available databases have proved key factors for the rapid and reliable identification of non-tuberculous mycobacterial isolates using MALDI-TOF MS. The main objective is to provide information about the procedures for the identification of non-tuberculous isolates using MALDI-TOF MS and to review different sample processing methods, available databases, and the interpretation of results. Results from relevant studies on the use of the available MALDI-TOF MS instruments, the implementation of innovative sample processing methods, and the implementation of improved databases are discussed. Insight is provided into the methodology required for reliable identification of non-tuberculous mycobacteria and its implementation in the routine of the microbiology laboratory. Microbiology laboratories where MALDI-TOF MS is available can benefit from its capacity to identify most clinically relevant non-tuberculous mycobacteria in a rapid, reliable, and inexpensive manner. Copyright © 2017 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) require developers to have knowledge of the UNIX operating system as well as of all the features of the Oracle Server. The server is an object-relational database management system (DBMS). With distributed processing, work is split between the database server and the client application programs: the DBMS handles all the responsibilities of the server, while the workstations running the database application concentrate on the interpretation and display of data.
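The client/server split can be sketched with Python's DB-API, using an in-memory SQLite database as a stand-in for the Oracle server: the DBMS evaluates the query, and the client only interprets and displays the rows it receives.

```python
import sqlite3

# SQLite stands in for the Oracle server here; the table and data are
# invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE experiments (name TEXT, runs INTEGER)")
conn.executemany("INSERT INTO experiments VALUES (?, ?)",
                 [("alpha", 12), ("beta", 7)])

# Server side: query planning, filtering and sorting happen in the DBMS.
rows = conn.execute(
    "SELECT name, runs FROM experiments ORDER BY runs DESC").fetchall()

# Client side: the application only interprets and displays the result set.
for name, runs in rows:
    print(f"{name}: {runs} runs")
```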
Killion, Patrick J; Sherlock, Gavin; Iyer, Vishwanath R
2003-01-01
Background: The power of microarray analysis can be realized only if data is systematically archived and linked to biological annotations as well as analysis algorithms. Description: The Longhorn Array Database (LAD) is a MIAME-compliant microarray database that operates on PostgreSQL and Linux. It is a fully open-source version of the Stanford Microarray Database (SMD), one of the largest microarray databases. LAD is available online. Conclusions: Our development of LAD provides a simple, free, open, reliable and proven solution for storage and analysis of two-color microarray data. PMID:12930545
Giddings, Jeffrey M; Campana, David; Nair, Shyam; Brain, Richard
2018-04-16
The US Environmental Protection Agency (USEPA) has historically used different methods to derive an aquatic level of concern (LoC) for atrazine, though all have generally relied on an expanding set of mesocosm and microcosm ("cosm") studies for calibration. The database of results from ecological effects studies with atrazine in cosms now includes 108 data points from 39 studies and forms the basis for assessing atrazine's potential to impact aquatic plant communities. Inclusion of the appropriate cosm studies and accurate interpretation of each data point-delineated as binary scores of "effect" (effect score 1) or "no effect" (effect score 0) of a specific atrazine exposure profile on plant communities in a single study-is critical to USEPA's approach to determining the LoC. We reviewed the atrazine cosm studies in detail and carefully interpreted their results in terms of the binary effect scores. The cosm database includes a wide range of experimental systems and study designs, some of which are more relevant to natural plant communities than others. Moreover, the studies vary in the clarity and consistency of their results. We therefore evaluated each study against objective criteria for relevance and reliability to produce a weighting score that can be applied to the effect scores when calculating the LoC. This approach is useful because studies that are more relevant and reliable have greater influence on the LoC than studies with lower weighting scores. When the current iteration of USEPA's LoC approach, referred to as the plant assemblage toxicity index (PATI), was calibrated with the weighted cosm data set, the result was a 60-day LoC of 21.2 μg/L. Integr Environ Assess Manag 2018;00:000-000. © 2018 The Authors. Integrated Environmental Assessment and Management Published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC). © 2018 The Authors. 
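The weighting idea described above can be illustrated with a small sketch: binary effect scores combined with relevance/reliability weights, so that more relevant and reliable studies carry more influence. The scores, weights and the weighted-fraction formula below are assumptions made for illustration, not USEPA's actual PATI calibration.

```python
# Illustrative only: binary effect scores (1 = effect, 0 = no effect)
# combined with relevance/reliability weights. The numbers and the
# weighted-fraction formula are assumptions, not the PATI calibration.
cosm_data = [
    # (effect_score, weight)
    (1, 1.0),
    (0, 0.5),
    (1, 0.8),
    (0, 1.0),
]

def weighted_effect_fraction(data):
    """Weight-adjusted fraction of cosm data points scored as 'effect'."""
    total_weight = sum(w for _, w in data)
    effect_weight = sum(w for s, w in data if s == 1)
    return effect_weight / total_weight

print(round(weighted_effect_fraction(cosm_data), 3))
```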
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bower, J.C.; Burford, M.J.; Downing, T.R.
The Integrated Baseline System (IBS) is an emergency management planning and analysis tool that is being developed under the direction of the US Army Nuclear and Chemical Agency (USANCA). The IBS Data Management Guide provides the background, as well as the operations and procedures, needed to generate and maintain a site-specific map database. Data and system managers use this guide to manage the data files and database that support the administrative, user-environment, database management, and operational capabilities of the IBS. This document provides a description of the data files and structures necessary for running the IBS software and using the site map database.
The HISTMAG database: combining historical, archaeomagnetic and volcanic data
NASA Astrophysics Data System (ADS)
Arneitz, Patrick; Leonhardt, Roman; Schnepp, Elisabeth; Heilig, Balázs; Mayrhofer, Franziska; Kovacs, Peter; Hejda, Pavel; Valach, Fridrich; Vadasz, Gergely; Hammerl, Christa; Egli, Ramon; Fabian, Karl; Kompein, Niko
2017-09-01
Records of the past geomagnetic field can be divided into two main categories. These are instrumental historical observations on the one hand, and field estimates based on the magnetization acquired by rocks, sediments and archaeological artefacts on the other. In this paper, a new database combining historical, archaeomagnetic and volcanic records is presented. HISTMAG is a relational database, implemented in MySQL, and can be accessed via a web-based interface (http://www.conrad-observatory.at/zamg/index.php/data-en/histmag-database). It combines available global historical data compilations covering the last ∼500 yr as well as archaeomagnetic and volcanic data collections from the last 50 000 yr. Furthermore, new historical and archaeomagnetic records, mainly from central Europe, have been acquired. In total, 190 427 records are currently available in the HISTMAG database, the majority of which are historical declination measurements (155 525). The original database structure was complemented by new fields that allow for a detailed description of the different data types, and a user-comment function provides the possibility of scientific discussion about individual records. The HISTMAG database therefore supports thorough reliability and uncertainty assessments of the widely different data sets, which are an essential basis for geomagnetic field reconstructions. A database analysis revealed a systematic offset for declination records derived from compass roses on historical geographical maps, through comparison with other historical records, while maps created for mining activities represent a reliable source.
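Queries like the record counts quoted above are straightforward in a relational backend. The sketch below uses a hypothetical, simplified schema, with SQLite standing in for HISTMAG's MySQL backend; table and column names are assumptions.

```python
import sqlite3

# Hypothetical, simplified schema; SQLite stands in for the MySQL backend.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (id INTEGER, data_type TEXT, value REAL)")
db.executemany("INSERT INTO records VALUES (?, ?, ?)", [
    (1, "declination", 12.4),
    (2, "declination", 11.9),
    (3, "intensity", 45.1),
])

# Count records per data type, as in the database statistics quoted above.
counts = dict(db.execute(
    "SELECT data_type, COUNT(*) FROM records GROUP BY data_type"))
print(counts)
```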
[Selected aspects of computer-assisted literature management].
Reiss, M; Reiss, G
1998-01-01
We report our own experiences with a bibliographic database manager. Bibliographic database managers are used to manage information resources: specifically, to maintain a database of references and to create bibliographies and reference lists for written works. A database manager allows the user to enter summary information (a record) for articles, book sections, books, dissertations, conference proceedings, and so on. Other features may include the ability to import references from different sources, such as MEDLINE. The word-processing components generate reference lists and bibliographies in a variety of styles, directly from a word-processor manuscript. The function and use of the software package EndNote 2 for Windows are described. Its advantages in fulfilling different requirements for citation style and the sort order of reference lists are emphasized.
HASH: the Hong Kong/AAO/Strasbourg Hα planetary nebula database
NASA Astrophysics Data System (ADS)
Parker, Quentin A.; Bojičić, Ivan S.; Frew, David J.
2016-07-01
By incorporating our major recent discoveries with re-measured and verified contents of existing catalogues we provide, for the first time, an accessible, reliable, on-line SQL database of essential, up-to-date information for all known Galactic planetary nebulae (PNe). We have attempted to: i) reliably remove PN mimics/false IDs that have biased previous studies and ii) provide accurate positions, sizes, morphologies, multi-wavelength imagery and spectroscopy. We also provide a link to CDS/Vizier for the archival history of each object and other valuable links to external data. With the HASH interface, users can sift, select, browse, collate, investigate, download and visualise the entire currently known Galactic PNe diversity. HASH provides the community with the most complete and reliable data with which to undertake new science.
Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.
ERIC Educational Resources Information Center
Gutmann, Myron P.; And Others
1989-01-01
Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programing tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)
Content Independence in Multimedia Databases.
ERIC Educational Resources Information Center
de Vries, Arjen P.
2001-01-01
Investigates the role of data management in multimedia digital libraries, and its implications for the design of database management systems. Introduces the notions of content abstraction and content independence. Proposes a blueprint of a new class of database technology, which supports the basic functionality for the management of both content…
Patel, Vanash M.; Ashrafian, Hutan; Almoudaris, Alex; Makanjuola, Jonathan; Bucciarelli-Ducci, Chiara; Darzi, Ara; Athanasiou, Thanos
2013-01-01
Objectives: To compare H index scores for healthcare researchers returned by the Google Scholar, Web of Science and Scopus databases, and to assess whether a researcher's age, country of institutional affiliation and physician status influence the calculations. Subjects and Methods: One hundred and ninety-five Nobel laureates in Physiology and Medicine from 1901 to 2009 were considered. The year of first and last publications, total publication and citation counts, and the H index for each laureate were calculated from each database. Cronbach's alpha statistic was used to measure the reliability of H index scores between the databases. The influence of laureate characteristics on the H index was analysed using linear regression. Results: There was no concordance between the databases in the number of publications and the citation count per laureate. The H index was the most reliably calculated bibliometric across the three databases (Cronbach's alpha = 0.900). All databases returned significantly higher H index scores for younger laureates (p < 0.0001). Google Scholar and Web of Science returned significantly higher H indices for physician laureates (p = 0.025 and p = 0.029, respectively). Country of institutional affiliation did not influence the H index in any database. Conclusion: The H index appeared to be the most consistently calculated bibliometric between the databases for Nobel laureates in Physiology and Medicine. Researcher-specific characteristics constituted an important component of objective research assessment. The findings of this study call into question the choice of current and future academic performance databases. PMID:22964880
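The H index itself is simple to compute from a list of per-paper citation counts, which is why the discrepancies between databases come down to how each one counts publications and citations. A short sketch (citation counts invented for illustration):

```python
def h_index(citations):
    """H index: the largest h such that h of the papers
    have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# The same publication record can yield different H indices simply because
# each database counts citations differently (numbers invented):
wos = [10, 8, 5, 4, 3]       # per-paper citation counts in one database
scholar = [12, 9, 6, 5, 5]   # same papers, higher counts in another
print(h_index(wos), h_index(scholar))  # -> 4 5
```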
Development of expert systems for analyzing electronic documents
NASA Astrophysics Data System (ADS)
Abeer Yassin, Al-Azzawi; Shidlovskiy, S.; Jamal, A. A.
2018-05-01
The paper analyses a database management system (DBMS). Expert systems, databases, and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment, consisting of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans, Alfresco and system components previously developed at Tomsk State University at the Faculty of Innovative Technology.
NASA Technical Reports Server (NTRS)
Moroh, Marsha
1988-01-01
A methodology was developed for building interfaces from resident database management systems to DAVID, a heterogeneous distributed database management system under development at NASA. The feasibility of the methodology was demonstrated by constructing the software necessary to perform the interface task. The interface terminology developed in the course of this research is presented, and the work performed and the results are summarized.
Zilaout, Hicham; Vlaanderen, Jelle; Houba, Remko; Kromhout, Hans
2017-07-01
In 2000, a prospective Dust Monitoring Program (DMP) was started in which measurements of workers' exposure to respirable dust and quartz are collected in member companies of the European Industrial Minerals Association (IMA-Europe). After 15 years, the resulting IMA-DMP database allows a detailed overview of exposure levels of respirable dust and quartz over time within this industrial sector. Our aim is to describe the IMA-DMP and the current state of the corresponding database, which is still growing as the program continues. The future use of the database is also highlighted, including its utility for the industrial minerals sector. Exposure data are obtained following a common protocol, including a standardized sampling strategy, standardized sampling and analytical methods, and a data management system. Following strict quality-control procedures, exposure data are added to a central database. The data comprise personal exposure measurements, including auxiliary information on work and other conditions during sampling. Currently, the IMA-DMP database consists of almost 28,000 personal measurements performed from 2000 until 2015, representing 29 half-yearly sampling campaigns. The exposure data have been collected from 160 worksites owned by 35 industrial mineral companies and come from 23 European countries and approximately 5,000 workers. The IMA-DMP database provides the European minerals sector with reliable data on workers' personal exposure to respirable dust and quartz. The database can be used as a powerful tool to address outstanding scientific issues on long-term exposure trends and exposure variability and, importantly, as a surveillance tool to evaluate exposure control measures. The database will be valuable for future epidemiological studies on respiratory health effects and will allow estimation of quantitative exposure-response relationships.
Copyright © 2017 The Authors. Published by Elsevier GmbH. All rights reserved.
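A long-term trend summary of the kind the IMA-DMP database enables, mean exposure per half-yearly campaign, can be sketched as follows. The values are invented for illustration; a real analysis would draw on the ~28,000 stored measurements.

```python
from statistics import mean

# Invented example measurements: (campaign, respirable quartz, mg/m3).
measurements = [
    ("2000-1", 0.06), ("2000-1", 0.04),
    ("2000-2", 0.05), ("2000-2", 0.03),
]

# Group by half-yearly campaign and report the campaign mean.
by_campaign = {}
for campaign, value in measurements:
    by_campaign.setdefault(campaign, []).append(value)

for campaign in sorted(by_campaign):
    print(campaign, round(mean(by_campaign[campaign]), 3))
```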
Computer Security Products Technology Overview
1988-10-01
3. Database Management Systems - Definition ... the products this paper addresses fall into the areas of multi-user hosts, database management systems (DBMS), workstations, networks, guards and gateways, and ... provide a portion of that protection, for example, a password scheme, a file protection mechanism, a secure database management system, or even a ...
An Introduction to Database Management Systems.
ERIC Educational Resources Information Center
Warden, William H., III; Warden, Bette M.
1984-01-01
Description of database management systems for microcomputers highlights system features and factors to consider in microcomputer system selection. A method for ranking database management systems is explained and applied to a defined need, i.e., software support for indexing a weekly newspaper. A glossary of terms and 32-item bibliography are…
NASA Technical Reports Server (NTRS)
Aruljothi, Arunvenkatesh
2016-01-01
The Space Exploration Division of the Safety and Mission Assurance Directorate is responsible for reducing risk to Human Space Flight Programs by providing system safety, reliability, and risk analysis. The Risk & Reliability Analysis branch plays a part in this by utilizing Probabilistic Risk Assessment (PRA) and Reliability and Maintainability (R&M) tools to identify possible types of failure and effective solutions. A continuous effort of this branch is MaRS, or Mass and Reliability System, the tool that was the focus of this internship. Future long-duration space missions will have to find a balance between the mass and reliability of their spare parts. They will be unable to take spares of everything and will have to determine what is most likely to require maintenance and spares. Currently there is no database that combines mass and reliability data for low-level space-grade components; MaRS aims to be the first database to do this. The data in MaRS are based on the hardware flown on the International Space Station (ISS). The components on the ISS have a long history and are well documented, making them the perfect source. Currently, MaRS is a functioning Excel workbook database; the backend is complete and only requires optimization. MaRS has been populated with all the assemblies, and their components, used on the ISS; the failures of these components are updated regularly. This project was a continuation of the efforts of previous intern groups. Once complete, R&M engineers working on future space flight missions will be able to quickly access failure and mass data on assemblies and components, allowing them to make important decisions and tradeoffs.
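The mass-versus-reliability tradeoff MaRS is meant to support can be sketched as a simple ranking of candidate spares by expected failures avoided per kilogram carried. The component names, numbers and the ranking criterion are made up for illustration, not MaRS's actual method.

```python
# Hypothetical spares tradeoff: rank candidates by expected failures
# avoided per kilogram carried. All names and numbers are invented.
components = [
    # (name, mass_kg, expected_failures_over_mission)
    ("pump", 4.0, 0.8),
    ("valve", 0.5, 0.3),
    ("controller", 2.0, 0.1),
]

def spares_priority(parts):
    """Highest expected failures avoided per kg of spare mass first."""
    return sorted(parts, key=lambda p: p[2] / p[1], reverse=True)

for name, mass, fails in spares_priority(components):
    print(f"{name}: {fails / mass:.2f} expected failures avoided per kg")
```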
VerSeDa: vertebrate secretome database.
Cortazar, Ana R; Oguiza, José A; Aransay, Ana M; Lavín, José L
2017-01-01
Based on current tools, de novo secretome (the full set of proteins secreted by an organism) prediction is a time-consuming bioinformatic task that requires a multifactorial analysis in order to obtain reliable in silico predictions. Hence, to accelerate this process and offer researchers a reliable repository of secretome information for vertebrates and model organisms, we have developed VerSeDa (Vertebrate Secretome Database). This freely available database stores information about proteins that are predicted to be secreted through classical and non-classical mechanisms, for the wide range of vertebrate species deposited at the NCBI, UCSC and ENSEMBL sites. To our knowledge, VerSeDa is the only state-of-the-art database designed to store secretome data for multiple vertebrate genomes, thus saving a significant amount of time otherwise spent predicting protein features, which can instead be retrieved directly from this repository. VerSeDa is freely available at http://genomics.cicbiogune.es/VerSeDa/index.php. © The Author(s) 2017. Published by Oxford University Press.
Li, Yuanfang; Zhou, Zhiwei
2016-02-01
Precision medicine is a new medical concept and model based on personalized medicine, rapid progress in genome sequencing technology, and the cross-application of bioinformatics and big-data science. Precision medicine improves the diagnosis and treatment of gastric cancer through deeper analyses of its characteristics, pathogenesis and other core issues. Clinical cancer databases are important for promoting the development of precision medicine, so close attention must be paid to their construction and management. The clinical database of the Sun Yat-sen University Cancer Center is composed of a medical record database, a blood specimen bank, a tissue bank and a medical imaging database. To ensure the good quality of the database, its design and management should follow a strict standard operating procedure (SOP) model. Data sharing is an important way to improve medical research in the era of medical big data; the construction and management of clinical databases must likewise be strengthened and innovated.
Frost, Rachael; Levati, Sara; McClurg, Doreen; Brady, Marian; Williams, Brian
2017-06-01
To systematically review methods for measuring adherence used in home-based rehabilitation trials and to evaluate their validity, reliability, and acceptability. In phase 1 we searched the CENTRAL database, NHS Economic Evaluation Database, and Health Technology Assessment Database (January 2000 to April 2013) to identify adherence measures used in randomized controlled trials of allied health professional home-based rehabilitation interventions. In phase 2 we searched the databases of MEDLINE, Embase, CINAHL, Allied and Complementary Medicine Database, PsycINFO, CENTRAL, ProQuest Nursing and Allied Health, and Web of Science (inception to April 2015) for measurement property assessments for each measure. Studies assessing the validity, reliability, or acceptability of adherence measures. Two reviewers independently extracted data on participant and measure characteristics, measurement properties evaluated, evaluation methods, and outcome statistics and assessed study quality using the COnsensus-based Standards for the selection of health Measurement INstruments checklist. In phase 1 we included 8 adherence measures (56 trials). In phase 2, from the 222 measurement property assessments identified in 109 studies, 22 high-quality measurement property assessments were narratively synthesized. Low-quality studies were used as supporting data. StepWatch Activity Monitor validly and acceptably measured short-term step count adherence. The Problematic Experiences of Therapy Scale validly and reliably assessed adherence to vestibular rehabilitation exercises. Adherence diaries had moderately high validity and acceptability across limited populations. The Borg 6 to 20 scale, Bassett and Prapavessis scale, and Yamax CW series had insufficient validity. Low-quality evidence supported use of the Joint Protection Behaviour Assessment. Polar A1 series heart monitors were considered acceptable by 1 study. Current rehabilitation adherence measures are limited. 
Some possess promising validity and acceptability for certain parameters of adherence, situations, and populations and should be used in these situations. Rigorous evaluation of adherence measures in a broader range of populations is needed. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
MACSIMS : multiple alignment of complete sequences information management system
Thompson, Julie D; Muller, Arnaud; Waterhouse, Andrew; Procter, Jim; Barton, Geoffrey J; Plewniak, Frédéric; Poch, Olivier
2006-01-01
Background In the post-genomic era, systems-level studies are being performed that seek to explain complex biological systems by integrating diverse resources from fields such as genomics, proteomics or transcriptomics. New information management systems are now needed for the collection, validation and analysis of the vast amount of heterogeneous data available. Multiple alignments of complete sequences provide an ideal environment for the integration of this information in the context of the protein family. Results MACSIMS is a multiple alignment-based information management program that combines the advantages of both knowledge-based and ab initio sequence analysis methods. Structural and functional information is retrieved automatically from the public databases. In the multiple alignment, homologous regions are identified and the retrieved data are evaluated and propagated from known to unknown sequences within these reliable regions. In a large-scale evaluation, the specificity of the propagated sequence features is estimated to be >99%, i.e. very few false positive predictions are made. MACSIMS is then used to characterise mutations in a test set of 100 proteins that are known to be involved in human genetic diseases. The number of sequence features associated with these proteins was increased by 60%, compared to the features available in the public databases. An XML format output file allows automatic parsing of the MACSIMS results, while a graphical display using the JalView program allows manual analysis. Conclusion MACSIMS is a new information management system that incorporates detailed analyses of protein families at the structural, functional and evolutionary levels. MACSIMS thus provides a unique environment that facilitates knowledge extraction and the presentation of the most pertinent information to the biologist. A web server and the source code are available at . PMID:16792820
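The propagation step can be sketched as follows. This is a minimal illustration of the general idea only (propagating a feature solely through conserved, ungapped alignment columns), not the actual MACSIMS implementation; the alignment, identity threshold, and function names are all invented for the example.

```python
# Hypothetical sketch: propagate a sequence feature from an annotated
# ("known") sequence to unannotated sequences, but only inside alignment
# columns that are sufficiently conserved (a stand-in for the "reliable
# regions" idea). Thresholds and data are illustrative.

def column_identity(alignment, col):
    """Fraction of the most common non-gap residue in a column."""
    residues = [seq[col] for seq in alignment if seq[col] != "-"]
    if not residues:
        return 0.0
    return max(residues.count(r) for r in set(residues)) / len(residues)

def propagate_feature(alignment, source_idx, feature_cols, min_identity=0.8):
    """Return target-sequence indices that inherit the feature: every
    feature column must be conserved and ungapped in the target."""
    reliable = {c for c in feature_cols
                if column_identity(alignment, c) >= min_identity}
    if reliable != set(feature_cols):
        return []  # feature lies partly outside a reliable region
    targets = []
    for i, seq in enumerate(alignment):
        if i != source_idx and all(seq[c] != "-" for c in feature_cols):
            targets.append(i)
    return targets

alignment = ["MKTA-LW",
             "MKTAGLW",
             "MQTA-LF"]
print(propagate_feature(alignment, 0, [0, 2, 3]))  # → [1, 2]
```

A feature spanning a poorly conserved column (here column 1, where K/K/Q gives 67% identity) is deliberately not propagated, mirroring the restriction to reliable regions.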
Managing Written Directives: A Software Solution to Streamline Workflow.
Wagner, Robert H; Savir-Baruch, Bital; Gabriel, Medhat S; Halama, James R; Bova, Davide
2017-06-01
A written directive is required by the U.S. Nuclear Regulatory Commission for any use of ¹³¹I above 1.11 MBq (30 μCi) and for patients receiving radiopharmaceutical therapy. This requirement has also been adopted and must be enforced by the agreement states. As the introduction of new radiopharmaceuticals increases therapeutic options in nuclear medicine, time spent on regulatory paperwork also increases. The pressure of managing these time-consuming regulatory requirements may heighten the potential for inaccurate or incomplete directive data and subsequent regulatory violations. To improve on the paper-trail method of directive management, we created a software tool using a Health Insurance Portability and Accountability Act (HIPAA)-compliant database. This software allows for secure data-sharing among physicians, technologists, and managers while saving time, reducing errors, and eliminating the possibility of loss and duplication. Methods: The software tool was developed using Visual Basic, which is part of the Visual Studio development environment for the Windows platform. Patient data are deposited in an Access database on a local HIPAA-compliant secure server or hard disk. Once a working version had been developed, it was installed at our institution and used to manage directives. Updates and modifications of the software were released regularly until no more significant problems were found with its operation. Results: The software has been used at our institution for over 2 y and has reliably kept track of all directives. All physicians and technologists use the software daily and find it superior to paper directives. They can retrieve active directives at any stage of completion, as well as completed directives. Conclusion: We have developed a software solution for the management of written directives that streamlines and structures the departmental workflow.
This solution saves time, centralizes the information for all staff to share, and decreases confusion about the creation, completion, filing, and retrieval of directives. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
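As a rough illustration of the kind of structure such a tool imposes (not the actual software, which was written in Visual Basic against an Access database), a directive record can be modeled as a fixed sequence of workflow stages with out-of-order transitions rejected, which is exactly where paper trails tend to fail. The stage names and fields below are invented for the example.

```python
# Hypothetical sketch of directive workflow enforcement: a directive
# advances through fixed stages in order, and skipping a stage raises
# an error. Stage names, fields, and values are illustrative only.
STAGES = ["created", "signed", "administered", "completed", "filed"]

class Directive:
    def __init__(self, patient, isotope, activity_mbq):
        self.patient = patient
        self.isotope = isotope
        self.activity_mbq = activity_mbq
        self.stage = "created"

    def advance(self, new_stage):
        # Only the immediately next stage is a legal transition.
        if STAGES.index(new_stage) != STAGES.index(self.stage) + 1:
            raise ValueError(f"cannot go from {self.stage} to {new_stage}")
        self.stage = new_stage

d = Directive("DOE^JOHN", "I-131", 5550.0)
d.advance("signed")
d.advance("administered")
print(d.stage)  # → administered
```

Because the state machine lives in the database-backed application rather than on paper, every directive's stage is queryable by all staff at once, which is the sharing benefit the abstract describes.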
Bowie, Paul; Price, Julie; Hepworth, Neil; Dinwoodie, Mark; McKay, John
2015-11-27
To analyse a medical protection organisation's database to identify hazards related to general practice systems for ordering laboratory tests, managing test results and communicating test result outcomes to patients. To integrate these data with other published evidence sources to inform design of a systems-based conceptual model of related hazards. A retrospective database analysis. General practices in the UK and Ireland. 778 UK and Ireland general practices participating in a medical protection organisation's clinical risk self-assessment (CRSA) programme from January 2008 to December 2014. Proportion of practices with system risks; categorisation of identified hazards; most frequently occurring hazards; development of a conceptual model of hazards; and potential impacts on health, well-being and organisational performance. CRSA visits were undertaken to 778 UK and Ireland general practices of which a range of systems hazards were recorded across the laboratory test ordering and results management systems in 647 practices (83.2%). A total of 45 discrete hazard categories were identified with a mean of 3.6 per practice (SD=1.94). The most frequently occurring hazard was the inadequate process for matching test requests and results received (n=350, 54.1%). Of the 1604 instances where hazards were recorded, the most frequent was at the 'postanalytical test stage' (n=702, 43.8%), followed closely by 'communication outcomes issues' (n=628, 39.1%). Based on arguably the largest data set currently available on the subject matter, our study findings shed new light on the scale and nature of hazards related to test results handling systems, which can inform future efforts to research and improve the design and reliability of these systems. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-24
... Transmission Vegetation Management Reliability Standard; Notice of Compliance Filing. Take notice that on July 12, 2013, the North American Electric Reliability Corporation (NERC), pursuant to Order No. 777 ... Reliability Standard FAC-003-2 to its Web site. ... Revisions to Reliability Standard for Transmission...
Lindsley, Kristina; Li, Tianjing; Ssemanda, Elizabeth; Virgili, Gianni; Dickersin, Kay
2016-04-01
Are existing systematic reviews of interventions for age-related macular degeneration incorporated into clinical practice guidelines? High-quality systematic reviews should be used to underpin evidence-based clinical practice guidelines and clinical care. We examined the reliability of systematic reviews of interventions for age-related macular degeneration (AMD) and described the main findings of reliable reviews in relation to clinical practice guidelines. Eligible publications were systematic reviews of the effectiveness of treatment interventions for AMD. We searched a database of systematic reviews in eyes and vision without language or date restrictions; the database was up to date as of May 6, 2014. Two authors independently screened records for eligibility and abstracted and assessed the characteristics and methods of each review. We classified reviews as reliable when they reported eligibility criteria, comprehensive searches, methodologic quality of included studies, appropriate statistical methods for meta-analysis, and conclusions based on results. We mapped treatment recommendations from the American Academy of Ophthalmology (AAO) Preferred Practice Patterns (PPPs) for AMD to systematic reviews and citations of reliable systematic reviews to support each treatment recommendation. Of 1570 systematic reviews in our database, 47 met inclusion criteria; most targeted neovascular AMD and investigated anti-vascular endothelial growth factor (VEGF) interventions, dietary supplements, or photodynamic therapy. We classified 33 (70%) reviews as reliable. The quality of reporting varied, with criteria for reliable reporting met more often by Cochrane reviews and reviews whose authors disclosed conflicts of interest. Anti-VEGF agents and photodynamic therapy were the only interventions identified as effective by reliable reviews. 
Of 35 treatment recommendations extracted from the PPPs, 15 could have been supported with reliable systematic reviews; however, only 1 recommendation cited a reliable intervention systematic review. No reliable systematic review was identified for 20 treatment recommendations, highlighting areas of evidence gaps. For AMD, reliable systematic reviews exist for many treatment recommendations in the AAO PPPs and should be cited to support these recommendations. We also identified areas where no high-level evidence exists. Mapping clinical practice guidelines to existing systematic reviews is one way to highlight areas where evidence generation or evidence synthesis is either available or needed. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
The future application of GML database in GIS
NASA Astrophysics Data System (ADS)
Deng, Yuejin; Cheng, Yushu; Jing, Lianwen
2006-10-01
In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. More and more applications in geospatial data sharing and interoperability now depend on GML. The primary purpose of GML is the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, applications face the problem of how to organize and access large volumes of GML data effectively, and research on GML databases focuses on this problem. The efficient storage of GML data is a hot topic in the GIS community today. A GML Database Management System (GDBMS) deals mainly with the storage and management of GML data. Two types of XML database are distinguished: native XML databases and XML-enabled databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used to manage GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages, and management systems, and then move on to GML databases. Finally, the future prospects of GML databases in GIS applications are presented.
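The XML-enabled approach the paper mentions can be sketched as follows: geometry is parsed out of a GML fragment and shredded into a relational table, so spatial attributes become queryable with ordinary SQL. The fragment, table schema, and query are assumptions for demonstration only, not part of any GDBMS described in the paper.

```python
# Illustrative sketch of an "XML-enabled" GML storage path: parse a
# (simplified) GML Point and index its coordinates in SQLite.
import sqlite3
import xml.etree.ElementTree as ET

GML = "{http://www.opengis.net/gml}"
fragment = """
<gml:Point xmlns:gml="http://www.opengis.net/gml" gml:id="p1">
  <gml:pos>24.95 60.17</gml:pos>
</gml:Point>
"""

root = ET.fromstring(fragment)
x, y = map(float, root.find(f"{GML}pos").text.split())

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE features (gml_id TEXT PRIMARY KEY, x REAL, y REAL)")
db.execute("INSERT INTO features VALUES (?, ?, ?)",
           (root.get(f"{GML}id"), x, y))

# A simple bounding-box query over the extracted coordinates.
row = db.execute("SELECT gml_id FROM features "
                 "WHERE x BETWEEN 20 AND 30 AND y BETWEEN 55 AND 65").fetchone()
print(row[0])  # → p1
```

A native XML database would instead store and index the fragment itself; the trade-off between the two designs is precisely the storage question the paper reviews.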
Mass and Reliability System (MaRS)
NASA Technical Reports Server (NTRS)
Barnes, Sarah
2016-01-01
The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk, providing system safety, and lowering risk for space programs from ground to space. The S&MA is divided into 4 divisions: the Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, myself and Arun Aruljothi, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) and Probabilistic Risk Assessment (PRA) to ensure decisions concerning risks are informed, vehicles are safe and reliable, and program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long-duration human spaceflight programs. For space missions, payload is a critical concept; balancing what hardware can be replaced at the component level versus by orbital replacement units (ORUs) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called the Mass and Reliability System, or MaRS. International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of the historical context and the environmental similarities to a spaceflight mission. MaRS uses a combination of systems: the International Space Station PART system for failure data, the Vehicle Master Database (VMDB) for ORUs and components, the Maintenance & Analysis Data Set (MADS) for operating hours and other pertinent data, and the Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic application.
Once populated, the Excel spreadsheet comprises information on ISS components, including operating hours, random/nonrandom failures, software/hardware failures, quantity, orbital replaceable units (ORUs), date of placement, unit weight, frequency of part, etc. The motivation for creating such a database is the development of a mass/reliability parametric model to estimate the mass required for replacement parts. Once complete, engineers working on future spaceflight missions will have access to mean-time-to-failure data for parts along with their masses; this will support sound decisions for long-duration spaceflight missions.
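The kind of parametric estimate MaRS is meant to feed can be sketched simply: each part's expected number of failures over a mission (quantity times mission hours divided by MTBF) multiplied by its unit mass, summed over the parts list. The part records and all numbers below are invented for illustration; this is not the actual MaRS model.

```python
# Hypothetical sketch of a mass/reliability parametric spares estimate:
# expected spares mass = Σ qty · (mission hours / MTBF) · unit mass.
# All part data below are invented.

def expected_spares_mass(parts, mission_hours):
    total = 0.0
    for p in parts:
        expected_failures = p["qty"] * mission_hours / p["mtbf_hours"]
        total += expected_failures * p["unit_mass_kg"]
    return total

parts = [
    {"name": "pump ORU",   "qty": 2, "mtbf_hours": 50_000, "unit_mass_kg": 12.0},
    {"name": "fan",        "qty": 4, "mtbf_hours": 20_000, "unit_mass_kg": 1.5},
    {"name": "controller", "qty": 1, "mtbf_hours": 80_000, "unit_mass_kg": 3.0},
]

# A 3-year (~26,280 h) mission:
print(round(expected_spares_mass(parts, 26_280), 2))  # → 21.48
```

Even this toy version shows why combining mass and reliability in one database matters: neither MTBF nor unit mass alone ranks which spares dominate the launch-mass budget.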
Database Systems. Course Three. Information Systems Curriculum.
ERIC Educational Resources Information Center
O'Neil, Sharon Lund; Everett, Donna R.
This course is the third of seven in the Information Systems curriculum. The purpose of the course is to familiarize students with database management concepts and standard database management software. Databases and their roles, advantages, and limitations are explained. An overview of the course sets forth the condition and performance standard…
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
NASA Astrophysics Data System (ADS)
Haddam, N. A.; Michel, E.; Siani, G.; Cortese, G.; Bostock, H. C.; Duprat, J. M.; Isguder, G.
2016-06-01
We present an improved database of planktonic foraminiferal census counts from the Southern Hemisphere oceans (SHO) from 15°S to 64°S. The SHO database combines three existing databases. Using this SHO database, we investigated dissolution biases that might affect faunal census counts. We suggest depth/ΔCO₃²⁻ thresholds of ~3800 m (ΔCO₃²⁻ ≈ -10 to -5 µmol/kg) for the Pacific and Indian Oceans and ~4000 m (ΔCO₃²⁻ ≈ 0 to 10 µmol/kg) for the Atlantic Ocean, beyond which core-top assemblages can be affected by dissolution and are less reliable for paleo-sea surface temperature (SST) reconstructions. We removed all core tops beyond these thresholds from the SHO database. The resulting database has 598 core tops and is able to reconstruct past SST variations from 2°C to 25.5°C, with a root mean square error of 1.00°C for annual temperatures. To inspect how dissolution affects SST reconstruction quality, we tested the database with two "leave-one-out" tests, with and without the deep core tops. We used this database to reconstruct summer SST (SSST) over the last 20 ka on the Southeast Pacific core MD07-3100, using the Modern Analog Technique. Compared with the SSST reconstructed using each of the three databases from which the SHO database was compiled, the SHO-based reconstruction is more reliable, as its dissimilarity values are the lowest. This underscores the importance of a bias-free, geographically rich database. We leave this data set open to future additions; new core tops must be carefully selected, with their chronological frameworks established and evidence of dissolution assessed.
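The Modern Analog Technique used with such a core-top database can be sketched in a few lines: the fossil faunal assemblage is compared against every modern assemblage using a dissimilarity metric (the squared chord distance is a common choice), and the SST estimate is the mean SST of the k closest analogs. The three-taxon assemblages and SST values below are invented for illustration and are far simpler than real census counts.

```python
# Minimal sketch of the Modern Analog Technique (MAT): estimate SST as
# the mean SST of the k modern core-top analogs least dissimilar to a
# fossil assemblage. All assemblage data are invented.
import math

def squared_chord(a, b):
    """Squared chord distance between two relative-abundance vectors."""
    return sum((math.sqrt(x) - math.sqrt(y)) ** 2 for x, y in zip(a, b))

def mat_sst(fossil, modern_db, k=3):
    """modern_db: list of (assemblage, sst) core-top pairs."""
    ranked = sorted(modern_db, key=lambda rec: squared_chord(fossil, rec[0]))
    return sum(sst for _, sst in ranked[:k]) / k

modern_db = [
    ([0.60, 0.30, 0.10], 18.0),
    ([0.50, 0.40, 0.10], 16.5),
    ([0.20, 0.30, 0.50], 8.0),
    ([0.10, 0.20, 0.70], 5.5),
]
fossil = [0.55, 0.35, 0.10]
print(mat_sst(fossil, modern_db, k=2))  # → 17.25
```

The dissimilarity values of the selected analogs are exactly the quantity the abstract uses to compare databases: lower distances mean the modern database contains closer analogs, and hence a more reliable reconstruction.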
CERTS: Consortium for Electric Reliability Technology Solutions - Research Highlights
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eto, Joseph
2003-07-30
Historically, the U.S. electric power industry was vertically integrated, and utilities were responsible for system planning, operations, and reliability management. As the nation moves to a competitive market structure, these functions have been disaggregated, and no single entity is responsible for reliability management. As a result, new tools, technologies, systems, and management processes are needed to manage the reliability of the electricity grid. However, a number of simultaneous trends prevent electricity market participants from pursuing development of these reliability tools: utilities are preoccupied with restructuring their businesses, research funding has declined, and the formation of Independent System Operators (ISOs) and Regional Transmission Organizations (RTOs) to operate the grid means that control of transmission assets is separate from ownership of these assets; at the same time, business uncertainty and changing regulatory policies have created a climate in which needed investment in transmission infrastructure and tools for reliability management has dried up. To address the resulting emerging gaps in reliability R&D, CERTS has undertaken much-needed public interest research on reliability technologies for the electricity grid. CERTS' vision is to: (1) transform the electricity grid into an intelligent network that can sense and respond automatically to changing flows of power and emerging problems; (2) enhance reliability management through market mechanisms, including transparency of real-time information on the status of the grid; (3) empower customers to manage their energy use and reliability needs in response to real-time market price signals; and (4) seamlessly integrate distributed technologies--including those for generation, storage, controls, and communications--to support the reliability needs of both the grid and individual customers.
The reliability-quality relationship for quality systems and quality risk management.
Claycamp, H Gregg; Rahaman, Faiad; Urban, Jason M
2012-01-01
Engineering reliability typically refers to the probability that a system, or any of its components, will perform a required function for a stated period of time and under specified operating conditions. As such, reliability is inextricably linked with time-dependent quality concepts, such as maintaining a state of control and predicting the chances of losses from failures for quality risk management. Two popular current good manufacturing practice (cGMP) and quality risk management tools, failure mode and effects analysis (FMEA) and root cause analysis (RCA) are examples of engineering reliability evaluations that link reliability with quality and risk. Current concepts in pharmaceutical quality and quality management systems call for more predictive systems for maintaining quality; yet, the current pharmaceutical manufacturing literature and guidelines are curiously silent on engineering quality. This commentary discusses the meaning of engineering reliability while linking the concept to quality systems and quality risk management. The essay also discusses the difference between engineering reliability and statistical (assay) reliability. The assurance of quality in a pharmaceutical product is no longer measured only "after the fact" of manufacturing. Rather, concepts of quality systems and quality risk management call for designing quality assurance into all stages of the pharmaceutical product life cycle. Interestingly, most assays for quality are essentially static and inform product quality over the life cycle only by being repeated over time. Engineering process reliability is the fundamental concept that is meant to anticipate quality failures over the life cycle of the product. Reliability is a well-developed theory and practice for other types of manufactured products and manufacturing processes. Thus, it is well known to be an appropriate index of manufactured product quality. 
This essay discusses the meaning of reliability and its linkages with quality systems and quality risk management.
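The definition of engineering reliability the essay opens with (the probability that a system performs its required function for a stated time) has a standard quantitative form worth making concrete. Under the common constant-failure-rate assumption, a unit's reliability is R(t) = exp(-t/MTBF), and a process whose stages must all work (a series system) multiplies the reliabilities of its stages. The MTBF values below are invented for illustration.

```python
# Toy illustration of engineering reliability: R(t) = exp(-t/MTBF) for
# a constant-failure-rate unit, and the product rule for a series
# system. All MTBF values are invented.
import math

def reliability(t_hours, mtbf_hours):
    return math.exp(-t_hours / mtbf_hours)

def series_system(t_hours, mtbfs):
    out = 1.0
    for m in mtbfs:
        out *= reliability(t_hours, m)
    return out

# One year of continuous operation (8760 h) for a three-stage process:
print(round(series_system(8760, [100_000, 50_000, 200_000]), 4))  # → 0.7359
```

The example makes the essay's point about time-dependence tangible: a static assay repeated at one moment says nothing about this curve, whereas a reliability model predicts the chance of staying in a state of control over the whole product life cycle.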
DICOM-compliant PACS with CD-based image archival
NASA Astrophysics Data System (ADS)
Cox, Robert D.; Henri, Christopher J.; Rubin, Richard K.; Bret, Patrice M.
1998-07-01
This paper describes the design and implementation of a low-cost PACS conforming to the DICOM 3.0 standard. The goal was to provide an efficient image archival and management solution on a heterogeneous hospital network as a basis for filmless radiology. The system follows a distributed, client/server model and was implemented at a fraction of the cost of a commercial PACS. It provides reliable archiving on recordable CD and allows access to digital images throughout the hospital and on the Internet. Dedicated servers have been designed for short-term storage, CD-based archival, data retrieval and remote data access or teleradiology. The short-term storage devices provide DICOM storage and query/retrieve services to scanners and workstations and approximately twelve weeks of 'on-line' image data. The CD-based archival and data retrieval processes are fully automated with the exception of CD loading and unloading. The system employs lossless compression on both short- and long-term storage devices. All servers communicate via the DICOM protocol in conjunction with both local and 'master' SQL-patient databases. Records are transferred from the local to the master database independently, ensuring that storage devices will still function if the master database server cannot be reached. The system features rules-based work-flow management and WWW servers to provide multi-platform remote data access. The WWW server system is distributed on the storage, retrieval and teleradiology servers allowing viewing of locally stored image data directly in a WWW browser without the need for data transfer to a central WWW server. An independent system monitors disk usage, processes, network and CPU load on each server and reports errors to the image management team via email. The PACS was implemented using a combination of off-the-shelf hardware, freely available software and applications developed in-house.
The system has enabled filmless operation in CT, MR and ultrasound within the radiology department and throughout the hospital. The use of WWW technology has enabled the development of an intuitive web-based teleradiology and image management solution that provides complete access to image data.
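The decoupled local/master database flow the paper describes, where records commit locally first and are forwarded to the master independently so that storage keeps working when the master is unreachable, can be sketched with a simple "synced" flag. The schema and field names below are illustrative assumptions, not the actual PACS schema.

```python
# Sketch of independent local-to-master record transfer: studies are
# always committed locally with synced=0; a separate pass forwards
# unsynced rows to the master and marks them. Schema is invented.
import sqlite3

def make_db():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE studies (uid TEXT PRIMARY KEY, patient TEXT,"
               " synced INTEGER DEFAULT 0)")
    return db

def store_local(local, uid, patient):
    # Local storage never depends on the master being reachable.
    local.execute("INSERT INTO studies (uid, patient) VALUES (?, ?)",
                  (uid, patient))

def sync_to_master(local, master):
    """Forward unsynced rows; returns how many were transferred."""
    rows = local.execute(
        "SELECT uid, patient FROM studies WHERE synced = 0").fetchall()
    for uid, patient in rows:
        master.execute("INSERT OR REPLACE INTO studies (uid, patient, synced)"
                       " VALUES (?, ?, 1)", (uid, patient))
        local.execute("UPDATE studies SET synced = 1 WHERE uid = ?", (uid,))
    return len(rows)

local, master = make_db(), make_db()
store_local(local, "1.2.840.1", "DOE^JANE")
store_local(local, "1.2.840.2", "ROE^RICHARD")
print(sync_to_master(local, master))  # → 2
print(sync_to_master(local, master))  # → 0 (nothing left to forward)
```

If the forwarding pass fails mid-run, unmarked rows are simply picked up on the next pass, which is the fault-tolerance property the design is after.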
NASA Astrophysics Data System (ADS)
Brady, M.
2016-12-01
During his historic trip to Alaska in 2015, U.S. President Barack Obama announced a collaborative effort to update maps of the Arctic region in anticipation of increased maritime access and resource development and to support climate resilience. Included in this effort is the development of an Arctic-wide satellite-based digital elevation model (DEM) to provide a baseline for monitoring landscape change such as coastal erosion. Focusing on Alaska's North Slope, an objective of this study is to transform emerging Arctic environment spatial data products, including the new DEM, into information that can support local-level planning and decision-making in the face of extreme coastal erosion and related environmental threats. In pursuit of this, four workshops were held in 2016 in three North Slope villages highly exposed to coastal erosion. The first workshop, with approximately 10 managers in Barrow, solicited feedback on an erosion risk database developed in a previous research stage and installed on the North Slope's planning Web portal. The database includes a physical risk indicator based on factors such as historical erosion and the effects of sea ice loss, summarized at asset locations. After a demonstration of the database, participants discussed usability aspects such as data reliability. The focus of the mapping workshops in Barrow and the two smaller villages of Wainwright and Kaktovik was to verify and expand the risk database by interactively mapping erosion observations and community asset impacts. Using coded stickers and paper maps of the shoreline showing USGS erosion rates, a total of 50 participants provided feedback on erosion data accuracy. Approximately 25 of the 50 participants were elders and hunters, who also provided in-depth community risk information.
The workshop with managers confirmed physical risk factors used in the risk database, and revealed that the information may be relied upon to support some development decisions and better engage developers about erosion risks. Results from the three mapping workshops revealed that most participants agree that the USGS data are consistent with their observations. Also, in-depth contributions from elders and hunters confirmed that there is a need to monitor loss of specific assets including hunting grounds and historic places and associated community impacts.
Jiang, Chaozhe; Xu, Yibo; Wen, Chao; Chen, Dilin
2017-12-19
Anti-runaway prevention of rolling stocks at a railway station is essential in railway safety management. The traditional track skates for anti-runaway prevention of rolling stocks have some disadvantages since they are operated and monitored completely manually. This paper describes an anti-runaway prevention system (ARPS) based on intelligent track skates equipped with sensors and real-time monitoring and management system. This system, which has been updated from the traditional track skates, comprises four parts: intelligent track skates, a signal reader, a database station, and a monitoring system. This system can monitor the real-time situation of track skates without changing their workflow for anti-runaway prevention, and thus realize the integration of anti-runaway prevention information management. This system was successfully tested and practiced at Sunjia station in Harbin Railway Bureau in 2014, and the results confirmed that the system showed 100% accuracy in reflecting the usage status of the track skates. The system could meet practical demands, as it is highly reliable and supports long-distance communication.
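The monitoring idea behind such a system can be sketched abstractly: each intelligent skate periodically reports its identity and whether it is applied, the reader collects the reports, and the management side compares them against the set of tracks that are supposed to be protected, flagging any mismatch. All names and the report format below are invented; the actual ARPS protocol is not described at this level of detail in the abstract.

```python
# Illustrative sketch of anti-runaway status checking: compare skate
# status reports against the tracks that must be protected and return
# any unprotected tracks. Data shapes and names are invented.
def check_protection(required_tracks, reports):
    """reports: dict skate_id -> (track, applied). Returns the sorted
    list of required tracks with no applied skate."""
    protected = {track for track, applied in reports.values() if applied}
    return sorted(required_tracks - protected)

reports = {"SK-01": ("track-3", True),   # skate applied on track 3
           "SK-02": ("track-5", False)}  # skate removed from track 5
print(check_protection({"track-3", "track-5"}, reports))  # → ['track-5']
```

Because the check runs on reported state rather than manual inspection, it preserves the existing skate workflow while making protection gaps visible in real time, which is the integration point the paper emphasizes.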
Williams, Stephen P; Malik, Humza T; Nicolay, Christopher R; Chaturvedi, Sankalp; Darzi, Ara; Purkayastha, Sanjay
2018-04-01
In response to an increasing body of evidence on the importance of employee health and well-being (HWB) within health care, there has been a shift in focus from both policymakers and individual organizations toward improving health care employee HWB. However, there is a paucity of evidence regarding the impact and value of specific HWB interventions within a health care setting. The aim of this article was to systematically review the literature on this topic utilizing the EMBASE, Global Health, Health Management Information Consortium, MEDLINE, and PsycINFO databases. Forty-four articles were identified and, owing to a large degree of heterogeneity, were considered under different headings according to the type of intervention employed: namely, those evaluating changing ways of working, physical health promotion, complementary and alternative medicine, and stress management interventions, and those utilizing multimodal interventions. Our results consider both the efficacy and reliability of each intervention in turn and reflect on the importance of careful study design and measure selection when evaluating the impact of HWB interventions. © 2017 American Society for Healthcare Risk Management of the American Hospital Association.
Jiang, Chaozhe; Xu, Yibo; Chen, Dilin
2017-01-01
Anti-runaway prevention of rolling stocks at a railway station is essential in railway safety management. The traditional track skates for anti-runaway prevention of rolling stocks have some disadvantages since they are operated and monitored completely manually. This paper describes an anti-runaway prevention system (ARPS) based on intelligent track skates equipped with sensors and real-time monitoring and management system. This system, which has been updated from the traditional track skates, comprises four parts: intelligent track skates, a signal reader, a database station, and a monitoring system. This system can monitor the real-time situation of track skates without changing their workflow for anti-runaway prevention, and thus realize the integration of anti-runaway prevention information management. This system was successfully tested and practiced at Sunjia station in Harbin Railway Bureau in 2014, and the results confirmed that the system showed 100% accuracy in reflecting the usage status of the track skates. The system could meet practical demands, as it is highly reliable and supports long-distance communication. PMID:29257108
Kalariya, Nilesh; Brassil, Kelly
2015-12-01
After allogeneic hematopoietic stem cell transplantation, one of the major barriers to clinical management of acute graft-versus-host disease (aGVHD) is a lack of reliable and validated noninvasive tests for diagnosis and prognosis. Proteomic studies have indicated a strong correlation between the level of certain body fluid proteins and clinical outcomes after aGVHD. Specific proteins have been identified that could be robust biomarkers for overall prognosis or for differential diagnosis of target organs in aGVHD. The authors aimed to evaluate the literature related to proteomic biomarkers that are indicated in the occurrence, severity, and management of aGVHD. PubMed and CINAHL® databases were searched for articles published from January 2004 to June 2014. Eight articles matching the inclusion criteria were identified, and the findings of these articles were summarized and their clinical implications noted. Proteomics appears to be a promising tool to assist oncology nurses and nurse practitioners with patient education, develop personalized plans of care to reduce morbidity, initiate communication regarding end-of-life decisions, and improve overall nursing management of the population of patients with aGVHD.
Burnham, J F; Shearer, B S; Wall, J C
1992-01-01
Librarians have used bibliometrics for many years to assess collections and to provide data for making selection and deselection decisions. With the advent of new technology--specifically, CD-ROM databases and reprint file database management programs--new cost-effective procedures can be developed. This paper describes a recent multidisciplinary study conducted by two library faculty members and one allied health faculty member to test a bibliometric method that used the MEDLINE and CINAHL databases on CD-ROM and the Papyrus database management program to produce a new collection development methodology. PMID:1600424
Creating databases for biological information: an introduction.
Stein, Lincoln
2013-06-01
The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, relational databases, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system. Copyright 2013 by John Wiley & Sons, Inc.
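The trade-off this unit describes, flat files versus an indexed or relational store, can be illustrated with a minimal sketch using Python's built-in sqlite3 module (the strain catalog and field names below are invented for illustration):

```python
import sqlite3

# A hypothetical catalog of insertional-mutagenesis strains.
strains = [
    ("mut-001", "chrII", 104523),
    ("mut-002", "chrIV", 998210),
    ("mut-003", "chrII", 55731),
]

# Flat-file approach: every lookup is a linear scan over all records.
hits_flat = [s for s in strains if s[1] == "chrII"]

# Relational approach: the DBMS indexes the column and plans the query.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE strain (name TEXT, chromosome TEXT, position INTEGER)")
db.execute("CREATE INDEX idx_chrom ON strain (chromosome)")
db.executemany("INSERT INTO strain VALUES (?, ?, ?)", strains)
hits_sql = db.execute(
    "SELECT name FROM strain WHERE chromosome = ? ORDER BY position",
    ("chrII",),
).fetchall()

print(hits_sql)  # strains on chrII, ordered by insertion position
```

With three records the two approaches are indistinguishable; the point of moving to a DBMS, as the unit argues, is that the indexed query stays fast and expressive as the catalog grows.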
Earthquake forecasting studies using radon time series data in Taiwan
NASA Astrophysics Data System (ADS)
Walia, Vivek; Kumar, Arvind; Fu, Ching-Chou; Lin, Shih-Jung; Chou, Kuang-Wu; Wen, Kuo-Liang; Chen, Cheng-Hong
2017-04-01
For a few decades, a growing number of studies have shown the usefulness of seismogeochemical data as precursory signals of impending earthquakes, and radon has been identified as one of the most reliable geochemical precursors. Radon is recognized as a short-term precursor and is being monitored in many countries. This study aims at developing an effective earthquake forecasting system by inspecting long-term radon time series data obtained from a network of radon monitoring stations established along different faults of Taiwan. Continuous radon time series data have been recorded for earthquake studies, and some significant variations associated with strong earthquakes have been observed. The data are also examined to evaluate earthquake precursory signals against environmental factors; the study therefore also aims at the appraisal and filtering of these environmental parameters in order to create a real-time database that supports our earthquake precursory study. In recent years, an automated real-time database operating system has been developed using R, an open-source programming language, to carry out statistical computation on the data and improve data processing for earthquake precursory studies. To integrate the data with our working procedure, we use the popular open-source web application stack AMP (Apache, MySQL, and PHP) to create a website that effectively presents and helps us manage the real-time database.
Federated Web-accessible Clinical Data Management within an Extensible NeuroImaging Database
Keator, David B.; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R.; Bockholt, Jeremy; Grethe, Jeffrey S.
2010-01-01
Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site. PMID:20567938
Federated web-accessible clinical data management within an extensible neuroimaging database.
Ozyurt, I Burak; Keator, David B; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R; Bockholt, Jeremy; Grethe, Jeffrey S
2010-12-01
A Framework For Fault Tolerance In Virtualized Servers
2016-06-01
effects into the system. A decrease in performance, an expansion in total system size and weight, and a hike in system cost can be counted in... This benefit also shines out in terms of reliability. How Data Guard synchronizes standby databases: primary and standby databases in Oracle Data
Implementation of a data management software system for SSME test history data
NASA Technical Reports Server (NTRS)
Abernethy, Kenneth
1986-01-01
The implementation of a software system for managing Space Shuttle Main Engine (SSME) test/flight historical data is presented. The software system uses the database management system RIM7 for primary data storage and routine data management, but includes several FORTRAN programs, described here, which provide customized access to the RIM7 database. The consolidation, modification, and transfer of data from the database THIST to the RIM7 database THISRM are discussed. The RIM7 utility modules for generating some standard reports from THISRM and performing routine updating and maintenance are briefly described. The FORTRAN accessing programs described include programs for the initial loading of large data sets into the database, capturing data from files for database inclusion, and producing specialized statistical reports which cannot be provided by the RIM7 report generator utility. An expert system tutorial, constructed using the expert system shell product INSIGHT2, is described. Finally, a potential expert system, which would analyze data in the database, is outlined. This system could also use INSIGHT2 and would take advantage of RIM7's compatibility with the microcomputer database system RBase 5000.
78 FR 22773 - Revisions to Reliability Standard for Transmission Vegetation Management; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-17
...; Order No. 777] Revisions to Reliability Standard for Transmission Vegetation Management; Correction... other requirements the North American Electric Reliability Corporation (NERC) needs to submit when modifying certain Reliability Standards. DATES: Effective on May 28, 2013. FOR FURTHER INFORMATION CONTACT...
hEIDI: An Intuitive Application Tool To Organize and Treat Large-Scale Proteomics Data.
Hesse, Anne-Marie; Dupierris, Véronique; Adam, Claire; Court, Magali; Barthe, Damien; Emadali, Anouk; Masselon, Christophe; Ferro, Myriam; Bruley, Christophe
2016-10-07
Advances in high-throughput proteomics have led to a rapid increase in the number, size, and complexity of the associated data sets. Managing and extracting reliable information from such large series of data sets require the use of dedicated software organized in a consistent pipeline to reduce, validate, exploit, and ultimately export data. The compilation of multiple mass-spectrometry-based identification and quantification results obtained in the context of a large-scale project represents a real challenge for developers of bioinformatics solutions. In response to this challenge, we developed a dedicated software suite called hEIDI to manage and combine both identifications and semiquantitative data related to multiple LC-MS/MS analyses. This paper describes how, through a user-friendly interface, hEIDI can be used to compile analyses and retrieve lists of nonredundant protein groups. Moreover, hEIDI allows direct comparison of series of analyses, on the basis of protein groups, while ensuring consistent protein inference and also computing spectral counts. hEIDI ensures that validated results are compliant with MIAPE guidelines as all information related to samples and results is stored in appropriate databases. Thanks to the database structure, validated results generated within hEIDI can be easily exported in the PRIDE XML format for subsequent publication. hEIDI can be downloaded from http://biodev.extra.cea.fr/docs/heidi .
Separation and confirmation of showers
NASA Astrophysics Data System (ADS)
Neslušan, L.; Hajduková, M.
2017-02-01
Aims: Using the IAU MDC photographic, IAU MDC CAMS video, SonotaCo video, and EDMOND video databases, we aim to separate all provable annual meteor showers from each of these databases. We intend to reveal the problems inherent in this procedure and to answer the question of whether the databases are complete and the methods of separation used are reliable. We aim to evaluate the statistical significance of each separated shower; in this respect, we intend to give a list of reliably separated showers rather than a list of the maximum possible number of showers. Methods: To separate the showers, we simultaneously used two methods. Using two methods enables us to compare their results, which can indicate the reliability of the methods. To evaluate the statistical significance, we suggest a new method based on the ideas of the break-point method. Results: We give a compilation of the showers from all four databases using both methods. Using the first (second) method, we separated 107 (133) showers that are present in at least one of the databases used. These relatively low numbers are a consequence of discarding any candidate shower with a poor statistical significance. Most of the separated showers were identified as meteor showers from the IAU MDC list of all showers. Many of them were identified as several showers in the list, which proves that many showers have been named multiple times under different names. Conclusions: At present, a prevailing share of existing annual showers can be found in the data and confirmed when we use a combination of results from large databases. However, to obtain a complete list of showers, we need more-complete meteor databases than the most extensive current databases. We also still need a more sophisticated method to separate showers and evaluate their statistical significance.
Tables A.1 and A.2 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A40
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Yubin; Shankar, Mallikarjun; Park, Byung H.
Designing a database system for both efficient data management and data services has been one of the enduring challenges in the healthcare domain. In many healthcare systems, data services and data management are often viewed as two orthogonal tasks; data services refer to retrieval and analytic queries such as search, joins, statistical data extraction, and simple data mining algorithms, while data management refers to building error-tolerant and non-redundant database systems. The gap between service and management has resulted in rigid database systems and schemas that do not support effective analytics. We compose a rich graph structure from an abstracted healthcare RDBMS to illustrate how we can fill this gap in practice. We show how a healthcare graph can be automatically constructed from a normalized relational database using the proposed 3NF Equivalent Graph (3EG) transformation. We discuss a set of real-world graph queries, such as finding self-referrals, shared providers, and collaborative filtering, and evaluate their performance over a relational database and its 3EG-transformed graph. Experimental results show that the graph representation serves as multiple de-normalized tables, thus reducing complexity in a database and enhancing data accessibility for users. Based on this finding, we propose an ensemble framework of databases for healthcare applications.
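The core of the 3EG idea, rows become nodes and foreign-key pairs become edges, can be sketched in pure Python with a dict-based adjacency list. The tables and a "shared providers" query below are invented stand-ins; the paper's actual transformation rules are more involved:

```python
# Hypothetical normalized tables: patients, providers, and a visit table
# whose foreign keys link the two (a simplified stand-in for a 3NF schema).
patients = {1: "patient-A", 2: "patient-B"}
providers = {10: "provider-X", 11: "provider-Y"}
visits = [  # (patient_id, provider_id)
    (1, 10), (1, 11), (2, 10),
]

# Graph construction: one node per row, one edge per foreign-key pair.
graph = {name: set() for name in list(patients.values()) + list(providers.values())}
for pid, prid in visits:
    p, pr = patients[pid], providers[prid]
    graph[p].add(pr)
    graph[pr].add(p)

# A "shared providers" query becomes a neighborhood intersection
# instead of a self-join over the visit table.
shared = graph["patient-A"] & graph["patient-B"]
print(shared)  # providers seen by both patients
```

This illustrates the paper's claim that the graph acts like pre-joined, de-normalized tables: relationship queries read directly off adjacency rather than being assembled by the query planner.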
Groundwater availability study for Guam; goals, approach, products, and schedule of activities
Gingerich, Stephen B.; Jenson, John W.
2010-01-01
An expected significant population increase on Guam has raised concern about the sustainability of groundwater resources. In response, the U.S. Geological Survey (USGS), in collaboration with the University of Guam's Water and Environmental Research Institute of the Western Pacific (WERI) and with funding from the U.S. Marine Corps (USMC), is conducting a 3.5-year study to advance understanding of regional groundwater dynamics in the Northern Guam Lens Aquifer, provide a new estimate of groundwater recharge, and develop a numerical groundwater flow and transport model for northern Guam. Results of the study, including two USGS reports and a well database, will provide more reliable evaluations of the potential effects of groundwater production and help guide sustainable management of this critical resource.
[Big data, generalities and integration in radiotherapy].
Le Fèvre, C; Poty, L; Noël, G
2018-02-01
The many advances in computing systems for data handling (data collection, databases, storage) and in diagnostic and therapeutic possibilities have led to an increase and a diversification of the available data. In the field of health, big data offers the capacity to accelerate discoveries and to optimize the management of patients by combining large volumes of data with the creation of therapeutic models. In radiotherapy, the development of big data is attractive because the data are numerous and heterogeneous (demographics, radiomics, genomics, radiogenomics, etc.). The expectation is to predict the effectiveness and tolerance of radiation therapy. With these new concepts, still at a preliminary stage, it is possible to create personalized medicine that is ever more secure and reliable. Copyright © 2017. Published by Elsevier SAS.
Churcher, Frances P; Mills, Jeremy F; Forth, Adelle E
2016-08-01
Over the past few decades, many structured risk appraisal measures have been created to help assess and manage the risk of violence. The Two-Tiered Violence Risk Estimates Scale (TTV) is a measure designed to integrate an actuarial estimate of violence risk with critical risk management indicators. The current study examined the interrater reliability and predictive validity of the TTV in a sample of violent offenders (n = 120) over an average follow-up period of 17.75 years. The TTV was retrospectively scored and compared with the Violence Risk Appraisal Guide (VRAG), the Statistical Information on Recidivism Scale-Revised (SIR-R1), and the Psychopathy Checklist-Revised (PCL-R). Approximately 53% of the sample reoffended violently, with an overall recidivism rate of 74%. Although the VRAG was the strongest predictor of violent recidivism in the sample, the Actuarial Risk Estimates (ARE) scale of the TTV produced a small but significant effect. The Risk Management Indicators (RMI) produced nonsignificant area under the curve (AUC) values for all recidivism outcomes. Comparisons between measures using AUC values and Cox regression showed no statistical differences in predictive validity. The results of this research will inform the validation and reliability literature on the TTV and contribute to the overall risk assessment literature. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Edens, John F; Penson, Brittany N; Ruchensky, Jared R; Cox, Jennifer; Smith, Shannon Toney
2016-12-01
Published research suggests that most violence risk assessment tools have relatively high levels of interrater reliability, but recent evidence of inconsistent scores among forensic examiners in adversarial settings raises concerns about the "field reliability" of such measures. This study specifically examined the reliability of Violence Risk Appraisal Guide (VRAG) scores in Canadian criminal cases identified in the legal database LexisNexis. Over 250 reported cases were located that made mention of the VRAG, with 42 of these cases containing 2 or more scores that could be submitted to interrater reliability analyses. Overall, scores were skewed toward higher risk categories. The intraclass correlation (ICC(A,1)) was .66, with pairs of forensic examiners placing defendants into the same VRAG risk "bin" in 68% of the cases. For categorical risk statements (i.e., low, moderate, high), examiners provided converging assessment results in most instances (86%). In terms of potential predictors of rater disagreement, there was no evidence for adversarial allegiance in our sample. Rater disagreement in the scoring of one VRAG item (Psychopathy Checklist-Revised; Hare, 2003), however, strongly predicted rater disagreement in the scoring of the VRAG (r = .58). (PsycINFO Database Record (c) 2016 APA, all rights reserved).
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS FOREST SERVICE... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS NATIONAL PARK... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases...
NASA Astrophysics Data System (ADS)
Bhanumurthy, V.; Venugopala Rao, K.; Srinivasa Rao, S.; Ram Mohan Rao, K.; Chandra, P. Satya; Vidhyasagar, J.; Diwakar, P. G.; Dadhwal, V. K.
2014-11-01
Geographical Information Science (GIS) has now graduated from traditional desktop systems to Internet systems. Internet GIS is emerging as one of the most promising technologies for addressing Emergency Management. Web services with different privilege levels play an important role in disseminating emergency services to decision makers. A spatial database is one of the most important components in the successful implementation of Emergency Management. It contains spatial data in the form of raster and vector layers linked with non-spatial information. Comprehensive data are required to handle emergency situations in their different phases. The database elements comprise core data, hazard-specific data, corresponding attribute data, and live data coming from remote locations. Core data sets are the minimum required data, including base, thematic, and infrastructure layers, needed to handle disasters. Disaster-specific information is required to handle particular situations such as flood, cyclone, forest fire, earthquake, landslide, and drought. In addition, Emergency Management requires many types of data with spatial and temporal attributes that should be made available to the key players in the right format at the right time. The vector database needs to be complemented with satellite imagery of suitable resolution for visualisation and analysis in disaster management. The database is therefore interconnected and comprehensive, to meet the requirements of Emergency Management. Such an integrated, comprehensive, and structured database with appropriate information is required to deliver the right information at the right time to the right people. However, building a spatial database for Emergency Management is a challenging task because of key issues such as availability of data, sharing policies, compatible geospatial standards, and data interoperability.
Therefore, to facilitate using, sharing, and integrating spatial data, standards need to be defined for building emergency database systems. These cover aspects such as: i) data integration procedures, namely a standard coding scheme, schema, metadata format, and spatial format; ii) database organisation mechanisms covering data management, catalogues, and data models; and iii) database dissemination through a suitable environment as a standard service for effective delivery. The National Database for Emergency Management (NDEM) is such a comprehensive database for addressing disasters in India at the national level. This paper explains standards for integrating and organising multi-scale, multi-source data with effective emergency response using customized user interfaces for NDEM. It presents a standard procedure for building comprehensive emergency information systems that enable emergency-specific functions through geospatial technologies.
NASA Astrophysics Data System (ADS)
Velazquez, Enrique Israel
Improvements in medical and genomic technologies have dramatically increased the production of electronic data over the last decade. As a result, data management is rapidly becoming a major determinant, and an urgent challenge, for the development of Precision Medicine. Although successful data management is achievable using Relational Database Management Systems (RDBMS), exponential data growth is a significant contributor to failure scenarios. Growing amounts of data can also be observed in other sectors, such as economics and business, which, together with the previous facts, suggests that alternate database approaches (NoSQL) may soon be required for efficient storage and management of big databases. This hypothesis, however, has been difficult to test in the Precision Medicine field, since alternate database architectures are complex to assess and means of integrating heterogeneous electronic health records (EHR) with dynamic genomic data are not easily available. In this dissertation, we present a novel set of experiments for identifying NoSQL database approaches that enable effective data storage and management in Precision Medicine, using patients' clinical and genomic information from The Cancer Genome Atlas (TCGA). The first experiment measures performance and scalability on biologically meaningful queries of differing complexity over databases of different sizes. The second experiment measures performance and scalability for database updates without schema changes. The third experiment assesses performance and scalability for database updates with schema modifications due to dynamic data. We have identified two NoSQL approaches, based on Cassandra and Redis, which seem to be the ideal database management systems for our precision medicine queries in terms of performance and scalability. We present these NoSQL approaches and show how they can be used to manage clinical and genomic big data.
Our research is relevant to public health because it addresses one of the main challenges to the development of Precision Medicine and, consequently, investigates a potential solution to the progressively increasing demands on health care.
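The kind of schema change measured in the third experiment can be illustrated with a toy document-style store, plain Python dicts standing in for Redis hashes; the record contents and key names are invented for illustration:

```python
# Document-style store: each patient record is a free-form mapping,
# so a new genomic field can be added without altering any schema.
store = {
    "patient:1": {"age": 54, "diagnosis": "BRCA"},
    "patient:2": {"age": 61, "diagnosis": "LUAD"},
}

# "Schema modification": attach a variant list to one record only.
store["patient:1"]["variants"] = ["TP53:c.524G>A"]

# Records without the new field simply lack the key; a query handles
# both cases with .get() instead of requiring a table migration.
with_variants = [key for key, rec in store.items() if rec.get("variants")]
print(with_variants)
```

In an RDBMS the same change would be an ALTER TABLE (or a new join table) touching every row's schema, which is the cost the dissertation's third experiment quantifies.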
NASA Astrophysics Data System (ADS)
Shi, Yu-Fang; Ma, Yi-Yi; Song, Ping-Ping
2018-03-01
System Reliability Theory has been a research hotspot of management science and systems engineering in recent years, and construction reliability is useful for quantitative evaluation of project management level. A definition of construction reliability is given according to reliability theory and the target system of engineering project management. Based on fuzzy mathematics theory and language operators, the value space of construction reliability is divided into seven fuzzy subsets; correspondingly, seven membership functions and fuzzy evaluation intervals are obtained through the operation of language operators, which provides the corresponding method and parameters for the evaluation of construction reliability. This method is shown to be scientific and reasonable for construction conditions and a useful attempt at theory and method research on engineering project system reliability.
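The division of a reliability value space into linguistic fuzzy subsets can be sketched with triangular membership functions. The seven labels and evenly spaced breakpoints below are illustrative assumptions, not the paper's actual parameters:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b, support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Seven hypothetical fuzzy subsets over reliability values in [0, 1].
labels = ["very low", "low", "fairly low", "medium",
          "fairly high", "high", "very high"]
centers = [i / 6 for i in range(7)]  # evenly spaced peaks
width = 1 / 6

def grade(r):
    """Return the linguistic label with the highest membership degree."""
    degrees = [tri(r, c - width, c, c + width) for c in centers]
    return labels[degrees.index(max(degrees))]

print(grade(0.5))   # "medium"
print(grade(0.92))  # "very high"
```

A full fuzzy evaluation would keep the whole degree vector rather than just the winning label; taking the maximum here is a deliberate simplification of the defuzzification step.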
Databases for multilevel biophysiology research available at Physiome.jp.
Asai, Yoshiyuki; Abe, Takeshi; Li, Li; Oka, Hideki; Nomura, Taishin; Kitano, Hiroaki
2015-01-01
Physiome.jp (http://physiome.jp) is a portal site inaugurated in 2007 to support model-based research in physiome and systems biology. At Physiome.jp, several tools and databases are available to support construction of physiological, multi-hierarchical, large-scale models. There are three databases in Physiome.jp, housing mathematical models, morphological data, and time-series data. In late 2013, the site was fully renovated, and in May 2015, new functions were implemented to provide information infrastructure to support collaborative activities for developing models and performing simulations within the database framework. This article describes updates to the databases implemented since 2013, including cooperation among the three databases, interactive model browsing, user management, version management of models, management of parameter sets, and interoperability with applications.
Predicting infection risk of airborne foot-and-mouth disease.
Schley, David; Burgin, Laura; Gloster, John
2009-05-06
Foot-and-mouth disease is a highly contagious disease of cloven-hoofed animals, the control and eradication of which is of significant worldwide socio-economic importance. The virus may spread by direct contact between animals or via fomites as well as through airborne transmission, with the latter being the most difficult to control. Here, we consider the risk of infection to flocks or herds from airborne virus emitted from a known infected premises. We show that airborne infection can be predicted quickly and with a good degree of accuracy, provided that the source of virus emission has been determined and reliable geo-referenced herd data are available. A simple model provides a reliable tool for estimating risk from known sources and for prioritizing surveillance and detection efforts. The issue of data information management systems was highlighted as a lesson to be learned from the official inquiry into the UK 2007 foot-and-mouth outbreak: results here suggest that the efficacy of disease control measures could be markedly improved through an accurate livestock database incorporating flock/herd size and location, which would enable tactical as well as strategic modelling.
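A toy version of the prioritization the authors describe, ranking herds by predicted airborne exposure from a known source, might look like the following. The decay constant and herd records are invented, and the paper's actual model accounts for virus emission rates, meteorology, and dose-response rather than a simple exponential:

```python
import math

# Hypothetical geo-referenced herd database: (name, x_km, y_km, animals).
herds = [
    ("herd-A", 2.0, 1.0, 300),
    ("herd-B", 0.5, 0.5, 40),
    ("herd-C", 8.0, 3.0, 900),
]
source = (0.0, 0.0)      # known infected premises
DECAY_PER_KM = 0.8       # assumed exponential dilution of airborne virus

def exposure(herd):
    name, x, y, animals = herd
    dist = math.hypot(x - source[0], y - source[1])
    # Larger herds present more animals to infect; airborne concentration
    # decays with distance from the infected premises.
    return animals * math.exp(-DECAY_PER_KM * dist)

# Surveillance priority: visit the highest-exposure herds first.
priority = sorted(herds, key=exposure, reverse=True)
print([h[0] for h in priority])
```

Note how herd size and location interact: the large but distant herd-C ranks last, which is exactly why the paper argues a livestock database needs both flock/herd size and accurate coordinates.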
Creating databases for biological information: an introduction.
Stein, Lincoln
2002-08-01
The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, and relational databases, as well as ACeDB. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system.
Mass-storage management for distributed image/video archives
NASA Astrophysics Data System (ADS)
Franchi, Santina; Guarda, Roberto; Prampolini, Franco
1993-04-01
The realization of an image/video database requires a specific design for both the database structures and mass storage management. This issue was addressed in the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Database structures have been defined to catalog the image/video coding technique with its related parameters, together with the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server: because of their large size, they are stored outside the database on network devices. The database contains pointers to the image/video files and descriptions of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they catalog devices and modify device status and device network location. The medium level manages image/video files on a physical basis, handling file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to fit delivery/visualization requirements and to reduce archiving costs.
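The three-level scheme, device cataloging, physical file migration, and logical query-driven archiving, can be caricatured in a few lines. The tier names and the access-count migration policy are assumptions for illustration, not the paper's actual strategy:

```python
# Two hypothetical storage tiers: fast magnetic disk, slow optical jukebox.
tiers = {"disk": set(), "optical": set()}
access_count = {}

def store(video_id, tier="disk"):
    """Lower-level function: register a file on a cataloged device."""
    tiers[tier].add(video_id)
    access_count[video_id] = 0

def access(video_id):
    access_count[video_id] += 1

def migrate(threshold=1):
    """Medium-level function: demote rarely accessed files to slow media."""
    for vid in list(tiers["disk"]):
        if access_count[vid] < threshold:
            tiers["disk"].remove(vid)
            tiers["optical"].add(vid)

store("clip-1")
store("clip-2")
access("clip-1")   # clip-1 is hot, clip-2 is cold
migrate()
print(sorted(tiers["disk"]), sorted(tiers["optical"]))
```

The upper logical level described in the abstract would sit above `migrate`, selecting files by user-defined queries against the catalog rather than by raw access counts.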
Utilisation, Reliability and Validity of Clinical Evaluation Exercise in Otolaryngology Training.
Awad, Z; Hayden, L; Muthuswamy, K; Tolley, N S
2015-10-01
To investigate the utilisation, reliability and validity of the clinical evaluation exercise (CEX) in otolaryngology training. Retrospective database analysis. Online assessment database. We analysed all CEXs submitted by north London core (CT) and speciality trainees (ST) in otolaryngology from 2010 to 2013. Internal consistency of the 7 CEX items, each rated as O: outstanding, S: satisfactory or D: development required. Overall performance rating (pS) of 1-4 assessed against completion-of-training level. Receiver operating characteristic analysis was used to describe CEX sensitivity and specificity. Overall score (cS), pS and the number of 'D'-rated items were used to investigate construct validity. One thousand one hundred and sixty CEXs from 45 trainees were included. CEX showed good internal consistency (Cronbach's alpha = 0.85). CEX was highly sensitive (99%), yet not specific (6%). cS and pS for ST were higher than for CT (99.1% ± 0.4 versus 96.6% ± 0.8 and 3.06 ± 0.05 versus 1.92 ± 0.04, respectively; P < 0.001). pS showed a significant stepwise increase from CT1 to ST6 (P < 0.001). In contrast, cS only showed improvement up to ST4 (P = 0.025). The most frequently utilised item, 'management and follow-up planning', was found to be the best predictor of cS and pS (rs = +0.69 and +0.21, respectively). CEX is reliable in assessing early-years otolaryngology trainees in clinical examination, but not at higher levels. It has the potential to be used in a summative capacity in selecting trainees for ST positions. This would also encourage trainees to master all domains of otolaryngology clinical examination by the end of CT. © 2015 John Wiley & Sons Ltd.
A spatial-temporal system for dynamic cadastral management.
Nan, Liu; Renyi, Liu; Guangliang, Zhu; Jiong, Xie
2006-03-01
A practical spatio-temporal database (STDB) technique for dynamic urban land management is presented. One of the STDB models, the expanded Base State with Amendments (BSA) model, is selected as the basis for developing the dynamic cadastral management technique. Two approaches, Section Fast Indexing (SFI) and Storage Factors of Variable Granularity (SFVG), are used to improve the efficiency of the BSA model. Both spatial graphic data and attribute data are stored, through a succinct engine, in standard relational database management systems (RDBMS) for the actual implementation of the BSA model. The spatio-temporal database is divided into three interdependent sub-databases: the present DB, the history DB and the procedures-tracing DB. The efficiency of database operation is improved by making the database connection in the bottom layer of Microsoft SQL Server. The spatio-temporal system can be provided at low cost while satisfying the basic needs of urban land management in China. The approaches presented in this paper may also be of significance to countries where land patterns change frequently or to agencies where financial resources are limited.
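The Base State with Amendments idea can be shown with a toy sketch: a full snapshot ("base state") is stored at intervals, only per-parcel changes ("amendments") in between, and the state at any time t is reconstructed by replaying amendments onto the nearest base. Parcel names and attributes are invented for illustration; the paper's SFI and SFVG optimizations are not modeled here:

```python
# Base states: full snapshots keyed by time.
base_states = {
    0: {"P1": "residential", "P2": "farmland"},
}
# Amendments: (time, parcel, new attribute value), in time order.
amendments = [
    (3, "P2", "commercial"),
    (5, "P1", "commercial"),
]

def state_at(t):
    """Reconstruct the cadastre at time t from the nearest earlier base state."""
    base_t = max(bt for bt in base_states if bt <= t)
    state = dict(base_states[base_t])
    for when, parcel, value in amendments:
        if base_t < when <= t:
            state[parcel] = value
    return state

print(state_at(4))  # -> {'P1': 'residential', 'P2': 'commercial'}
```

Storing amendments instead of repeated snapshots keeps history cheap; techniques like the paper's section indexing exist precisely because long amendment chains make reconstruction slow without them.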
Modification Site Localization in Peptides.
Chalkley, Robert J
2016-01-01
There are a large number of search engines designed to take mass spectrometry fragmentation spectra and match them to peptides from proteins in a database. These peptides could be unmodified, but they could also bear modifications that were added biologically or during sample preparation. As a measure of reliability for the peptide identification, software normally calculates how likely a given quality of match could have been achieved at random, most commonly through the use of target-decoy database searching (Elias and Gygi, Nat Methods 4(3): 207-214, 2007). Matching the correct peptide but with the wrong modification localization is not a random match, so results with this error will normally still be assessed as reliable identifications by the search engine. Hence, an extra step is required to determine site localization reliability, and the software approaches to measure this are the subject of this part of the chapter.
Numerical Databases: Their Vital Role in Information Science.
ERIC Educational Resources Information Center
Carter, G. C.
1985-01-01
In this first of two installments, a description of the interactions of numerical databases (NDBs) for science with the various community sectors highlights data evaluation and the roles that NDBs will likely play. Twenty-four studies and articles dealing with needs for reliable physical, chemical, and mechanical property data are noted. (EJS)
Mahmood, Eitezaz; Matyal, Robina; Mueller, Ariel; Mahmood, Feroze; Tung, Avery; Montealegre-Gallegos, Mario; Schermerhorn, Marc; Shahul, Sajid
2018-03-01
In some institutions, the current blood ordering practice does not discriminate minimally invasive endovascular aneurysm repair (EVAR) from open procedures, with consequent increased costs and likelihood of blood product wastage for EVARs. This limitation can be addressed by developing a reliable prediction model for transfusion risk in EVAR patients. We used the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) database to create a model for predicting intraoperative blood transfusion in patients undergoing EVAR, and then tested the model on the Vascular Study Group of New England (VSGNE) database. We used the ACS NSQIP database for patients who underwent EVAR from 2011 to 2013 (N = 4709) as our derivation set for identifying a risk index predicting intraoperative blood transfusion. We then developed a clinical risk score and validated this model using patients who underwent EVAR from 2003 to 2014 in the VSGNE database (N = 4478). The transfusion rates were 8.4% and 6.1% for the ACS NSQIP (derivation) and VSGNE (validation) databases, respectively. Hemoglobin concentration, American Society of Anesthesiologists class, age, and aneurysm diameter predicted blood transfusion in the derivation set. Our risk index demonstrated good discrimination in both the derivation and validation sets (C statistic = 0.73 and 0.70, respectively) and good calibration by the Hosmer-Lemeshow test (P = .27 and .31, respectively). We developed and validated a risk index for predicting the likelihood of intraoperative blood transfusion in EVAR patients. Implementation of this index may facilitate blood management strategies specific to EVAR. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Bowker, S L; Savu, A; Donovan, L E; Johnson, J A; Kaul, P
2017-06-01
To examine the validity of International Classification of Disease, version 10 (ICD-10) codes for gestational diabetes mellitus in administrative databases (outpatient and inpatient), and in a clinical perinatal database (Alberta Perinatal Health Program), using laboratory data as the 'gold standard'. Women aged 12-54 years with in-hospital, singleton deliveries between 1 October 2008 and 31 March 2010 in Alberta, Canada were included in the study. A gestational diabetes diagnosis was defined in the laboratory data as ≥2 abnormal values on a 75-g oral glucose tolerance test or a 50-g glucose screen ≥10.3 mmol/l. Of 58 338 pregnancies, 2085 (3.6%) met gestational diabetes criteria based on laboratory data. The gestational diabetes rates in outpatient only, inpatient only, outpatient or inpatient combined, and Alberta Perinatal Health Program databases were 5.2% (3051), 4.8% (2791), 5.8% (3367) and 4.8% (2825), respectively. Although the outpatient or inpatient combined data achieved the highest sensitivity (92%) and specificity (97%), it was associated with a positive predictive value of only 57%. The majority of the false-positives (78%), however, had one abnormal value on oral glucose tolerance test, corresponding to a diagnosis of impaired glucose tolerance in pregnancy. The ICD-10 codes for gestational diabetes in administrative databases, especially when outpatient and inpatient databases are combined, can be used to reliably estimate the burden of the disease at the population level. Because impaired glucose tolerance in pregnancy and gestational diabetes may be managed similarly in clinical practice, impaired glucose tolerance in pregnancy is often coded as gestational diabetes. © 2016 Diabetes UK.
Database for content of mercury in Polish brown coal
NASA Astrophysics Data System (ADS)
Jastrząb, Krzysztof
2018-01-01
Poland is rated among the countries with the largest mercury emissions in Europe. According to information provided by the National Centre for Balancing and Management of Emissions (KOBiZE), more than 10.5 tons of mercury and its compounds were emitted into the atmosphere in 2015 from the area of Poland. Within the scope of the BazaHg project, which ran from 2014 to 2015 and was co-financed by the National Centre of Research and Development (NCBiR), a database was set up specifying the mercury content of Polish hard steam coal, coking coal and brown coal (lignite) grades. With regard to domestic brown coal, the database comprises information on coal grades from the Brown Coal Mines of `Bełchatów', `Adamów', `Turów' and `Sieniawa'. Currently the database contains 130 records with parameters of brown coal, where each record comprises a technical analysis (content of moisture, ash and volatile matter), an elemental analysis (CHNS), the content of chlorine and mercury, and the net calorific value and heat of combustion. The mercury content of the brown coal samples under test ranged from 44 to 985 μg Hg/kg, with an average of 345 μg Hg/kg. The established database thus constitutes a reliable and trustworthy source of information about the mercury content of Polish fossil fuels. These details, combined with information about the coal consumption of individual electric power stations and multiplied by appropriate emission coefficients, may serve as the basis for estimating the mercury loads emitted into the atmosphere by individual stations and by the power sector as a whole. It will also enable Polish central organizations and individual business entities to implement a reasonable policy with respect to mercury emissions into the atmosphere.
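The emission estimate the abstract alludes to is a simple product of consumption, content, and an emission coefficient. A back-of-envelope sketch, with all plant-level numbers hypothetical (only the 345 μg/kg average comes from the abstract):

```python
# Estimate a plant's annual mercury load: consumption x Hg content x coefficient.
avg_hg_content_ug_per_kg = 345      # average lignite Hg content (from the abstract)
coal_burned_t = 2_000_000           # hypothetical annual coal consumption, tonnes
emission_coefficient = 0.5          # hypothetical share reaching the atmosphere

emitted_kg = (coal_burned_t * 1000            # tonnes -> kg of coal
              * avg_hg_content_ug_per_kg      # kg coal -> ug Hg
              * 1e-9                          # ug -> kg
              * emission_coefficient)
print(round(emitted_kg, 1))  # -> 345.0
```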
Bookey-Bassett, Sue; Markle-Reid, Maureen; McKey, Colleen; Akhtar-Danesh, Noori
2016-01-01
It is acknowledged internationally that chronic disease management (CDM) for community-living older adults (CLOA) is an increasingly complex process. CDM for older adults, who are often living with multiple chronic conditions, requires coordination of various health and social services. Coordination is enabled through interprofessional collaboration (IPC) among individual providers, community organizations, and health sectors. Measuring IPC is complicated given there are multiple conceptualisations and measures of IPC. A literature review of several healthcare, psychological, and social science electronic databases was conducted to locate instruments that measure IPC at the team level and have published evidence of their reliability and validity. Five instruments met the criteria and were critically reviewed to determine their strengths and limitations as they relate to CDM for CLOA. A comparison of the characteristics, psychometric properties, and overall concordance of each instrument with salient attributes of IPC found the Collaborative Practice Assessment Tool to be the most appropriate instrument for measuring IPC for CDM in CLOA.
ERIC Educational Resources Information Center
Deutsch, Donald R.
This report describes a research effort that was carried out over a period of several years to develop and demonstrate a methodology for evaluating proposed Database Management System designs. The major proposition addressed by this study is embodied in the thesis statement: Proposed database management system designs can be evaluated best through…
NASA Technical Reports Server (NTRS)
1990-01-01
In 1981 Wayne Erickson founded Microrim, Inc, a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.
NBIC: National Ballast Information Clearinghouse
Smithsonian Environmental Research Center / US Coast Guard. Database Manager: Tami Huber. Senior Analyst/Ecologist: Mark Minton. Data Managers: Ashley Arnwine, Jessica Hardee, Amanda Reynolds. Database Design and Application Programming: Paul Winterbauer
O'Grady, Michael G; Dusing, Stacey C
2015-01-01
Play is vital for development. Infants and children learn through play. Traditional standardized developmental tests measure whether a child performs individual skills within controlled environments. Play-based assessments can measure skill performance during natural, child-driven play. The purpose of this study was to systematically review reliability, validity, and responsiveness of all play-based assessments that quantify motor and cognitive skills in children from birth to 36 months of age. Studies were identified from a literature search using PubMed, ERIC, CINAHL, and PsycINFO databases and the reference lists of included papers. Included studies investigated reliability, validity, or responsiveness of play-based assessments that measured motor and cognitive skills for children to 36 months of age. Two reviewers independently screened 40 studies for eligibility and inclusion. The reviewers independently extracted reliability, validity, and responsiveness data. They examined measurement properties and methodological quality of the included studies. Four current play-based assessment tools were identified in 8 included studies. Each play-based assessment tool measured motor and cognitive skills in a different way during play. Interrater reliability correlations ranged from .86 to .98 for motor development and from .23 to .90 for cognitive development. Test-retest reliability correlations ranged from .88 to .95 for motor development and from .45 to .91 for cognitive development. Structural validity correlations ranged from .62 to .90 for motor development and from .42 to .93 for cognitive development. One study assessed responsiveness to change in motor development. Most studies had small and poorly described samples. Lack of transparency in data management and statistical analysis was common. Play-based assessments have potential to be reliable and valid tools to assess cognitive and motor skills, but higher-quality research is needed. 
Psychometric properties should be considered for each play-based assessment before it is used in clinical and research practice. © 2015 American Physical Therapy Association.
AGRICULTURAL BEST MANAGEMENT PRACTICE EFFECTIVENESS DATABASE
Resource Purpose:The Agricultural Best Management Practice Effectiveness Database contains the results of research projects which have collected water quality data for the purpose of determining the effectiveness of agricultural management practices in reducing pollutants ...
Evaluation of relational and NoSQL database architectures to manage genomic annotations.
Schulz, Wade L; Nelson, Brent G; Felker, Donn K; Durant, Thomas J S; Torres, Richard
2016-12-01
While the adoption of next generation sequencing has rapidly expanded, the informatics infrastructure used to manage the data generated by this technology has not kept pace. Historically, relational databases have provided much of the framework for data storage and retrieval. Newer technologies based on NoSQL architectures may provide significant advantages in storage and query efficiency, thereby reducing the cost of data management. But their relative advantage when applied to biomedical data sets, such as genetic data, has not been characterized. To this end, we compared the storage, indexing, and query efficiency of a common relational database (MySQL), a document-oriented NoSQL database (MongoDB), and a relational database with NoSQL support (PostgreSQL). When used to store genomic annotations from the dbSNP database, we found the NoSQL architectures to outperform traditional, relational models for speed of data storage, indexing, and query retrieval in nearly every operation. These findings strongly support the use of novel database technologies to improve the efficiency of data management within the biological sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
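The modeling difference behind the comparison above can be illustrated without a running database server. The sketch below contrasts a normalized relational layout with a document layout for a dbSNP-style annotation; field names are illustrative, and this is a data-model illustration, not the paper's benchmark code:

```python
import json

# Relational model: the annotation is normalized across tables and
# reassembled by joining on the rsid key.
variant_rows = [("rs334", "11", 5227002)]            # (rsid, chrom, pos)
allele_rows  = [("rs334", "T"), ("rs334", "A")]      # (rsid, allele)

# Document model (as in MongoDB): the whole annotation travels as one
# nested document, so retrieval needs no join.
doc = {
    "rsid": "rs334",
    "chrom": "11",
    "pos": 5227002,
    "alleles": ["T", "A"],
}

# Fetching all alleles: a join-like lookup vs a direct attribute access.
rel_alleles = [a for rsid, a in allele_rows if rsid == "rs334"]
doc_alleles = doc["alleles"]
assert rel_alleles == doc_alleles
print(json.dumps(doc))
```

Avoiding the join on retrieval is one reason document stores can win on query speed for annotation-shaped data, which is consistent with the results reported above.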
Lindsley, Kristina; Li, Tianjing; Ssemanda, Elizabeth; Virgili, Gianni; Dickersin, Kay
2016-01-01
Topic: Are existing systematic reviews of interventions for age-related macular degeneration incorporated into clinical practice guidelines? Clinical relevance: High-quality systematic reviews should be used to underpin evidence-based clinical practice guidelines and clinical care. We have examined the reliability of systematic reviews of interventions for age-related macular degeneration (AMD) and described the main findings of reliable reviews in relation to clinical practice guidelines. Methods: Eligible publications are systematic reviews of the effectiveness of treatment interventions for AMD. We searched a database of systematic reviews in eyes and vision and employed no language or date restrictions; the database is up-to-date as of May 6, 2014. Two authors independently screened records for eligibility and abstracted and assessed the characteristics and methods of each review. We classified reviews as “reliable” when they reported eligibility criteria, comprehensive searches, appraisal of methodological quality of included studies, appropriate statistical methods for meta-analysis, and conclusions based on results. We mapped treatment recommendations from the American Academy of Ophthalmology Preferred Practice Patterns (AAO PPP) for AMD to the identified systematic reviews and assessed whether any reliable systematic review was cited or could have been cited to support each treatment recommendation. Results: Of 1,570 systematic reviews in our database, 47 met our inclusion criteria. Most of the systematic reviews targeted neovascular AMD and investigated anti-vascular endothelial growth factor (anti-VEGF) interventions, dietary supplements or photodynamic therapy. We classified over two-thirds (33/47) of the reports as reliable. The quality of reporting varied, with criteria for reliable reporting met more often for Cochrane reviews and for reviews whose authors disclosed conflicts of interest. Although most systematic reviews were reliable, anti-VEGF agents and photodynamic therapy were the only interventions identified as effective by reliable reviews. Of 35 treatment recommendations extracted from the AAO PPP, 15 could have been supported with reliable systematic reviews; however, only one recommendation had an accompanying intervention systematic review citation, which we assessed as a reliable systematic review. No reliable systematic review was identified for 20 treatment recommendations, highlighting areas of evidence gaps. Conclusions: For AMD, reliable systematic reviews exist for many treatment recommendations in the AAO PPP and should be used to support these recommendations. We also identified areas where no high-level evidence exists. Mapping clinical practice guidelines to existing systematic reviews is one way to highlight areas where evidence generation or evidence synthesis is either available or needed. PMID:26804762
Harju, Inka; Lange, Christoph; Kostrzewa, Markus; Maier, Thomas; Rantakokko-Jalava, Kaisu; Haanperä, Marjo
2017-03-01
Reliable distinction of Streptococcus pneumoniae and viridans group streptococci is important because of the different pathogenic properties of these organisms. Differentiation between S. pneumoniae and closely related Streptococcus mitis species group streptococci has always been challenging, even when using such modern methods as 16S rRNA gene sequencing or matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry. In this study, a novel algorithm combined with an enhanced database was evaluated for differentiation between S. pneumoniae and S. mitis species group streptococci. One hundred one clinical S. mitis species group streptococcal strains and 188 clinical S. pneumoniae strains were identified by both the standard MALDI Biotyper database alone and that combined with a novel algorithm. The database update from 4,613 strains to 5,627 strains drastically improved the differentiation of S. pneumoniae and S. mitis species group streptococci: when the new database version containing 5,627 strains was used, only one of the 101 S. mitis species group isolates was misidentified as S. pneumoniae, whereas 66 of them were misidentified as S. pneumoniae when the earlier 4,613-strain MALDI Biotyper database version was used. The updated MALDI Biotyper database combined with the novel algorithm showed even better performance, producing no misidentifications of the S. mitis species group strains as S. pneumoniae. All S. pneumoniae strains were correctly identified as S. pneumoniae with both the standard MALDI Biotyper database and the standard database combined with the novel algorithm. This new algorithm thus enables reliable differentiation between pneumococci and other S. mitis species group streptococci with the MALDI Biotyper. Copyright © 2017 American Society for Microbiology.
Risk and Reliability of Infrastructure Asset Management Workshop
2006-08-01
of assets within the portfolio for use in Risk and Reliability analysis ... US Army Corps of Engineers assesses its Civil Works infrastructure and applies risk and reliability in the management of that infrastructure. The ... the Corps must complete assessments across its portfolio of major assets before risk management can be used in decision making. Effective risk
Application of cloud database in the management of clinical data of patients with skin diseases.
Mao, Xiao-fei; Liu, Rui; DU, Wei; Fan, Xue; Chen, Dian; Zuo, Ya-gang; Sun, Qiu-ning
2015-04-01
To evaluate the needs and applications of a cloud database in the daily practice of a dermatology department. The cloud database was established for systemic scleroderma and localized scleroderma. Paper forms were used to record the original data, including personal information, pictures, specimens, blood biochemical indicators, skin lesions, and scores of self-rating scales; the results were then input into the cloud database. The applications of the cloud database in the dermatology department were summarized and analyzed. The personal and clinical information of 215 systemic scleroderma patients and 522 localized scleroderma patients was included and analyzed using the cloud database. Disease status, quality of life, and prognosis were obtained by statistical calculations. The cloud database can efficiently and rapidly store and manage the data of patients with skin diseases. As a simple, prompt, safe, and convenient tool, it can be used in patient information management, clinical decision-making, and scientific research.
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS FISH AND... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
A web based relational database management system for filariasis control
Murty, Upadhyayula Suryanarayana; Kumar, Duvvuri Venkata Rama Satya; Sriram, Kumaraswamy; Rao, Kadiri Madhusudhan; Bhattacharyulu, Chakravarthula Hayageeva Narasimha Venakata; Praveen, Bhoopathi; Krishna, Amirapu Radha
2005-01-01
The present study describes an RDBMS (relational database management system) for the effective management of filariasis, a vector-borne disease. Filariasis infects 120 million people in 83 countries. The possible re-emergence of the disease and the complexity of existing control programs warrant the development of new strategies. A database containing comprehensive data associated with filariasis is useful in disease control. We have developed a database containing information on the socio-economic status of patients, mosquito collection procedures, mosquito dissection data, filariasis survey reports and mass blood data. The database can be searched using a user-friendly web interface. Availability: http://www.webfil.org (login and password can be obtained from the authors). PMID:17597846
Velonakis, E; Mantas, J; Mavrikakis, I
2006-01-01
Occupational health and safety management constitutes a field of increasing interest. Institutions, in cooperation with enterprises, are making synchronized efforts to introduce quality management systems into this field. Computer networks can offer such services via TCP/IP, a reliable protocol for workflow management between enterprises and institutions. The design of such a network is based on several factors, in order to achieve defined criteria and connectivity with other networks. The network consists of nodes responsible for informing executives on occupational health and safety. A web database has been planned for inserting and searching documents, and for answering and processing questionnaires. The submission of files to a server and the answering of questionnaires through the web help the experts make corrections and improvements to their activities. Based on the requirements of enterprises, we have constructed a web file server to which files are submitted so that users can retrieve the files they need. Access is limited to authorized users, and digital watermarks authenticate and protect digital objects. The health and safety management system follows ISO 18001, and its implementation through the web site is an aim of this work. The whole application has been developed and implemented on a pilot basis for the health services sector. It is already installed within a hospital, supporting health and safety management among the different departments of the hospital and allowing communication through the web with other hospitals.
Integrating RFID technique to design mobile handheld inventory management system
NASA Astrophysics Data System (ADS)
Huang, Yo-Ping; Yen, Wei; Chen, Shih-Chung
2008-04-01
An RFID-based mobile handheld inventory management system is proposed in this paper. Differing from the manual inventory management method, the proposed system works on the personal digital assistant (PDA) with an RFID reader. The system identifies electronic tags on the properties and checks the property information in the back-end database server through a ubiquitous wireless network. The system also provides a set of functions to manage the back-end inventory database and assigns different levels of access privilege according to various user categories. In the back-end database server, to prevent improper or illegal accesses, the server not only stores the inventory database and user privilege information, but also keeps track of the user activities in the server including the login and logout time and location, the records of database accessing, and every modification of the tables. Some experimental results are presented to verify the applicability of the integrated RFID-based mobile handheld inventory management system.
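The lookup-plus-audit flow described above — a handheld reads a tag ID, the server resolves it to a property record, and every access is written to an activity trail — can be sketched as follows. All names, tag IDs, and record fields are hypothetical:

```python
import datetime

# Server-side state: property records keyed by RFID tag ID, plus an audit trail.
inventory = {
    "E2000017221101441890": {"item": "Projector", "room": "B204"},
}
audit_log = []

def lookup(tag_id, user, location):
    """Resolve a scanned tag to its property record and log the access."""
    record = inventory.get(tag_id)
    audit_log.append({
        "user": user,
        "location": location,
        "tag": tag_id,
        "found": record is not None,
        "time": datetime.datetime.now().isoformat(),
    })
    return record

rec = lookup("E2000017221101441890", "auditor01", "Building B")
print(rec)  # -> {'item': 'Projector', 'room': 'B204'}
```

Logging unconditionally inside `lookup`, rather than leaving it to the client, mirrors the paper's point that the server itself must keep track of user activities to prevent improper access.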
NASA Astrophysics Data System (ADS)
Bartolini, S.; Becerril, L.; Martí, J.
2014-11-01
One of the most important issues in modern volcanology is the assessment of volcanic risk, which will depend - among other factors - on both the quantity and quality of the available data and an optimum storage mechanism. This requires the design of purpose-built databases that take into account data format and availability, afford easy data storage and sharing, and provide for a more complete risk assessment that combines different analyses while avoiding any duplication of information. Data contained in such a database should facilitate spatial and temporal analysis that will (1) produce probabilistic hazard models for future vent opening, (2) simulate volcanic hazards and (3) assess their socio-economic impact. We describe the design of a new spatial database structure, VERDI (Volcanic managEment Risk Database desIgn), which allows different types of data, including geological, volcanological, meteorological, monitoring and socio-economic information, to be manipulated, organized and managed. A central design goal is to ensure that VERDI serves as a tool for connecting different kinds of data sources, GIS platforms and modeling applications. We present an overview of the database design, its components and the attributes that play an important role in the database model. The potential of the VERDI structure, and the possibilities it offers for data organization, are shown through its application on El Hierro (Canary Islands). The VERDI database will provide scientists and decision makers with a useful tool to assist in volcanic risk assessment and management.
Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze
2013-04-01
Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
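The document-centric fit for hierarchical clinical data can be shown in a few lines. The record structure below is illustrative (not from the paper's data set): one encounter with nested, repeating observations is a single XML document, queried without joins:

```python
import xml.etree.ElementTree as ET

# A hierarchical clinical record as one XML document: nested, repeating
# elements that would need several joined tables in a relational schema.
record = ET.fromstring("""
<patient id="P001">
  <encounter date="2012-05-01">
    <observation code="BP">120/80</observation>
    <observation code="HR">72</observation>
  </encounter>
</patient>
""")

# A path query reaches nested observations directly, with no join logic.
heart_rates = [o.text for o in record.findall(".//observation[@code='HR']")]
print(heart_rates)  # -> ['72']
```

New observation types simply become new elements, which is the extensibility the study credits to the XML databases; the trade-off, as its query results show, is raw lookup speed versus the NoSQL store.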
Analysis and preliminary design of Kunming land use and planning management information system
NASA Astrophysics Data System (ADS)
Li, Li; Chen, Zhenjie
2007-06-01
This article analyzes the Kunming land use planning and management information system in terms of system-building objectives and requirements, and identifies the system's users, functional requirements and construction requirements. On this basis, a three-tier system architecture based on C/S and B/S is defined: the user interface layer, the business logic layer and the data services layer. According to the requirements for the construction of a land use planning and management information database, derived from standards of the Ministry of Land and Resources and the construction program of the Golden Land Project, this paper divides the system databases into a planning document database, a planning implementation database, a working map database and a system maintenance database. In the design of the system interface, various methods and data formats are used for data transmission and sharing between upper and lower levels. Based on the system analysis results, the main modules of the system are designed as follows: planning data management; planning and annual plan preparation and control; day-to-day planning management; planning revision management; decision-making support; thematic query statistics; and public participation in planning. In addition, the system implementation technologies are discussed in terms of operation mode, development platform and other aspects.
Lange, Toni; Matthijs, Omer; Jain, Nitin B; Schmitt, Jochen; Lützner, Jörg; Kopkow, Christian
2017-03-01
Shoulder pain in the general population is common, and to identify the aetiology of shoulder pain, history, motion and muscle testing, and physical examination tests are usually performed. The aim of this systematic review was to summarise and evaluate intrarater and inter-rater reliability of physical examination tests in the diagnosis of shoulder pathologies. A comprehensive systematic literature search was conducted using MEDLINE, EMBASE, Allied and Complementary Medicine Database (AMED) and Physiotherapy Evidence Database (PEDro) through 20 March 2015. Methodological quality was assessed using the Quality Appraisal of Reliability Studies (QAREL) tool by 2 independent reviewers. The search strategy revealed 3259 articles, of which 18 finally met the inclusion criteria. These studies evaluated the reliability of 62 tests and test variations used in the specific physical examination for the diagnosis of shoulder pathologies. Methodological quality ranged from 2 to 7 positive criteria of the 11 items of the QAREL tool. This review identified a lack of high-quality studies evaluating inter-rater as well as intrarater reliability of specific physical examination tests for the diagnosis of shoulder pathologies. In addition, reliability measures differed between included studies, hindering proper cross-study comparisons. PROSPERO CRD42014009018. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Setting Priorities for Monitoring and Managing Non-native Plants: Toward a Practical Approach.
Koch, Christiane; Jeschke, Jonathan M; Overbeck, Gerhard E; Kollmann, Johannes
2016-09-01
Land managers face the challenge to set priorities in monitoring and managing non-native plant species, as resources are limited and not all non-natives become invasive. Existing frameworks that have been proposed to rank non-native species require extensive information on their distribution, abundance, and impact. This information is difficult to obtain and often not available for many species and regions. National watch or priority lists are helpful, but it is questionable whether they provide sufficient information for environmental management on a regional scale. We therefore propose a decision tree that ranks species based on more simple albeit robust information, but still provides reliable management recommendations. To test the decision tree, we collected and evaluated distribution data from non-native plants in highland grasslands of Southern Brazil. We compared the results with a national list from the Brazilian Invasive Species Database for the state to discuss advantages and disadvantages of the different approaches on a regional scale. Out of 38 non-native species found, only four were also present on the national list. If management would solely rely on this list, many species that were identified as spreading based on the decision tree would go unnoticed. With the suggested scheme, it is possible to assign species to active management, to monitoring, or further evaluation. While national lists are certainly important, management on a regional scale should employ additional tools that adequately consider the actual risk of non-natives to become invasive.
Serials Management by Microcomputer: The Potential of DBMS.
ERIC Educational Resources Information Center
Vogel, J. Thomas; Burns, Lynn W.
1984-01-01
Describes serials management at Philadelphia College of Textiles and Science library via a microcomputer, a file manager called PFS, and a relational database management system called dBase II. Check-in procedures, programming with dBase II, "static" and "active" databases, and claim procedures are discussed. Check-in forms are…
Heinrich, Andreas; Güttler, Felix; Wendt, Sebastian; Schenkl, Sebastian; Hubig, Michael; Wagner, Rebecca; Mall, Gita; Teichgräber, Ulf
2018-06-18
In forensic odontology, the comparison between antemortem and postmortem panoramic radiographs (PRs) is a reliable method for person identification. The purpose of this study was to improve and automate the identification of unknown persons by comparing antemortem and postmortem PRs using computer vision. The study includes 43 467 PRs from 24 545 patients (46 % female/54 % male). All PRs were filtered and evaluated with Matlab R2014b, including the Image Processing and Computer Vision System toolboxes. The matching process used SURF features to find the corresponding points between two PRs (unknown person and database entry) across the whole database. Of 40 randomly selected persons, 34 (85 %) could be reliably identified by corresponding PR matching points between an already existing scan in the database and the most recent PR. The systematic matching yielded a maximum of 259 points for a successful identification between two different PRs of the same person and a maximum of 12 corresponding matching points for non-identical persons in the database. Hence, 12 matching points serve as the threshold for reliable assignment. Operating with an automatic PR system and computer vision could be a successful and reliable tool for identification purposes. The applied method distinguishes itself by its fast and reliable identification of persons by PR. This identification method is suitable even if dental characteristics were removed or added in the past. The system seems to be robust for large amounts of data. · Computer vision allows an automated antemortem and postmortem comparison of panoramic radiographs (PRs) for person identification. · The present method is able to find identical matching partners among huge datasets (big data) in a short computing time. · The identification method is suitable even if dental characteristics were removed or added. · Heinrich A, Güttler F, Wendt S et al. 
Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0632-4744. © Georg Thieme Verlag KG Stuttgart · New York.
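The matching step described above (count corresponding feature points, identify when the count clears the 12-point threshold) can be sketched as follows. The study used SURF features in Matlab; here plain descriptor vectors and a Lowe-style nearest-neighbour ratio test stand in, so every name and parameter below is an assumption for illustration, not the authors' code:

```python
import numpy as np

def match_count(desc_a, desc_b, ratio=0.7):
    """Count descriptors in desc_a whose nearest neighbour in desc_b
    is clearly closer than the second-nearest (ratio test)."""
    count = 0
    for d in desc_a:
        dists = np.linalg.norm(desc_b - d, axis=1)
        best, second = np.partition(dists, 1)[:2]
        if best < ratio * second:
            count += 1
    return count

THRESHOLD = 12  # max matches the study observed between non-identical persons

def same_person(desc_a, desc_b):
    return match_count(desc_a, desc_b) > THRESHOLD

# Demo with synthetic descriptors: a slightly noisy copy matches well.
rng = np.random.default_rng(0)
ref_desc = rng.normal(size=(20, 8))
probe_desc = ref_desc + 1e-3 * rng.normal(size=(20, 8))
print(same_person(ref_desc, probe_desc))  # True
```

The decision rule mirrors the paper's finding: genuine pairs produced up to 259 matching points while impostor pairs never exceeded 12, so any count above 12 is treated as an identification.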
Tufts Health Sciences Database: Lessons, Issues, and Opportunities.
ERIC Educational Resources Information Center
Lee, Mary Y.; Albright, Susan A.; Alkasab, Tarik; Damassa, David A.; Wang, Paul J.; Eaton, Elizabeth K.
2003-01-01
Describes a seven-year experience with developing the Tufts Health Sciences Database, a database-driven information management system that combines the strengths of a digital library, content delivery tools, and curriculum management. Identifies major effects on teaching and learning. Also addresses issues of faculty development, copyright and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancey, P.; Logg, C.
DEPOT has been developed to provide tracking for the Stanford Linear Collider (SLC) control system equipment. For each piece of equipment entered into the database, complete location, service, maintenance, modification, certification, and radiation exposure histories can be maintained. To facilitate data entry accuracy, efficiency, and consistency, barcoding technology has been used extensively. DEPOT has been an important tool in improving the reliability of the microsystems controlling SLC. This document describes the components of the DEPOT database, the elements in the database records, and the use of the supporting programs for entering data, searching the database, and producing reports from the information.
[Establishment of a regional pelvic trauma database in Hunan Province].
Cheng, Liang; Zhu, Yong; Long, Haitao; Yang, Junxiao; Sun, Buhua; Li, Kanghua
2017-04-28
To establish a database for pelvic trauma in Hunan Province, and to start the work of a multicenter pelvic trauma registry. Methods: To establish the database, the literature relevant to pelvic trauma was screened, experience from established trauma databases in China and abroad was drawn upon, and the actual conditions of pelvic trauma rescue in Hunan Province were considered. The database was built on PostgreSQL and the Java programming language (version 1.6). Results: The complex procedure for pelvic trauma rescue was described structurally. The contents of the database include general patient information, injury condition, prehospital rescue, condition on admission, treatment in hospital, status on discharge, diagnosis, classification, complications, trauma scoring and therapeutic effect. The database can be accessed through the internet via browser/server. Its functions include patient information management, data export, history query, progress reporting, video and image management and personal information management. Conclusion: A whole-life-cycle pelvic trauma database is successfully established for the first time in China. It is scientific, functional, practical, and user-friendly.
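A registry schema of the kind described (general patient information, injury details, and so on) might look like the following sketch. It uses SQLite for portability rather than the PostgreSQL backend the authors describe, and every table and column name is a hypothetical illustration, not the Hunan database's actual design:

```python
import sqlite3

# In-memory stand-in for the registry's relational backend.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient (
    patient_id   INTEGER PRIMARY KEY,
    name         TEXT,
    admitted_on  TEXT
);
CREATE TABLE injury (
    injury_id    INTEGER PRIMARY KEY,
    patient_id   INTEGER REFERENCES patient(patient_id),
    tile_class   TEXT,      -- pelvic fracture classification (illustrative)
    iss_score    INTEGER    -- injury severity score (illustrative)
);
""")
con.execute("INSERT INTO patient VALUES (1, 'anon', '2016-05-02')")
con.execute("INSERT INTO injury VALUES (1, 1, 'B2', 22)")

# A history-query style join, as the registry's query function might issue.
row = con.execute("""SELECT p.name, i.tile_class
                     FROM patient p JOIN injury i USING (patient_id)
                  """).fetchone()
print(row)  # ('anon', 'B2')
```

Separating the patient record from per-injury rows is what lets a registry follow the whole life cycle of a case through admission, treatment, and discharge.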
Database on Demand: insight how to build your own DBaaS
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio
2015-12-01
At CERN, a number of key database applications run on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, three major RDBMS (relational database management system) vendors are offered. In this article we present the current status of the service after almost three years of operation, some insight into our redesigned software engineering, and its near-future evolution.
Acquisition and Logistics Integration: An Examination of Reliability Management within the Marine Corps Acquisition Process
Norcross, Marvin L., Jr.
2002-12-01
The analysis covers the HMMWV family of vehicles, the LVS family of vehicles, and the M198 Howitzer, and is limited to an assessment of reliability management issues within the Marine Corps acquisition process.
The Reliability of Methodological Ratings for speechBITE Using the PEDro-P Scale
ERIC Educational Resources Information Center
Murray, Elizabeth; Power, Emma; Togher, Leanne; McCabe, Patricia; Munro, Natalie; Smith, Katherine
2013-01-01
Background: speechBITE (http://www.speechbite.com) is an online database established in order to help speech and language therapists gain faster access to relevant research that can used in clinical decision-making. In addition to containing more than 3000 journal references, the database also provides methodological ratings on the PEDro-P (an…
Design, Development, and Maintenance of the GLOBE Program Website and Database
NASA Technical Reports Server (NTRS)
Brummer, Renate; Matsumoto, Clifford
2004-01-01
This is a 1-year (FY 03) proposal to design and develop enhancements, implement improved efficiency and reliability, and provide responsive maintenance for the operational GLOBE (Global Learning and Observations to Benefit the Environment) Program website and database. This proposal is renewable, with a 5% annual inflation factor providing an approximate cost for the out years.
Insertion algorithms for network model database management systems
NASA Astrophysics Data System (ADS)
Mamadolimov, Abdurashid; Khikmat, Saburov
2017-12-01
The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and a query comparison is expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the update operation for network model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.
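One way a partial-order schema can cut down query comparisons during an update: if an element does not precede the new item, then by transitivity none of its successors can, so whole branches of the graph are skipped without being compared. The sketch below illustrates that pruning idea under stated assumptions (a successor map, a divisibility order as the comparison); it is not the authors' algorithm:

```python
def maximal_predecessors(succ, new, precedes):
    """Find the maximal existing elements that precede `new`.

    succ maps each node to its immediate successors in the partial
    order; precedes(a, b) is the expensive query comparison. Each
    node is compared at most once, and successors of a failing node
    are never compared at all.
    """
    roots = set(succ) - {c for cs in succ.values() for c in cs}
    memo = {}
    def prec(n):
        if n not in memo:
            memo[n] = precedes(n, new)   # one comparison per node
        return memo[n]
    frontier = [r for r in roots if prec(r)]
    seen, maximal = set(frontier), set()
    while frontier:
        n = frontier.pop()
        below = [c for c in succ.get(n, ()) if prec(c)]
        if not below:          # no successor precedes new: n is maximal
            maximal.add(n)
        for c in below:
            if c not in seen:
                seen.add(c)
                frontier.append(c)
    return maximal

# Divisibility order on {1, 2, 3, 4}; insert 6: maximal divisors are 2 and 3.
print(maximal_predecessors({1: {2, 3}, 2: {4}, 3: set(), 4: set()},
                           6, lambda a, b: b % a == 0))  # {2, 3}
```

The returned maximal predecessors are exactly the nodes the new element should be attached below when it is inserted into the schema graph.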
Using Statistics for Database Management in an Academic Library.
ERIC Educational Resources Information Center
Hyland, Peter; Wright, Lynne
1996-01-01
Collecting statistical data about database usage by library patrons aids in the management of CD-ROM and database offerings, collection development, and evaluation of training programs. Two approaches to data collection are presented which should be used together: an automated or nonintrusive method which monitors search sessions while the…
Database Software Selection for the Egyptian National STI Network.
ERIC Educational Resources Information Center
Slamecka, Vladimir
The evaluation and selection of information/data management system software for the Egyptian National Scientific and Technical (STI) Network are described. An overview of the state-of-the-art of database technology elaborates on the differences between information retrieval and database management systems (DBMS). The desirable characteristics of…
Teaching Database Management System Use in a Library School Curriculum.
ERIC Educational Resources Information Center
Cooper, Michael D.
1985-01-01
Description of database management systems course being taught to students at School of Library and Information Studies, University of California, Berkeley, notes course structure, assignments, and course evaluation. Approaches to teaching concepts of three types of database systems are discussed and systems used by students in the course are…
Reliability Information Analysis Center 1st Quarter 2007, Technical Area Task (TAT) Report
2007-02-05
Created a new SQL Server database for the "PC Configuration" web application; added security roles (closed 4235) and posted the application to production. Wrote and ran SQL Server scripts to migrate production databases to the new server. Created backup jobs for the new SQL Server databases. Continued the second phase of the TENA demo; extensive tasking was established and assigned. A TENA interface to EW Server was reaffirmed after some uncertainty.
[SCREENING OF NUTRITIONAL STATUS AMONG ELDERLY PEOPLE AT FAMILY MEDICINE].
Račić, M; Ivković, N; Kusmuk, S
2015-11-01
The prevalence of malnutrition in elderly is high. Malnutrition or risk of malnutrition can be detected by use of nutritional screening or assessment tools. This systematic review aimed to identify tools that would be reliable, valid, sensitive and specific for nutritional status screening in patients older than 65 at family medicine. The review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Studies were retrieved using MEDLINE (via Ovid), PubMed and Cochrane Library electronic databases and by manual searching of relevant articles listed in reference list of key publications. The electronic databases were searched using defined key words adapted to each database and using MESH terms. Manual revision of reviews and original articles was performed using Electronic Journals Library. Included studies involved development and validation of screening tools in the community-dwelling elderly population. The tools, subjected to validity and reliability testing for use in the community-dwelling elderly population were Mini Nutritional Assessment (MNA), Mini Nutritional Assessment-Short Form (MNA-SF), Nutrition Screening Initiative (NSI), which includes DETERMINE list, Level I and II Screen, Seniors in the Community: Risk Evaluation for Eating, and Nutrition (SCREEN I and SCREEN II), Subjective Global Assessment (SGA), Nutritional Risk Index (NRI), and Malaysian and South African tool. MNA and MNA-SF appear to have highest reliability and validity for screening of community-dwelling elderly, while the reliability and validity of SCREEN II are good. The authors conclude that whilst several tools have been developed, most have not undergone extensive testing to demonstrate their ability to identify nutritional risk. MNA and MNA-SF have the highest reliability and validity for screening of nutritional status in the community-dwelling elderly, and the reliability and validity of SCREEN II are satisfactory. 
These instruments also contain all three nutritional status indicators and are practical for use in family medicine. However, a gold standard for screening cannot yet be set, because reliability testing and continued validation in studies with a higher level of evidence still need to be conducted in family medicine.
The land management and operations database (LMOD)
USDA-ARS?s Scientific Manuscript database
This paper presents the design, implementation, deployment, and application of the Land Management and Operations Database (LMOD). LMOD is the single authoritative source for reference land management and operation reference data within the USDA enterprise data warehouse. LMOD supports modeling appl...
DOT National Transportation Integrated Search
2006-01-01
An Internet-based, spatiotemporal Geotechnical Database Management System (GDBMS) Framework was designed, developed, and implemented at the Virginia Department of Transportation (VDOT) in 2002 to retrieve, manage, archive, and analyze geotechnical da...
The Reliability of College Grades
ERIC Educational Resources Information Center
Beatty, Adam S.; Walmsley, Philip T.; Sackett, Paul R.; Kuncel, Nathan R.; Koch, Amanda J.
2015-01-01
Little is known about the reliability of college grades relative to how prominently they are used in educational research, and the results to date tend to be based on small sample studies or are decades old. This study uses two large databases (N > 800,000) from over 200 educational institutions spanning 13 years and finds that both first-year…
A Collaborative Reasoning Maintenance System for a Reliable Application of Legislations
NASA Astrophysics Data System (ADS)
Tamisier, Thomas; Didry, Yoann; Parisot, Olivier; Feltz, Fernand
Decision support systems are nowadays used to disentangle all kinds of intricate situations and perform sophisticated analysis. Moreover, they are applied in areas where the knowledge can be heterogeneous, partially un-formalized, implicit, or diffuse. The representation and management of this knowledge become the key point in ensuring the proper functioning of the system and keeping an intuitive view of its expected behavior. This paper presents a generic architecture for implementing knowledge-base systems used in collaborative business, where the knowledge is organized into different databases according to the usage, persistence and quality of the information. This approach is illustrated with Cadral, a customizable automated tool built on this architecture and used for processing family benefits applications at the National Family Benefits Fund of the Grand-Duchy of Luxembourg.
A review on prognostics approaches for remaining useful life of lithium-ion battery
NASA Astrophysics Data System (ADS)
Su, C.; Chen, H. J.
2017-11-01
Lithium-ion (Li-ion) batteries are a core component of various industrial systems, including satellites, spacecraft and electric vehicles. The mechanism of performance degradation and the estimation of remaining useful life (RUL) correlate closely with the operating state and reliability of these systems. Furthermore, RUL prediction of Li-ion batteries is crucial for operation scheduling, spare parts management and maintenance decisions for such systems. In recent years, performance degradation prognostics and RUL estimation approaches have become a research focus for Li-ion batteries. This paper summarizes the approaches used in Li-ion battery RUL estimation, classified into three categories: model-based, data-based and hybrid approaches. The key issues and future trends for battery RUL estimation are also discussed.
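As a minimal illustration of the data-based category, a capacity-fade trend can be fitted and extrapolated to an end-of-life threshold. Real methods in the survey (particle filters, Gaussian-process regression, neural networks) replace this deliberately naive linear model, and all numbers below are invented:

```python
def rul_linear(cycles, capacity, eol):
    """Fit capacity ~ a + b*cycle by least squares and extrapolate
    to the end-of-life capacity threshold `eol`; return the number
    of cycles remaining after the last observation."""
    n = len(cycles)
    mx, my = sum(cycles) / n, sum(capacity) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(cycles, capacity)) \
            / sum((x - mx) ** 2 for x in cycles)
    intercept = my - slope * mx
    eol_cycle = (eol - intercept) / slope
    return eol_cycle - cycles[-1]

# Invented fade data: 2.0 Ah nominal, losing ~0.001 Ah per cycle,
# with end of life defined at 80 % of nominal capacity (1.6 Ah).
print(rul_linear([0, 100, 200], [2.0, 1.9, 1.8], 1.6))  # ~200 cycles left
```

Hybrid approaches in the survey's taxonomy would combine such a data-driven trend with a physics-based degradation model.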
A female black bear denning habitat model using a geographic information system
Clark, J.D.; Hayes, S.G.; Pledger, J.M.
1998-01-01
We used the Mahalanobis distance statistic and a raster geographic information system (GIS) to model potential black bear (Ursus americanus) denning habitat in the Ouachita Mountains of Arkansas. The Mahalanobis distance statistic was used to represent the standard squared distance between sample variates in the GIS database (forest cover type, elevation, slope, aspect, distance to streams, distance to roads, and forest cover richness) and variates at known bear dens. Two models were developed: a generalized model for all den locations and another specific to dens in rock cavities. Differences between habitat at den sites and habitat across the study area were represented in 2 new GIS themes as Mahalanobis distance values. Cells similar to the mean vector derived from the known dens had low Mahalanobis distance values, and dissimilar cells had high values. The reliability of the predictive model was tested by overlaying den locations collected subsequent to original model development on the resultant den habitat themes. Although the generalized model demonstrated poor reliability, the model specific to rock dens had good reliability. Bears were more likely to choose rock den locations with low Mahalanobis distance values and less likely to choose those with high values. The model can be used to plan the timing and extent of management actions (e.g., road building, prescribed fire, timber harvest) most appropriate for those sites with high or low denning potential.
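The core statistic is straightforward to reproduce: the squared Mahalanobis distance of a map cell's habitat variables from the mean vector of known den sites, with low values flagging den-like cells. The two variables and all values below are invented for illustration; the study used GIS layers such as cover type, elevation, slope and distances to streams and roads:

```python
import numpy as np

# Invented habitat measurements at four known den sites:
# columns are elevation (m) and slope (degrees).
dens = np.array([[620.0, 25.0],
                 [680.0, 30.0],
                 [650.0, 35.0],
                 [700.0, 28.0]])
mu = dens.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(dens, rowvar=False))

def d2(cell):
    """Squared Mahalanobis distance from the den mean vector."""
    diff = np.asarray(cell, dtype=float) - mu
    return float(diff @ cov_inv @ diff)

# A cell near the den mean scores low (den-like); a dissimilar cell high.
print(d2([660.0, 29.0]) < d2([300.0, 5.0]))  # True
```

Applying `d2` to every raster cell yields exactly the kind of GIS theme the study produced, which can then be thresholded to map high- and low-potential denning habitat.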
Mohammadi, Ali; Ahmadi, Maryam; Gharagozlu, Alireza
2016-03-01
Each year, around 1.2 million people die in the road traffic incidents. Reducing traffic accidents requires an exact understanding of the risk factors associated with traffic patterns and behaviors. Properly analyzing these factors calls for a comprehensive system for collecting and processing accident data. The aim of this study was to develop a minimum data set (MDS) for an information management system to study traffic accidents in Iran. This descriptive, cross-sectional study was performed in 2014. Data were collected from the traffic police, trauma centers, medical emergency centers, and via the internet. The investigated resources for this study were forms, databases, and documents retrieved from the internet. Forms and databases were identical, and one sample of each was evaluated. The related internet-sourced data were evaluated in their entirety. Data were collected using three checklists. In order to arrive at a consensus about the data elements, the decision Delphi technique was applied using questionnaires. The content validity and reliability of the questionnaires were assessed by experts' opinions and the test-retest method, respectively. An (MDS) of a traffic accident information management system was assigned to three sections: a minimum data set for traffic police with six classes, including 118 data elements; a trauma center with five data classes, including 57 data elements; and a medical emergency center, with 11 classes, including 64 data elements. Planning for the prevention of traffic accidents requires standardized data. As the foundation for crash prevention efforts, existing standard data infrastructures present policymakers and government officials with a great opportunity to strengthen and integrate existing accident information systems to better track road traffic injuries and fatalities.
Mohammadi, Ali; Ahmadi, Maryam; Gharagozlu, Alireza
2016-01-01
Background: Each year, around 1.2 million people die in the road traffic incidents. Reducing traffic accidents requires an exact understanding of the risk factors associated with traffic patterns and behaviors. Properly analyzing these factors calls for a comprehensive system for collecting and processing accident data. Objectives: The aim of this study was to develop a minimum data set (MDS) for an information management system to study traffic accidents in Iran. Materials and Methods: This descriptive, cross-sectional study was performed in 2014. Data were collected from the traffic police, trauma centers, medical emergency centers, and via the internet. The investigated resources for this study were forms, databases, and documents retrieved from the internet. Forms and databases were identical, and one sample of each was evaluated. The related internet-sourced data were evaluated in their entirety. Data were collected using three checklists. In order to arrive at a consensus about the data elements, the decision Delphi technique was applied using questionnaires. The content validity and reliability of the questionnaires were assessed by experts’ opinions and the test-retest method, respectively. Results: An (MDS) of a traffic accident information management system was assigned to three sections: a minimum data set for traffic police with six classes, including 118 data elements; a trauma center with five data classes, including 57 data elements; and a medical emergency center, with 11 classes, including 64 data elements. Conclusions: Planning for the prevention of traffic accidents requires standardized data. As the foundation for crash prevention efforts, existing standard data infrastructures present policymakers and government officials with a great opportunity to strengthen and integrate existing accident information systems to better track road traffic injuries and fatalities. PMID:27247791
NASA Astrophysics Data System (ADS)
Taira, Ricky K.; Chan, Kelby K.; Stewart, Brent K.; Weinberg, Wolfram S.
1991-07-01
Reliability is an increasing concern when moving PACS from the experimental laboratory to the clinical environment. Any system downtime may seriously affect patient care. The authors report on the several classes of errors encountered during the pre-clinical release of the PACS during the past several months and present the solutions implemented to handle them. The reliability issues discussed include: (1) environmental precautions, (2) database backups, (3) monitor routines of critical resources and processes, (4) hardware redundancy (networks, archives), and (5) development of a PACS quality control program.
Information Management System, Materials Research Society Fall Meeting (2013), Photovoltaics Informatics: scientific data management, database and data systems design, database clusters, storage systems integration, and distributed data analytics.
DOT National Transportation Integrated Search
2007-01-01
An Internet-based, spatiotemporal Geotechnical Database Management System (GDBMS) Framework was implemented at the Virginia Department of Transportation (VDOT) in 2002 to manage geotechnical data using a distributed Geographical Information System (G...
23 CFR 973.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS MANAGEMENT... system; (2) A process to operate and maintain the management systems and their associated databases; (3... systems shall use databases with a common or coordinated reference system that can be used to geolocate...
23 CFR 973.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS MANAGEMENT... system; (2) A process to operate and maintain the management systems and their associated databases; (3... systems shall use databases with a common or coordinated reference system that can be used to geolocate...
Implementation of remote monitoring and managing switches
NASA Astrophysics Data System (ADS)
Leng, Junmin; Fu, Guo
2010-12-01
In order to strengthen the safety performance of the network and provide convenience and efficiency for operators and managers, a system for remote monitoring and management of switches has been designed and implemented using advanced network technology and existing network resources. A fast Internet Protocol camera (FS IP Camera) is selected, which has a 32-bit RISC embedded processor and supports a number of protocols. An optimized image compression algorithm, Motion-JPEG, is adopted so that high-resolution images can be transmitted over narrow network bandwidth. The architecture of the whole monitoring and management system is designed and implemented according to the current infrastructure of the network and switches, and the control and administrative software is designed accordingly. The dynamic-webpage Java Server Pages (JSP) development platform is utilized in the system, and an SQL (Structured Query Language) Server database is used to store and access image information, network messages and user data. The reliability and security of the system are further strengthened by access control. The software is made cross-platform, so multiple operating systems (UNIX, Linux and Windows) are supported. Application of the system can greatly reduce manpower cost and can quickly find and solve problems.
The design, implementation, and evaluation of online credit nutrition courses: a systematic review.
Cohen, Nancy L; Carbone, Elena T; Beffa-Negrini, Patricia A
2011-01-01
To assess how postsecondary online nutrition education courses (ONEC) are delivered, determine ONEC effectiveness, identify theoretical models used, and identify future research needs. Systematic search of database literature. Postsecondary education. Nine research articles evaluating postsecondary ONEC. Knowledge/performance outcomes and student satisfaction, motivation, or perceptions. Systematic search of 922 articles and review of 9 articles meeting search criteria. Little research regarding ONEC marketing/management existed. Studies primarily evaluated introductory courses using email/websites (before 2000), or course management systems (after 2002). None used true experimental designs; just 3 addressed validity or reliability of measures or pilot-tested instruments. Three articles used theoretical models in course design; few used theories to guide evaluations. Four quasi-experimental studies indicated no differences in nutrition knowledge/performance between online and face-to-face learners. Results were inconclusive regarding student satisfaction, motivation, or perceptions. Students can gain knowledge in online as well as in face-to-face nutrition courses, but satisfaction was mixed. More up-to-date investigations on effective practices are warranted, using theories to identify factors that enhance student outcomes, addressing emerging technologies, and documenting ONEC marketing, management, and delivery. Adequate training/support for faculty is needed to improve student experiences and faculty time management. Copyright © 2011 Society for Nutrition Education. Published by Elsevier Inc. All rights reserved.
Jordan, Kelvin; Clarke, Alexandra M; Symmons, Deborah PM; Fleming, Douglas; Porcheret, Mark; Kadam, Umesh T; Croft, Peter
2007-01-01
Background Primary care consultation data are an important source of information on morbidity prevalence. It is not known how reliable such figures are. Aim To compare annual consultation prevalence estimates for musculoskeletal conditions derived from four general practice consultation databases. Design of study Retrospective study of general practice consultation records. Setting Three national general practice consultation databases: i) Fourth Morbidity Statistics from General Practice (MSGP4, 1991/92), ii) Royal College of General Practitioners Weekly Returns Service (RCGP WRS, 2001), and iii) General Practice Research Database (GPRD, 1991 and 2001); and one regional database (Consultations in Primary Care Archive, 2001). Method Age-sex standardised persons consulting annual prevalence rates for musculoskeletal conditions overall, rheumatoid arthritis, osteoarthritis and arthralgia were derived for patients aged 15 years and over. Results GPRD prevalence of any musculoskeletal condition, rheumatoid arthritis and osteoarthritis was lower than that of the other databases. This is likely to be due to GPs not needing to record every consultation made for a chronic condition. MSGP4 gave the highest prevalence for osteoarthritis but low prevalence of arthralgia which reflects encouragement for GPs to use diagnostic rather than symptom codes. Conclusion Considerable variation exists in consultation prevalence estimates for musculoskeletal conditions. Researchers and health service planners should be aware that estimates of disease occurrence based on consultation will be influenced by choice of database. This is likely to be true for other chronic diseases and where alternative symptom labels exist for a disease. RCGP WRS may give the most reliable prevalence figures for musculoskeletal and other chronic diseases. PMID:17244418
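Age-sex standardised rates of the kind compared across these databases are computed by weighting stratum-specific rates with a standard population, so that databases with different age-sex mixes become comparable. A minimal sketch, with entirely invented strata and weights:

```python
def standardised_rate(strata):
    """Direct standardisation.

    strata: list of (cases, persons, standard_population) tuples,
    one per age-sex stratum. Returns the standardised rate.
    """
    total_std = sum(s for _, _, s in strata)
    return sum((c / n) * s for c, n, s in strata) / total_std

# Invented example: four age-sex strata of a practice population,
# weighted by an invented standard population.
strata = [(30, 1000, 20000),    # e.g. males 15-44
          (80, 1000, 15000),    # males 45+
          (40, 1000, 20000),    # females 15-44
          (120, 1000, 15000)]   # females 45+
print(round(standardised_rate(strata) * 1000, 1))  # 62.9 per 1000
```

Because every database's stratum rates are weighted by the same standard population, differences in the resulting figures reflect recording behaviour (such as GPs not re-coding chronic conditions) rather than demographic structure.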
Salary Management System for Small and Medium-sized Enterprises
NASA Astrophysics Data System (ADS)
Hao, Zhang; Guangli, Xu; Yuhuan, Zhang; Yilong, Lei
In small and medium-sized enterprises (SMEs), wage entry, calculation, and totaling have traditionally been done manually; the data volume is large, processing is slow, and errors are easy to make, resulting in low efficiency. The purpose of this paper is to present the basis of a salary management system: a scientific database and a computerized payroll system that replace much of the former manual work, reducing duplicated staff labor and improving working efficiency. Built around the actual needs of SMEs, the system was developed through in-depth study and practice of the C/S (client/server) model, the PowerBuilder 10.0 development tool, databases, and the SQL language, completing the requirements analysis, database design, and application design and development for a payroll system. The system includes database files for wages, departments, units, and personnel, and provides data management, department management, personnel management, and other functions; query, add, delete, and modify operations are realized through control and management of the database. Testing showed the design to be reasonable, the functionality fairly complete, and the operation stable, meeting the basic needs of the work.
NASA Astrophysics Data System (ADS)
Shi, Congming; Wang, Feng; Deng, Hui; Liu, Yingbo; Liu, Cuiyin; Wei, Shoulin
2017-08-01
As a dedicated synthetic aperture radio interferometer in China, the MingantU SpEctral Radioheliograph (MUSER), initially known as the Chinese Spectral RadioHeliograph (CSRH), has entered the stage of routine observation. More than 23 million data records per day must be managed effectively to provide high-performance data query and retrieval for scientific data reduction. In light of the massive amounts of data generated by the MUSER, this paper proposes a novel data management technique called the negative database (ND) and uses it to implement a data management system for the MUSER. Built on a key-value database, the ND technique makes full use of the complement set of observational data to derive the requisite information. Experimental results showed that the proposed ND can significantly reduce storage volume in comparison with a relational database management system (RDBMS). Even when the time needed to derive absent records is taken into account, its overall performance, including querying and deriving data, is comparable with that of an RDBMS. The ND technique effectively solves the problem of massive data storage for the MUSER and is a valuable reference for the massive data management required by next-generation telescopes.
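The core idea of a negative database, storing the complement of the data and deriving present records from it, can be illustrated with a toy sketch. This is our own simplified illustration over a small finite key universe, not the MUSER implementation, which is built on a key-value store.

```python
# Toy sketch of the negative-database (ND) idea: instead of storing the
# records that exist, store the complement -- the keys that are absent --
# and derive the present records as (universe minus ND). This is an
# illustrative simplification, not the MUSER system itself.

def build_nd(universe, present):
    """Store only the absent keys (the complement of the observed data)."""
    return set(universe) - set(present)

def derive_present(universe, nd):
    """Recover the present records from the negative database."""
    return sorted(k for k in universe if k not in nd)

universe = range(10)                 # all possible record keys
present = [0, 1, 2, 3, 4, 5, 8, 9]  # observed records; 6 and 7 are missing

nd = build_nd(universe, present)
print(nd)                            # {6, 7} -- small when data are mostly complete
print(derive_present(universe, nd))  # [0, 1, 2, 3, 4, 5, 8, 9]
```

When observations are nearly complete, the complement set is far smaller than the data itself, which is the intuition behind the reported storage savings.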
Clinical code set engineering for reusing EHR data for research: A review.
Williams, Richard; Kontopantelis, Evangelos; Buchan, Iain; Peek, Niels
2017-06-01
The construction of reliable, reusable clinical code sets is essential when re-using Electronic Health Record (EHR) data for research. Yet code set definitions are rarely transparent and their sharing is almost non-existent. There is a lack of methodological standards for the management (construction, sharing, revision and reuse) of clinical code sets which needs to be addressed to ensure the reliability and credibility of studies which use code sets. To review methodological literature on the management of sets of clinical codes used in research on clinical databases and to provide a list of best practice recommendations for future studies and software tools. We performed an exhaustive search for methodological papers about clinical code set engineering for re-using EHR data in research. This was supplemented with papers identified by snowball sampling. In addition, a list of e-phenotyping systems was constructed by merging references from several systematic reviews on this topic, and the processes adopted by those systems for code set management was reviewed. Thirty methodological papers were reviewed. Common approaches included: creating an initial list of synonyms for the condition of interest (n=20); making use of the hierarchical nature of coding terminologies during searching (n=23); reviewing sets with clinician input (n=20); and reusing and updating an existing code set (n=20). Several open source software tools (n=3) were discovered. There is a need for software tools that enable users to easily and quickly create, revise, extend, review and share code sets and we provide a list of recommendations for their design and implementation. Research re-using EHR data could be improved through the further development, more widespread use and routine reporting of the methods by which clinical codes were selected. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
48 CFR 52.232-33 - Payment by Electronic Funds Transfer-System for Award Management.
Code of Federal Regulations, 2013 CFR
2013-10-01
... contained in the System for Award Management (SAM) database. In the event that the EFT information changes, the Contractor shall be responsible for providing the updated information to the SAM database. (c... 210. (d) Suspension of payment. If the Contractor's EFT information in the SAM database is incorrect...
48 CFR 52.232-33 - Payment by Electronic Funds Transfer-System for Award Management.
Code of Federal Regulations, 2014 CFR
2014-10-01
... contained in the System for Award Management (SAM) database. In the event that the EFT information changes, the Contractor shall be responsible for providing the updated information to the SAM database. (c... 210. (d) Suspension of payment. If the Contractor's EFT information in the SAM database is incorrect...
Microcomputer-Based Access to Machine-Readable Numeric Databases.
ERIC Educational Resources Information Center
Wenzel, Patrick
1988-01-01
Describes the use of microcomputers and relational database management systems to improve access to numeric databases by the Data and Program Library Service at the University of Wisconsin. The internal records management system, in-house reference tools, and plans to extend these tools to the entire campus are discussed. (3 references) (CLB)
ERIC Educational Resources Information Center
Hoffman, Tony
Sophisticated database management systems (DBMS) for microcomputers are becoming increasingly easy to use, allowing small school districts to develop their own autonomous databases for tracking enrollment and student progress in special education. DBMS applications can be designed for maintenance by district personnel with little technical…
Code of Federal Regulations, 2014 CFR
2014-10-01
... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...
A Database Design and Development Case: NanoTEK Networks
ERIC Educational Resources Information Center
Ballenger, Robert M.
2010-01-01
This case provides a real-world project-oriented case study for students enrolled in a management information systems, database management, or systems analysis and design course in which database design and development are taught. The case consists of a business scenario to provide background information and details of the unique operating…
ERIC Educational Resources Information Center
Dalrymple, Prudence W.; Roderer, Nancy K.
1994-01-01
Highlights the changes that have occurred from 1987-93 in database access systems. Topics addressed include types of databases, including CD-ROMs; enduser interface; database selection; database access management, including library instruction and use of primary literature; economic issues; database users; the search process; and improving…
NASA Astrophysics Data System (ADS)
Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong
2015-03-01
Boring logs are widely used in geological field studies since the data describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field, as conventional management and visualization methods are not suited to integrating and combining large data sets. We developed an iPad application that enables its user to search boring logs rapidly and visualize them using the augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile-based borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of the corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions for large borehole databases for the other modules. A field survey was also carried out using more than 100,000 borehole data records.
Kuhn, Stefan; Schlörer, Nils E
2015-08-01
With its laboratory information management system, nmrshiftdb2 supports the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database while granting full access to nmrshiftdb2's World Wide Web database. For lab users, this freely available system allows the submission of orders for measurement, transfers recorded data automatically or manually, and enables download of spectra via a web interface, as well as integrated access to the prediction, search, and assignment tools of the NMR database. For staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics function for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and database are based on a web interface as front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.
PIQMIe: a web server for semi-quantitative proteomics data management and analysis
Kuzniar, Arnold; Kanaar, Roland
2014-01-01
We present the Proteomics Identifications and Quantitations Data Management and Integration Service or PIQMIe that aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry based proteomics experiments. PIQMIe readily integrates peptide and (non-redundant) protein identifications and quantitations from multiple experiments with additional biological information on the protein entries, and makes the linked data available in the form of a light-weight relational database, which enables dedicated data analyses (e.g. in R) and user-driven queries. Using the web interface, users are presented with a concise summary of their proteomics experiments in numerical and graphical forms, as well as with a searchable protein grid and interactive visualization tools to aid in the rapid assessment of the experiments and in the identification of proteins of interest. The web server not only provides data access through a web interface but also supports programmatic access through RESTful web service. The web server is available at http://piqmie.semiqprot-emc.cloudlet.sara.nl or http://www.bioinformatics.nl/piqmie. This website is free and open to all users and there is no login requirement. PMID:24861615
Dutch guidelines for physiotherapy in patients with stress urinary incontinence: an update.
Bernards, Arnold T M; Berghmans, Bary C M; Slieker-Ten Hove, Marijke C Ph; Staal, J Bart; de Bie, Rob A; Hendriks, Erik J M
2014-02-01
Stress urinary incontinence (SUI) is the most common form of incontinence impacting on quality of life (QOL) and is associated with high financial, social, and emotional costs. The purpose of this study was to provide an update of the existing Dutch evidence-based clinical practice guidelines (CPGs) for physiotherapy management of patients with stress urinary incontinence (SUI), in order to support physiotherapists in decision making and to improve the efficacy and uniformity of care. A computerized literature search of relevant databases was performed to search for information regarding etiology, prognosis, and physiotherapy assessment and management in patients with SUI. Where no evidence was available, recommendations were based on consensus. Clinical application of CPGs and feasibility were reviewed. The diagnostic process consists of systematic history taking and physical examination supported by reliable and valid assessment tools to determine physiological potential for recovery. Therapy is related to different problem categories. SUI treatment is generally based on pelvic floor muscle exercises combined with patient education and counseling. An important strategy is to reduce prevalent SUI by reducing influencing risk factors. Scientific evidence supporting assessment and management of SUI is strong. The CPGs reflect the current state of knowledge of effective and tailor-made intervention in SUI patients.
PIQMIe: a web server for semi-quantitative proteomics data management and analysis.
Kuzniar, Arnold; Kanaar, Roland
2014-07-01
We present the Proteomics Identifications and Quantitations Data Management and Integration Service or PIQMIe that aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry based proteomics experiments. PIQMIe readily integrates peptide and (non-redundant) protein identifications and quantitations from multiple experiments with additional biological information on the protein entries, and makes the linked data available in the form of a light-weight relational database, which enables dedicated data analyses (e.g. in R) and user-driven queries. Using the web interface, users are presented with a concise summary of their proteomics experiments in numerical and graphical forms, as well as with a searchable protein grid and interactive visualization tools to aid in the rapid assessment of the experiments and in the identification of proteins of interest. The web server not only provides data access through a web interface but also supports programmatic access through RESTful web service. The web server is available at http://piqmie.semiqprot-emc.cloudlet.sara.nl or http://www.bioinformatics.nl/piqmie. This website is free and open to all users and there is no login requirement. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Network Configuration of Oracle and Database Programming Using SQL
NASA Technical Reports Server (NTRS)
Davis, Melton; Abdurrashid, Jibril; Diaz, Philip; Harris, W. C.
2000-01-01
A database can be defined as a collection of information organized in such a way that it can be retrieved and used. A database management system (DBMS) can further be defined as the tool that enables us to manage and interact with the database. The Oracle 8 Server is a state-of-the-art information management environment. It is a repository for very large amounts of data, and gives users rapid access to that data. The Oracle 8 Server allows for sharing of data between applications; the information is stored in one place and used by many systems. My research will focus primarily on SQL (Structured Query Language) programming. SQL is the way you define and manipulate data in Oracle's relational database. SQL is the industry standard adopted by all database vendors. When programming with SQL, you work on sets of data (i.e., information is not processed one record at a time).
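The abstract's closing point, that SQL operates on sets of data rather than one record at a time, can be made concrete with a small sketch. Here Python's built-in sqlite3 stands in for the Oracle 8 Server discussed in the text; the table and data are hypothetical.

```python
import sqlite3

# Minimal illustration of set-based SQL: a single UPDATE statement acts on
# every matching row at once, with no record-by-record loop in application
# code. sqlite3 stands in here for the Oracle 8 Server named in the text;
# the employee table is a made-up example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, dept TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employee VALUES (?, ?, ?)",
    [("Ada", "ENG", 1000.0), ("Ben", "ENG", 900.0), ("Cam", "HR", 800.0)],
)

# One set-oriented statement: raise every ENG salary by 10%.
conn.execute("UPDATE employee SET salary = salary * 1.10 WHERE dept = 'ENG'")

rows = conn.execute(
    "SELECT name, ROUND(salary, 2) FROM employee ORDER BY name"
).fetchall()
print(rows)  # [('Ada', 1100.0), ('Ben', 990.0), ('Cam', 800.0)]
```

The same statement would update three rows or three million; the set-at-a-time semantics is what distinguishes SQL from record-at-a-time processing.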
NASA Astrophysics Data System (ADS)
Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George
2017-10-01
In this study the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata is presented. To develop the multi-source geospatial database we have used a Relational Database Management System (RDBMS) on a Structured Query Language (SQL) server which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations between different types of information (e.g. location, date, and instrument) were established. Airborne data, at different processing levels (digital numbers through geocorrected reflectance), were implemented in the geospatial database where the datasets are linked spatially and temporally. An example dataset consisting of airborne hyperspectral imagery, collected for inter- and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas, is presented. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.
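The kind of relational linkage described, associating image products with their acquisition date and instrument through foreign keys, can be sketched as a miniature schema. The table and column names below are our own hypothetical stand-ins, not those of the authors' geodatabase, and sqlite3 stands in for the SQL server.

```python
import sqlite3

# Hypothetical miniature of the relational model described above: flights,
# instruments, and image products linked by foreign keys so any product can
# be traced back to its acquisition date and sensor. Names are illustrative,
# not the authors' actual schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE instrument (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE flight (
    id INTEGER PRIMARY KEY,
    flight_date TEXT,
    instrument_id INTEGER REFERENCES instrument(id)
);
CREATE TABLE product (
    id INTEGER PRIMARY KEY,
    flight_id INTEGER REFERENCES flight(id),
    level TEXT  -- e.g. 'DN', 'radiance', 'geocorrected reflectance'
);
INSERT INTO instrument VALUES (1, 'hyperspectral sensor');
INSERT INTO flight VALUES (10, '2016-07-01', 1);
INSERT INTO product VALUES (100, 10, 'geocorrected reflectance');
""")

# Join across the links: which sensor and date produced this product?
row = conn.execute("""
    SELECT p.level, f.flight_date, i.name
    FROM product p
    JOIN flight f ON p.flight_id = f.id
    JOIN instrument i ON f.instrument_id = i.id
""").fetchone()
print(row)  # ('geocorrected reflectance', '2016-07-01', 'hyperspectral sensor')
```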
Lee, Howard; Chapiro, Julius; Schernthaner, Rüdiger; Duran, Rafael; Wang, Zhijun; Gorodetski, Boris; Geschwind, Jean-François; Lin, MingDe
2015-04-01
The objective of this study was to demonstrate that an intra-arterial liver therapy clinical research database system is a more workflow efficient and robust tool for clinical research than a spreadsheet storage system. The database system could be used to generate clinical research study populations easily with custom search and retrieval criteria. A questionnaire was designed and distributed to 21 board-certified radiologists to assess current data storage problems and clinician reception to a database management system. Based on the questionnaire findings, a customized database and user interface system were created to perform automatic calculations of clinical scores including staging systems such as the Child-Pugh and Barcelona Clinic Liver Cancer, and facilitates data input and output. Questionnaire participants were favorable to a database system. The interface retrieved study-relevant data accurately and effectively. The database effectively produced easy-to-read study-specific patient populations with custom-defined inclusion/exclusion criteria. The database management system is workflow efficient and robust in retrieving, storing, and analyzing data. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Kelley, Steve; Roussopoulos, Nick; Sellis, Timos; Wallace, Sarah
1993-01-01
The Universal Index System (UIS) is an index management system that uses a uniform interface to solve the heterogeneity problem among database management systems. UIS provides an easy-to-use common interface to access all underlying data, but also allows different underlying database management systems, storage representations, and access methods.
Maintaining Research Documents with Database Management Software.
ERIC Educational Resources Information Center
Harrington, Stuart A.
1999-01-01
Discusses taking notes for research projects and organizing them into card files; reviews the literature on personal filing systems; introduces the basic process of database management; and offers a plan for managing research notes. Describes field groups and field definitions, data entry, and creating reports. (LRW)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-22
... CONSUMER PRODUCT SAFETY COMMISSION Agency Information Collection Activities; Announcement of Office of Management and Budget Approval; Publicly Available Consumer Product Safety Information Database... Product Safety Information Database has been approved by the Office of Management and Budget (OMB) under...
A Mechanism for Reliable Mobility Management for Internet of Things Using CoAP
Chun, Seung-Man; Park, Jong-Tae
2017-01-01
Under unreliable constrained wireless networks for Internet of Things (IoT) environments, the loss of the signaling message may frequently occur. Mobile Internet Protocol version 6 (MIPv6) and its variants do not consider this situation. Consequently, as a constrained device moves around different wireless networks, its Internet Protocol (IP) connectivity may be frequently disrupted and power can be drained rapidly. This can result in the loss of important sensing data or a large delay for time-critical IoT services such as healthcare monitoring and disaster management. This paper presents a reliable mobility management mechanism in Internet of Things environments with lossy low-power constrained device and network characteristics. The idea is to use the Internet Engineering Task Force (IETF) Constrained Application Protocol (CoAP) retransmission mechanism to achieve both reliability and simplicity for reliable IoT mobility management. Detailed architecture, algorithms, and message extensions for reliable mobility management are presented. Finally, performance is evaluated using both mathematical analysis and simulation. PMID:28085109
A Mechanism for Reliable Mobility Management for Internet of Things Using CoAP.
Chun, Seung-Man; Park, Jong-Tae
2017-01-12
Under unreliable constrained wireless networks for Internet of Things (IoT) environments, the loss of the signaling message may frequently occur. Mobile Internet Protocol version 6 (MIPv6) and its variants do not consider this situation. Consequently, as a constrained device moves around different wireless networks, its Internet Protocol (IP) connectivity may be frequently disrupted and power can be drained rapidly. This can result in the loss of important sensing data or a large delay for time-critical IoT services such as healthcare monitoring and disaster management. This paper presents a reliable mobility management mechanism in Internet of Things environments with lossy low-power constrained device and network characteristics. The idea is to use the Internet Engineering Task Force (IETF) Constrained Application Protocol (CoAP) retransmission mechanism to achieve both reliability and simplicity for reliable IoT mobility management. Detailed architecture, algorithms, and message extensions for reliable mobility management are presented. Finally, performance is evaluated using both mathematical analysis and simulation.
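The CoAP retransmission mechanism the paper builds on is the confirmable-message timer of RFC 7252: the initial timeout is drawn from [ACK_TIMEOUT, ACK_TIMEOUT × ACK_RANDOM_FACTOR] and doubles after each retry, up to MAX_RETRANSMIT retransmissions. The sketch below illustrates only this generic mechanism, not the paper's mobility-management extensions, and fixes the initial timeout at its minimum for determinism.

```python
# Sketch of CoAP confirmable-message retransmission timing per RFC 7252.
# Defaults: ACK_TIMEOUT = 2 s, ACK_RANDOM_FACTOR = 1.5, MAX_RETRANSMIT = 4.
# The real initial timeout is randomized in [2.0, 3.0]; we use the minimum
# here so the schedule is deterministic. This shows the base CoAP mechanism
# only, not the paper's mobility-management extensions.
ACK_TIMEOUT = 2.0
MAX_RETRANSMIT = 4

def retransmission_schedule(initial_timeout=ACK_TIMEOUT):
    """Return the timeout before each transmission attempt (original + retries)."""
    timeouts = []
    t = initial_timeout
    for _ in range(MAX_RETRANSMIT + 1):  # original send plus 4 retransmissions
        timeouts.append(t)
        t *= 2                           # binary exponential back-off
    return timeouts

print(retransmission_schedule())  # [2.0, 4.0, 8.0, 16.0, 32.0]
```

The doubling back-off is what keeps retransmissions cheap on lossy constrained links: a handful of timers, no per-message state beyond the pending exchange.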
ERIC Educational Resources Information Center
Reardon, Sean F.; Kalogrides, Demetra; Ho, Andrew D.
2017-01-01
There is no comprehensive database of U.S. district-level test scores that is comparable across states. We describe and evaluate a method for constructing such a database. First, we estimate linear, reliability-adjusted linking transformations from state test score scales to the scale of the National Assessment of Educational Progress (NAEP). We…
A systematic review of quantitative burn wound microbiology in the management of burns patients.
Halstead, Fenella D; Lee, Kwang Chear; Kwei, Johnny; Dretzke, Janine; Oppenheim, Beryl A; Moiemen, Naiem S
2018-02-01
The early diagnosis of infection or sepsis in burns are important for patient care. Globally, a large number of burn centres advocate quantitative cultures of wound biopsies for patient management, since there is assumed to be a direct link between the bioburden of a burn wound and the risk of microbial invasion. Given the conflicting study findings in this area, a systematic review was warranted. Bibliographic databases were searched with no language restrictions to August 2015. Study selection, data extraction and risk of bias assessment were performed in duplicate using pre-defined criteria. Substantial heterogeneity precluded quantitative synthesis, and findings were described narratively, sub-grouped by clinical question. Twenty six laboratory and/or clinical studies were included. Substantial heterogeneity hampered comparisons across studies and interpretation of findings. Limited evidence suggests that (i) more than one quantitative microbiology sample is required to obtain reliable estimates of bacterial load; (ii) biopsies are more sensitive than swabs in diagnosing or predicting sepsis; (iii) high bacterial loads may predict worse clinical outcomes, and (iv) both quantitative and semi-quantitative culture reports need to be interpreted with caution and in the context of other clinical risk factors. The evidence base for the utility and reliability of quantitative microbiology for diagnosing or predicting clinical outcomes in burns patients is limited and often poorly reported. Consequently future research is warranted. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Herpes zoster surveillance using electronic databases in the Valencian Community (Spain)
2013-01-01
Background: Epidemiologic data on Herpes Zoster (HZ) disease in Spain are scarce. The objective of this study was to assess the epidemiology of HZ in the Valencian Community (Spain), using outpatient and hospital electronic health databases. Methods: Data from 2007 to 2010 were collected from computerized health databases covering a population of around 5 million inhabitants. Diagnoses were recorded by physicians using the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM). A sample of medical records selected under different criteria was reviewed by a general practitioner to assess the reliability of coding. Results: The average annual incidence of HZ was 4.60 per 1000 persons-year (PY) for all ages (95% CI: 4.57-4.63); HZ was more frequent in women [5.32/1000 PY (95% CI: 5.28-5.37)] and was strongly age-related, with a peak incidence at 70-79 years. A total of 7.16/1000 cases of HZ required hospitalization. Conclusions: The electronic health database used in the Valencian Community is a reliable electronic surveillance tool for HZ disease and will be useful for defining trends in disease burden before and after HZ vaccine introduction. PMID:24094135
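An incidence rate of this kind is simply cases divided by person-years at risk, scaled to 1000, with a Poisson-based confidence interval. The sketch below shows the standard calculation; the case and person-year counts are made-up round numbers chosen to reproduce the headline rate, not the study's raw data.

```python
import math

# Illustrative calculation of an incidence rate per 1000 person-years with
# a 95% confidence interval (normal approximation on the log scale for a
# Poisson count). The counts below are hypothetical, not the study's data.
def incidence_per_1000_py(cases, person_years):
    rate = 1000.0 * cases / person_years
    se_log = 1.0 / math.sqrt(cases)      # SE of log(rate) for a Poisson count
    lower = rate * math.exp(-1.96 * se_log)
    upper = rate * math.exp(1.96 * se_log)
    return rate, lower, upper

rate, lower, upper = incidence_per_1000_py(cases=4600, person_years=1_000_000)
print(f"{rate:.2f} per 1000 PY (95% CI {lower:.2f}-{upper:.2f})")
```

With large case counts the interval is narrow, which is why a 4.60/1000 PY estimate over millions of person-years can carry a CI as tight as 4.57-4.63.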
Barnes, Brian B.; Wilson, Michael B.; Carr, Peter W.; Vitha, Mark F.; Broeckling, Corey D.; Heuberger, Adam L.; Prenni, Jessica; Janis, Gregory C.; Corcoran, Henry; Snow, Nicholas H.; Chopra, Shilpi; Dhandapani, Ramkumar; Tawfall, Amanda; Sumner, Lloyd W.; Boswell, Paul G.
2014-01-01
Gas chromatography-mass spectrometry (GC-MS) is a primary tool used to identify compounds in complex samples. Both mass spectra and GC retention times are matched to those of standards, but it is often impractical to have standards on hand for every compound of interest, so we must rely on shared databases of MS data and GC retention information. Unfortunately, retention databases (e.g. linear retention index libraries) are experimentally restrictive, notoriously unreliable, and strongly instrument dependent, relegating GC retention information to a minor, often negligible role in compound identification despite its potential power. A new methodology called “retention projection” has great potential to overcome the limitations of shared chromatographic databases. In this work, we tested the reliability of the methodology in five independent laboratories. We found that even when each lab ran nominally the same method, the methodology was 3-fold more accurate than retention indexing because it properly accounted for unintentional differences between the GC-MS systems. When the labs used different methods of their own choosing, retention projections were 4- to 165-fold more accurate. More importantly, the distribution of error in the retention projections was predictable across different methods and labs, thus enabling automatic calculation of retention time tolerance windows. Tolerance windows at 99% confidence were generally narrower than those widely used even when physical standards are on hand to measure their retention. With its high accuracy and reliability, the new retention projection methodology makes GC retention a reliable, precise tool for compound identification, even when standards are not available to the user. PMID:24205931
Davis, Chester L; Pierce, John R; Henderson, William; Spencer, C David; Tyler, Christine; Langberg, Robert; Swafford, Jennan; Felan, Gladys S; Kearns, Martha A; Booker, Brigitte
2007-04-01
The Office of the Medical Inspector of the Department of Veterans Affairs (VA) studied the reliability of data collected by the VA's National Surgical Quality Improvement Program (NSQIP). The study focused on case selection bias, accuracy of reports on patients who died, and interrater reliability measurements of patient risk variables and outcomes. Surgical data from a sample of 15 VA medical centers were analyzed. For case selection bias, reviewers applied NSQIP criteria to include or exclude 2,460 patients from the database, comparing their results with those of NSQIP staff. For accurate reporting of patients who died, reviewers compared Social Security numbers of 10,444 NSQIP records with those found in the VA Beneficiary Identification and Records Locator Subsystem, VA Patient Treatment Files, and Social Security Administration death files. For measurement of interrater reliability, reviewers reabstracted 59 variables in each of 550 patient medical records that also were recorded in the NSQIP database. On case selection bias, the reviewers agreed with NSQIP decisions on 2,418 (98%) of the 2,460 cases. Computer record matching identified 4 more deaths than the NSQIP total of 198, a difference of about 2%. For 52 of the categorical variables, agreement, uncorrected for chance, was 96%. For 48 of 52 categorical variables, kappas ranged from 0.61 to 1.0 (substantial to almost perfect agreement); none of the variables had kappas of less than 0.20 (slight to poor agreement). This sample of medical centers shows adherence to criteria in selecting cases for the NSQIP database, for reporting deaths, and for collecting patient risk variables.
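The kappa statistic used above to grade interrater agreement (0.61-1.0 read as substantial to almost perfect) corrects raw percent agreement for the agreement expected by chance. A minimal computation for a binary variable from a 2x2 agreement table is shown below; the counts are hypothetical, not the VA study's data.

```python
# Cohen's kappa for a binary variable from a 2x2 agreement table:
# kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected is
# the chance agreement implied by each rater's marginal totals. The counts
# below are made-up illustrations, not the NSQIP reabstraction data.
def cohens_kappa(a, b, c, d):
    """a = both raters 'yes', b = rater1 yes / rater2 no,
    c = rater1 no / rater2 yes, d = both raters 'no'."""
    n = a + b + c + d
    p_observed = (a + d) / n
    p_yes = ((a + b) / n) * ((a + c) / n)  # chance agreement on 'yes'
    p_no = ((c + d) / n) * ((b + d) / n)   # chance agreement on 'no'
    p_expected = p_yes + p_no
    return (p_observed - p_expected) / (1 - p_expected)

# 550 records: 200 both-yes, 25 discordant each way, 300 both-no.
# Raw agreement is 500/550 ~ 0.91, but kappa discounts chance agreement.
print(round(cohens_kappa(200, 25, 25, 300), 3))  # 0.812
```

This is why the study reports both the 96% uncorrected agreement and the kappas: high raw agreement alone can mask a variable whose prevalence makes chance agreement likely.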
NASA Astrophysics Data System (ADS)
Zhou, Hui
Implementing office and departmental target responsibility systems is an inevitable outcome of higher education reform, and the statistical processing of student information is an important part of the student performance review it entails. Based on an analysis of student evaluation, this paper designs a student information management database application using relational database management system software. To implement the student information management functions, the functional requirements, overall structure, data sheets and fields, data sheet associations, and software code are designed in detail.
NASA Astrophysics Data System (ADS)
Michel-Sendis, Franco; Martinez-González, Jesus; Gauld, Ian
2017-09-01
SFCOMPO-2.0 is a database of experimental isotopic concentrations measured by destructive radiochemical analysis of spent nuclear fuel (SNF) samples. The database includes the corresponding design descriptions of the fuel rods and assemblies, as well as relevant operating conditions and characteristics of the host reactors necessary for modelling and simulation. Aimed at establishing a thorough, reliable, and publicly available resource for code and data validation of safety-related applications, SFCOMPO-2.0 is developed and maintained by the OECD Nuclear Energy Agency (NEA). The SFCOMPO-2.0 database is a Java application downloadable from the NEA website.
Development of the ageing management database of PUSPATI TRIGA reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramli, Nurhayati, E-mail: nurhayati@nm.gov.my; Tom, Phongsakorn Prak; Husain, Nurfazila
Since its first criticality in 1982, the PUSPATI TRIGA Reactor (RTP) has been operated for more than 30 years. As RTP grows older, ageing has become a prominent issue. To address it, an Ageing Management (AgeM) database for managing related ageing matters was systematically developed. This paper presents the development of the AgeM database, taking into account all major RTP Systems, Structures and Components (SSCs) and the ageing mechanisms of these SSCs, through the system surveillance program.
1985-12-01
RELATIONAL TO NETWORK QUERY TRANSLATOR FOR A DISTRIBUTED DATABASE MANAGEMENT SYSTEM. Thesis presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Computer Systems. Kevin H. Mahoney, Captain, USAF. AFIT/GCS/ENG/85D-7.
Tautomerism in chemical information management systems
NASA Astrophysics Data System (ADS)
Warr, Wendy A.
2010-06-01
Tautomerism has an impact on many of the processes in chemical information management systems including novelty checking during registration into chemical structure databases; storage of structures; exact and substructure searching in chemical structure databases; and depiction of structures retrieved by a search. The approaches taken by 27 different software vendors and database producers are compared. It is hoped that this comparison will act as a discussion document that could ultimately improve databases and software for researchers in the future.
Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan
2010-01-01
To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. The system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases, considering the individual clinical context. It can therefore make an important contribution to efficient validation of outcome assessment in drug safety database studies.
Wang, L.; Infante, D.; Esselman, P.; Cooper, A.; Wu, D.; Taylor, W.; Beard, D.; Whelan, G.; Ostroff, A.
2011-01-01
Fisheries management programs, such as the National Fish Habitat Action Plan (NFHAP), urgently need a nationwide spatial framework and database for health assessment and policy development to protect and improve riverine systems. To meet this need, we developed a spatial framework and database using the National Hydrography Dataset Plus (1:100,000 scale; http://www.horizon-systems.com/nhdplus). This framework uses interconfluence river reaches and their local and network catchments as fundamental spatial river units, and a series of ecological and political spatial descriptors as hierarchy structures to allow users to extract or analyze information at spatial scales that they define. The database consists of variables describing channel characteristics, network position/connectivity, climate, elevation, gradient, and size. It contains a series of catchment natural and human-induced factors that are known to influence river characteristics. Our framework and database assemble all river reaches and their descriptors in one place for the first time for the conterminous United States, and provide users with the capability of adding data, conducting analyses, developing management scenarios and regulation, and tracking management progress at a variety of spatial scales. This database provides the essential data needed for achieving the objectives of NFHAP and other management programs. The downloadable beta version database is available at http://ec2-184-73-40-15.compute-1.amazonaws.com/nfhap/main/.
Common Database Interface for Heterogeneous Software Engineering Tools.
1987-12-01
Thesis presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Information Systems. Subject terms: database management systems; programming (computers); computer files; information transfer; interfaces. Contents include the System 690 configuration, database functions, software engineering environments, and the data manager.
NASA Astrophysics Data System (ADS)
Gebhardt, Steffen; Wehrmann, Thilo; Klinger, Verena; Schettler, Ingo; Huth, Juliane; Künzer, Claudia; Dech, Stefan
2010-10-01
The German-Vietnamese water-related information system for the Mekong Delta (WISDOM) project supports business processes in Integrated Water Resources Management in Vietnam. Multiple disciplines bring together earth and ground based observation themes, such as environmental monitoring, water management, demographics, economy, information technology, and infrastructural systems. This paper introduces the components of the web-based WISDOM system, including its data, logic, and presentation tiers. It focuses on the data models upon which the database management system is built, including techniques for tagging or linking metadata with the stored information. The model also uses ordered groupings of spatial, thematic, and temporal reference objects to semantically tag datasets and enable fast data retrieval, such as finding all data in a specific administrative unit belonging to a specific theme. A spatial database extension is employed by the PostgreSQL database; this object-relational database was chosen over a purely relational one so that spatial objects could be tagged to tabular data, improving the retrieval of census and observational data at regional, provincial, and local levels. Because the spatial database is less suited to processing raster data, a work-around was built into WISDOM to permit efficient management of both raster and vector data. The data model also incorporates styling aspects of the spatial datasets through styled layer descriptor (SLD) and web mapping service (WMS) layer specifications, allowing retrieval of rendered maps. Metadata elements of the spatial data are based on the ISO 19115 standard. XML-structured information for the SLD and metadata is stored in an XML database. The data models and the data management system are robust for managing the large quantity of spatial objects, sensor observations, census and document data.
The operational WISDOM information system prototype contains modules for data management, automatic data integration, and web services for data retrieval, analysis, and distribution. The graphical user interfaces facilitate metadata cataloguing, data warehousing, web sensor data analysis and thematic mapping.
Design and deployment of a large brain-image database for clinical and nonclinical research
NASA Astrophysics Data System (ADS)
Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.
2004-04-01
An efficient database is an essential component of organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development, and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical research projects. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading, and sharing, as well as database querying and management, with security and data anonymization concerns well taken care of. The database has a multi-tier client-server architecture comprising a relational database management system, a security layer, an application layer, and a user interface. An image source adapter has been developed to handle most of the popular image formats. The database has a user interface based on web browsers and is easy to use. We used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information, which can be used effectively in research from the clinicians' point of view. The database is suitable for validation of algorithms on large populations of cases, as medical images for processing can be identified and organized based on information in image metadata. Clinical research in various pathologies can thus be performed with greater efficiency, and large image repositories can be managed more effectively. A prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.
Schurr, K.M.; Cox, S.E.
1994-01-01
The Pesticide-Application Data-Base Management System was created as a demonstration project and was tested with data submitted to the Washington State Department of Agriculture by pesticide applicators from a small geographic area. These data were entered into the Department's relational data-base system and uploaded into the system's ARC/INFO files. Locations for pesticide applications are assigned within the Public Land Survey System grids, and ARC/INFO programs in the Pesticide-Application Data-Base Management System can subdivide each survey section into sixteen idealized quarter-quarter sections for display map grids. The system provides data retrieval and geographic information system plotting capabilities from a menu of seven basic retrieval options. Additionally, ARC/INFO coverages can be created from the retrieved data when required for particular applications. The Pesticide-Application Data-Base Management System, or the general principles used in the system, could be adapted to other applications or to other states.
NASA Astrophysics Data System (ADS)
Kutzleb, C. D.
1997-02-01
The high incidence of recidivism (repeat offenders) in the criminal population makes the use of the IAFIS III/FBI criminal database an important tool in law enforcement. The problems and solutions employed by IAFIS III/FBI criminal subject searches are discussed for the following topics: (1) subject search selectivity and reliability; (2) the difficulty and limitations of identifying subjects whose anonymity may be a prime objective; (3) database size, search workload, and search response time; (4) techniques and advantages of normalizing the variability in an individual's name and identifying features into identifiable and discrete categories; and (5) the use of database demographics to estimate the likelihood of a match between a search subject and database subjects.
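One classic way to normalize the variability in a name into a discrete, matchable category, as the abstract describes for subject searches, is phonetic coding. The sketch below uses Soundex as a stand-in; it is not the IAFIS algorithm, only an illustration of collapsing spelling variants into one code:

```python
def soundex(name: str) -> str:
    """Classic Soundex: collapse spelling variants of a surname into one code."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    out, prev = [name[0]], codes.get(name[0], "")
    for c in name[1:]:
        d = codes.get(c, "")
        if d and d != prev:
            out.append(d)
        if c not in "HW":   # H and W do not reset the previous code
            prev = d
    return ("".join(out) + "000")[:4]
```

Spelling variants such as "Robert" and "Rupert" both map to R163, so a database search on the code retrieves candidate matches despite name variability.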
Study of complete interconnect reliability for a GaAs MMIC power amplifier
NASA Astrophysics Data System (ADS)
Lin, Qian; Wu, Haifeng; Chen, Shan-ji; Jia, Guoqing; Jiang, Wei; Chen, Chao
2018-05-01
By combining finite element analysis (FEA) and artificial neural network (ANN) techniques, this article achieves a complete prediction of interconnect reliability for a monolithic microwave integrated circuit (MMIC) power amplifier (PA) under both direct current (DC) and alternating current (AC) operating conditions. As an example, an MMIC PA is modelled to study electromigration failure of its interconnects. This is the first study of interconnect reliability for an MMIC PA under DC and AC operation simultaneously. By training on data from the FEA, a high-accuracy ANN model for PA reliability is constructed. The reliability database obtained from the ANN model then gives important guidance for improving reliability design for ICs.
Considerations to improve functional annotations in biological databases.
Benítez-Páez, Alfonso
2009-12-01
Despite the great effort to design efficient systems allowing the electronic indexation of information concerning genes, proteins, structures, and interactions published daily in scientific journals, some problems are still observed in specific tasks such as functional annotation. The annotation of function is a critical issue for bioinformatic routines, for instance in functional genomics and the prediction of unknown protein function, which are highly dependent on the quality of existing annotations. Some information management systems have evolved to efficiently incorporate information from large-scale projects, but annotation of single records from the literature is often difficult and slow. In this short report, functional characterizations of a representative sample of the entire set of uncharacterized proteins from Escherichia coli K12 were compiled from Swiss-Prot, PubMed, and EcoCyc, demonstrating a functional annotation deficit in biological databases. Some issues are postulated as causes of the lack of annotation, and different solutions are evaluated and proposed to avoid them. The hope is that, as a consequence of these observations, there will be new impetus to improve the speed and quality of functional annotation and ultimately provide updated, reliable information to the scientific community.
Informatics in radiology: use of CouchDB for document-based storage of DICOM objects.
Rascovsky, Simón J; Delgado, Jorge A; Sanz, Alexander; Calvo, Víctor D; Castrillón, Gabriel
2012-01-01
Picture archiving and communication systems traditionally have depended on schema-based Structured Query Language (SQL) databases for imaging data management. To optimize database size and performance, many such systems store a reduced set of Digital Imaging and Communications in Medicine (DICOM) metadata, discarding informational content that might be needed in the future. As an alternative to traditional database systems, document-based key-value stores recently have gained popularity. These systems store documents containing key-value pairs that facilitate data searches without predefined schemas. Document-based key-value stores are especially suited to archive DICOM objects because DICOM metadata are highly heterogeneous collections of tag-value pairs conveying specific information about imaging modalities, acquisition protocols, and vendor-supported postprocessing options. The authors used an open-source document-based database management system (Apache CouchDB) to create and test two such databases; CouchDB was selected for its overall ease of use, capability for managing attachments, and reliance on HTTP and Representational State Transfer standards for accessing and retrieving data. A large database was created first in which the DICOM metadata from 5880 anonymized magnetic resonance imaging studies (1,949,753 images) were loaded by using a Ruby script. To provide the usual DICOM query functionality, several predefined "views" (standard queries) were created by using JavaScript. For performance comparison, the same queries were executed in both the CouchDB database and a SQL-based DICOM archive. The capabilities of CouchDB for attachment management and database replication were separately assessed in tests of a similar, smaller database. 
Results showed that CouchDB allowed efficient storage and interrogation of all DICOM objects; with the use of information retrieval algorithms such as map-reduce, all the DICOM metadata stored in the large database were searchable with only a minimal increase in retrieval time over that with the traditional database management system. Results also indicated possible uses for document-based databases in data mining applications such as dose monitoring, quality assurance, and protocol optimization. RSNA, 2012
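CouchDB views are map and reduce functions written in JavaScript; the Python below only simulates that map/group/reduce pipeline over a few hypothetical DICOM tag-value documents (the tags and values are invented) to show how heterogeneous metadata becomes searchable without a predefined schema:

```python
from collections import defaultdict

# Hypothetical DICOM metadata documents as tag-value dicts; note the
# heterogeneity: not every document carries the same tags.
docs = [
    {"Modality": "MR", "StudyDescription": "Brain", "SliceThickness": 5.0},
    {"Modality": "MR", "StudyDescription": "Spine"},
    {"Modality": "CT", "StudyDescription": "Brain", "KVP": 120},
]

def map_fn(doc):
    # Like a CouchDB map function: emit (key, value) pairs per document.
    if "Modality" in doc:
        yield doc["Modality"], 1

def reduce_fn(values):
    # Like a CouchDB reduce function: aggregate the emitted values per key.
    return sum(values)

grouped = defaultdict(list)
for doc in docs:
    for key, value in map_fn(doc):
        grouped[key].append(value)
view = {k: reduce_fn(v) for k, v in grouped.items()}
print(view)  # {'MR': 2, 'CT': 1}
```

In CouchDB the view result is precomputed and incrementally maintained, which is why querying it adds little retrieval time.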
Kingfisher: a system for remote sensing image database management
NASA Astrophysics Data System (ADS)
Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.
2003-04-01
At present, retrieval methods in remote sensing image databases are mainly based on spatial-temporal information. The increasing number of images collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application; many systems exist for photographic images, for example QBIC and IKONA, but they are not able to extract and describe remote sensing image content properly. The target database is an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without loss independently of image resolution. The latter property allows the DBMS (Database Management System) to process a low amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of image dimensions (width and height). Local and global content descriptors are compared with the query image during the retrieval phase, and the results seem very encouraging.
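The similarity search described above amounts to ranking archived images by the distance between their content descriptors and those of the query. A minimal sketch, assuming made-up texture descriptor vectors (e.g., contrast, entropy, homogeneity) rather than the paper's actual features:

```python
import math

# Hypothetical texture descriptors for archived quick-look images;
# the scene names and numbers are invented for illustration.
archive = {
    "scene_a": [0.82, 3.1, 0.40],
    "scene_b": [0.15, 1.2, 0.91],
    "scene_c": [0.78, 2.9, 0.45],
}

def distance(u, v):
    # Euclidean distance in descriptor space.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def retrieve(query, k=2):
    """Rank archive images by descriptor similarity to the query image."""
    return sorted(archive, key=lambda name: distance(archive[name], query))[:k]

print(retrieve([0.80, 3.0, 0.42]))  # ['scene_a', 'scene_c']
```

Because the descriptors are resolution-independent, the same ranking can be computed from quick-look images instead of full-resolution scenes.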
Ingersoll, Christopher G.; Haverland, Pamela S.; Brunson, Eric L.; Canfield, Timothy J.; Dwyer, F. James; Henke, Chris; Kemble, Nile E.; Mount, David R.; Fox, Richard G.
1996-01-01
Procedures are described for calculating and evaluating sediment effect concentrations (SECs) using laboratory data on the toxicity of contaminants associated with field-collected sediment to the amphipod Hyalella azteca and the midge Chironomus riparius. SECs are defined as the concentrations of individual contaminants in sediment below which toxicity is rarely observed and above which toxicity is frequently observed. The objective of the present study was to develop SECs to classify toxicity data for Great Lake sediment samples tested with Hyalella azteca and Chironomus riparius. This SEC database included samples from additional sites across the United States in order to make the database as robust as possible. Three types of SECs were calculated from these data: (1) Effect Range Low (ERL) and Effect Range Median (ERM), (2) Threshold Effect Level (TEL) and Probable Effect Level (PEL), and (3) No Effect Concentration (NEC). We were able to calculate SECs primarily for total metals, simultaneously extracted metals, polychlorinated biphenyls (PCBs), and polycyclic aromatic hydrocarbons (PAHs). The ranges of concentrations in sediment were too narrow in our database to adequately evaluate SECs for butyltins, methyl mercury, polychlorinated dioxins and furans, or chlorinated pesticides. About 60 to 80% of the sediment samples in the database are correctly classified as toxic or not toxic depending on type of SEC evaluated. ERMs and ERLs are generally as reliable as paired PELs and TELs at classifying both toxic and non-toxic samples in our database. Reliability of the SECs in terms of correctly classifying sediment samples is similar between ERMs and NECs; however, ERMs minimize Type I error (false positives) relative to ERLs and minimize Type II error (false negatives) relative to NECs. Correct classification of samples can be improved by using only the most reliable individual SECs for chemicals (i.e., those with a higher percentage of correct classification). 
SECs calculated using sediment concentrations normalized to total organic carbon (TOC) concentrations did not improve the reliability compared to SECs calculated using dry-weight concentrations. The range of TOC concentrations in our database was relatively narrow compared to the ranges of contaminant concentrations. Therefore, normalizing dry-weight concentrations to a relatively narrow range of TOC concentrations had little influence on relative concentrations of contaminants among samples. When SECs are used to conduct a preliminary screening to predict the potential for toxicity in the absence of actual toxicity testing, a low number of SEC exceedances should be used to minimize the potential for false negatives; however, the risk of accepting higher false positives is increased.
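The ERL/ERM style of sediment effect concentration can be sketched numerically, assuming the conventional definitions (ERL and ERM as the 10th and 50th percentiles of contaminant concentrations in samples that showed toxicity); the concentrations below are invented, not from the study's database:

```python
# Contaminant concentrations (mg/kg dry weight) in hypothetical toxic samples.
toxic_conc = [12.0, 18.0, 25.0, 40.0, 55.0, 90.0, 130.0, 200.0]

def percentile(sorted_vals, p):
    # Nearest-rank percentile, adequate for a sketch.
    idx = max(0, min(len(sorted_vals) - 1, round(p / 100 * len(sorted_vals)) - 1))
    return sorted_vals[idx]

vals = sorted(toxic_conc)
erl = percentile(vals, 10)   # below ERL: effects rarely observed
erm = percentile(vals, 50)   # above ERM: effects frequently observed

def classify(conc):
    if conc < erl:
        return "unlikely toxic"
    if conc > erm:
        return "likely toxic"
    return "uncertain"

print(erl, erm, classify(10.0), classify(150.0))
```

The gap between ERL and ERM is the "uncertain" zone; widening or narrowing it trades Type I error (false positives) against Type II error (false negatives), as the abstract discusses.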
The ID Database: Managing the Instructional Development Process
ERIC Educational Resources Information Center
Piña, Anthony A.; Sanford, Barry K.
2017-01-01
Management is evolving as a foundational domain to the field of instructional design and technology. However, there are few tools dedicated to the management of instructional design and development projects and activities. In this article, we describe the development, features and implementation of an instructional design database--built from a…
Expansion of the MANAGE database with forest and drainage studies
USDA-ARS?s Scientific Manuscript database
The “Measured Annual Nutrient loads from AGricultural Environments” (MANAGE) database was published in 2006 to expand an early 1980’s compilation of nutrient export (load) data from agricultural land uses at the field or farm spatial scale. Then in 2008, MANAGE was updated with 15 additional studie...
76 FR 12617 - Airworthiness Directives; The Boeing Company Model 777-200 and -300 Series Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-08
... installing new operational software for the electrical load management system and configuration database... the electrical load management system operational software and configuration database software, in... Management, P.O. Box 3707, MC 2H-65, Seattle, Washington 98124-2207; telephone 206- 544-5000, extension 1...
Mobile agent application and integration in electronic anamnesis system.
Liu, Chia-Hui; Chung, Yu-Fang; Chen, Tzer-Shyong; Wang, Sheng-De
2012-06-01
Electronic anamnesis transforms ordinary paper trails into digitally formatted health records, which include the patient's general information, health status, and follow-ups on chronic diseases. Its main purpose is to let records be stored for a longer period of time and shared easily across departments and hospitals. This means hospital management could spend fewer resources on maintaining an ever-growing database and reduce redundancy, so less money would be spent on managing health records. In the foreseeable future, building a comprehensive and integrated medical information system is a must, because it is critical to hospital resource integration and quality improvement. If mobile agent technology is adopted in the electronic anamnesis system, it would help hospitals make medical practice more efficient and convenient. Nonetheless, most hospitals today still use paper-based health records to manage medical information. Institutions continue these traditional practices because no well-trusted, reliable electronic anamnesis system exists that is accepted by both institutions and patients. The threat of privacy invasion is one of the biggest concerns whenever electronic anamnesis is brought up, and these security threats hold back adoption of such systems, making it difficult to improve medical service quality substantially. We therefore propose a scheme to remove these security threats and make electronic anamnesis more appealing to use. Our approach integrates mobile agent technology with the backbone of electronic anamnesis to construct a hierarchical access control system that retrieves information according to permission classes. The system creates a classification of permissions among users inside the medical institution.
Under this framework, the permission control center distributes an access key to each user, who may use that key only to access the corresponding information. To verify the reliability of the proposed framework, we also conducted a security analysis listing the possible security threats that could harm the system, showing the system to be reliable and safe. If the system is adopted, doctors would be able to access information quickly while performing medical examinations, and the efficiency and quality of healthcare service would be greatly improved.
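One common way to realize hierarchical access control with distributed keys is one-way key derivation: a holder of a higher class key can derive every subordinate class key, but not the reverse. The class names and the derivation rule below are assumptions for illustration, not the paper's scheme:

```python
import hashlib

def derive(parent_key: bytes, child_class: str) -> bytes:
    # One-way derivation: easy to go down the hierarchy, infeasible to go up.
    return hashlib.sha256(parent_key + child_class.encode()).digest()

# The permission control center holds the root secret.
root = hashlib.sha256(b"permission-control-center-secret").digest()
doctor_key = derive(root, "doctor")
nurse_key = derive(doctor_key, "nurse")   # nurses sit below doctors here

# A doctor can recompute the nurse key from their own key...
assert derive(doctor_key, "nurse") == nurse_key
# ...but a nurse cannot invert SHA-256 to recover the doctor key.
```

Each user thus stores a single key yet gains access to every permission class beneath their own.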
Solving Relational Database Problems with ORDBMS in an Advanced Database Course
ERIC Educational Resources Information Center
Wang, Ming
2011-01-01
This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…
Migration from relational to NoSQL database
NASA Astrophysics Data System (ADS)
Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar
2017-11-01
Data generated by various real time applications, social networking sites, and sensor devices is of very huge amount and unstructured, which makes it difficult for relational database management systems to handle. Data is a very precious component of any application and needs to be analysed after arranging it in some structure. Relational databases are only able to deal with structured data, so there is a need for NoSQL database management systems, which can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed previously that provide mechanisms for migrating data stored in SQL warehouses, and middle layer solutions that allow data to be stored in NoSQL databases to handle data which is not structured. This paper provides a literature review of some of the recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers proposed mechanisms for the co-existence of NoSQL and relational databases together. This paper provides a summary of mechanisms which can be used for mapping data stored in relational databases to NoSQL databases. Various techniques for data transformation and middle layer solutions are summarised in the paper.
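One mapping that recurs in such migration frameworks is denormalisation: embedding child rows of a 1:N join inside their parent row as a nested document. A minimal sketch, with invented table and field names:

```python
# Relational source: a customer table and an orders table keyed by customer_id.
customers = [(1, "Ada"), (2, "Lin")]
orders = [(10, 1, "book"), (11, 1, "pen"), (12, 2, "ink")]

def to_documents(customers, orders):
    """Map relational rows to NoSQL-style documents, embedding the 1:N join."""
    docs = {cid: {"_id": cid, "name": name, "orders": []}
            for cid, name in customers}
    for oid, cid, item in orders:
        docs[cid]["orders"].append({"order_id": oid, "item": item})
    return list(docs.values())

docs = to_documents(customers, orders)
print(docs[0]["name"], len(docs[0]["orders"]))  # Ada 2
```

The document store then answers "a customer and all their orders" with a single key lookup instead of a join, which is the usual motivation for this mapping.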
Morris, Marie C; Gallagher, Tom K; Ridgway, Paul F
2012-01-01
The objective was to systematically review the literature to identify and grade tools used for the end-point assessment of procedural skills (e.g., phlebotomy, IV cannulation, suturing) competence in medical students prior to certification. The authors electronically searched eight bibliographic databases - ERIC, Medline, CINAHL, EMBASE, Psychinfo, PsychLIT, EBM Reviews and the Cochrane databases. Two reviewers independently reviewed the literature to identify procedural assessment tools used specifically for assessing medical students within the PRISMA framework, the inclusion/exclusion criteria, and the search period. Papers on OSATS and DOPS were excluded as they focused on post-registration assessment and clinical rather than simulated competence. Of 659 abstracted articles, 56 identified procedural assessment tools, but only 11 specifically assessed medical students. The final 11 studies consisted of 1 randomised controlled trial, 4 comparative, and 6 descriptive studies, yielding 12 heterogeneous procedural assessment tools for analysis. Seven tools addressed four discrete pre-certification skills: basic suture (3), airway management (2), nasogastric tube insertion (1), and intravenous cannulation (1). One tool used a generic assessment of procedural skills. Two tools focused on postgraduate laparoscopic skills and one on osteopathic students, and thus were not included in this review. The levels of evidence are low with regard to reliability (κ = 0.65-0.71), and only minimum validity (face and content) is achieved. In conclusion, there are no tools designed specifically to assess competence in procedural skills in a final certification examination. There is a need to develop standardised tools with proven reliability and validity for assessment of procedural skills competence at the end of medical training. Medicine graduates must have comparable levels of procedural skills acquisition on entering the clinical workforce, irrespective of the country of training.
... Disasters and Public Health Emergencies The NLM Disaster Information Management Research Center has tools, guides, and databases to ...
Construction of In-house Databases in a Corporation
NASA Astrophysics Data System (ADS)
Dezaki, Kyoko; Saeki, Makoto
Rapid progress in information technology has increased the need to strengthen documentation activities in industry. In response, Tokin Corporation has been engaged in database construction for patent information, technical reports, and other documents accumulated inside the company. Two systems have resulted: TOPICS, an in-house patent information management system, and TOMATIS, a management and technical information system built with personal computers and general-purpose relational database software. These systems aim at compiling databases of internally and externally generated patent and technological management information at low labor effort and low cost, and at providing comprehensive information company-wide. This paper introduces the outline of these systems and how they are actually used.
Ahmadi, Farshid Farnood; Ebadi, Hamid
2009-01-01
3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economical data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning storage, structuring, and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric systems and SDBMSs can save time and cost in producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach; this management approach is one of the main problems in GISs for using map products of photogrammetric workstations. By means of these integrated systems, providing structured spatial data based on OGC (Open GIS Consortium) standards and topological relations between different feature classes is also possible at the time of the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated, and different levels of integration are described. Finally, the design, implementation, and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) are presented.
Personal Database Management System I TRIAS
NASA Astrophysics Data System (ADS)
Yamamoto, Yoneo; Kashihara, Akihiro; Kawagishi, Keisuke
The current paper presents TRIAS (TRIple Associative System), a database management system for personal use. To implement TRIAS, we have developed an associative database whose format is (e, a, v): e for entity, a for attribute, v for value. ML-TREE, a variant of the B+-tree (a multiway balanced tree), is used to construct the (e, a, v) store. The paper focuses mainly on the usage of the associative database, demonstrating how to use its basic commands, primary functions, and applications.
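The (e, a, v) format above can be sketched as a small associative store with lookups by entity and by attribute/value pair. The class and method names below are illustrative inventions, not taken from TRIAS itself, and a hash-indexed dictionary stands in for the ML-TREE structure:

```python
# Minimal sketch of an (entity, attribute, value) associative store in the
# spirit of TRIAS; names are illustrative, and dict indexes stand in for ML-TREE.
from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.triples = set()                   # all (e, a, v) tuples
        self.by_entity = defaultdict(set)      # e -> {(a, v)}
        self.by_attr_value = defaultdict(set)  # (a, v) -> {e}

    def add(self, e, a, v):
        self.triples.add((e, a, v))
        self.by_entity[e].add((a, v))
        self.by_attr_value[(a, v)].add(e)

    def attributes_of(self, e):
        """All (attribute, value) pairs recorded for an entity."""
        return self.by_entity[e]

    def entities_with(self, a, v):
        """All entities holding a given attribute/value pair."""
        return self.by_attr_value[(a, v)]

store = TripleStore()
store.add("doc1", "type", "patent")
store.add("doc2", "type", "report")
store.add("doc1", "year", "1990")
print(store.entities_with("type", "patent"))  # {'doc1'}
```

The two inverted indexes make both query directions (entity to attributes, attribute/value to entities) constant-time on average, which is the essence of an associative database.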
The ADAMS interactive interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietscha, E.R.
1990-12-17
The ADAMS (Advanced DAta Management System) project is exploring next generation database technology. Database management does not follow the usual programming paradigm. Instead, the database dictionary provides an additional name space environment that should be interactively created and tested before writing application code. This document describes the implementation and operation of the ADAMS Interpreter, an interactive interface to the ADAMS data dictionary and runtime system. The Interpreter executes individual statements of the ADAMS Interface Language, providing a fast, interactive mechanism to define and access persistent databases. 5 refs.
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
An object-relational database management system is an integrated, hybrid approach that combines the best practices of the relational model, using SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research-and-development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models such as XML and HTML.
An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)
2002-01-01
An object-relational database management system is an integrated, hybrid approach that combines the best practices of the relational model, using SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research-and-development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models such as XML and HTML.
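The idea of keyword search spanning both "context" and "content" can be illustrated by decomposing a document into (path, text) rows, so one query matches either the element path or the element text. The abstract does not describe NETMARK's actual schema, so everything below is an assumed, simplified stand-in:

```python
# Hedged sketch of the node-decomposition idea behind a schema-less store
# like NETMARK: each XML node becomes a (path, text) row, so a keyword can
# match the tag path ("context") or the text ("content"). Illustrative only.
import xml.etree.ElementTree as ET

def decompose(xml_text):
    """Flatten an XML document into (path, text) rows."""
    rows = []
    def walk(node, path):
        p = f"{path}/{node.tag}"
        rows.append((p, (node.text or "").strip()))
        for child in node:
            walk(child, p)
    walk(ET.fromstring(xml_text), "")
    return rows

def keyword_search(rows, term):
    """Return rows where the term appears in the path or the text."""
    t = term.lower()
    return [r for r in rows if t in r[0].lower() or t in r[1].lower()]

doc = "<report><title>Mars entry</title><body>thermal data</body></report>"
rows = decompose(doc)
print(keyword_search(rows, "title"))    # matches on context (the path)
print(keyword_search(rows, "thermal"))  # matches on content (the text)
```

Because every node is a row, arbitrary hierarchies (XML, HTML) fit without any per-document schema, which is the "schema-less" property the abstracts emphasize.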
Rosset, Saharon; Aharoni, Ehud; Neuvirth, Hani
2014-07-01
Issues of publication bias, lack of replicability, and false discovery have long plagued the genetics community. Proper utilization of public and shared data resources presents an opportunity to ameliorate these problems. We present an approach to public database management that we term Quality Preserving Database (QPD). It enables perpetual use of the database for testing statistical hypotheses while controlling false discovery and avoiding publication bias on the one hand, and maintaining testing power on the other hand. We demonstrate it on a use case of a replication server for GWAS findings, underlining its practical utility. We argue that a shift to using QPD in managing current and future biological databases will significantly enhance the community's ability to make efficient and statistically sound use of the available data resources. © 2014 WILEY PERIODICALS, INC.
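The core QPD idea, perpetual testing against a shared database without inflating error, can be caricatured as a server that grants each incoming hypothesis test a slice of a global error budget. This simple alpha-spending rule is a stand-in for illustration; the published QPD procedure differs in detail:

```python
# Illustrative sketch of budgeted testing against a shared database: the
# server spends a global error budget across all submitted hypotheses, so
# unlimited reuse of the data cannot inflate the family-wise error rate.
# This is a simplified stand-in for the QPD procedure, not the paper's method.

class TestingServer:
    def __init__(self, total_alpha=0.05):
        self.remaining = total_alpha

    def test(self, p_value, requested_alpha):
        """Spend part of the budget on one hypothesis; reject if p < slice."""
        if requested_alpha > self.remaining:
            raise RuntimeError("error budget exhausted")
        self.remaining -= requested_alpha
        return p_value < requested_alpha

server = TestingServer(total_alpha=0.05)
print(server.test(0.001, 0.01))    # True: rejected within its slice
print(server.test(0.02, 0.01))     # False: not significant at its slice
print(round(server.remaining, 3))  # 0.03
```

The trade-off the abstract mentions, controlling false discovery while "maintaining testing power", corresponds to choosing how generously each request's slice is allocated.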
A new framing approach in guideline development to manage different sources of knowledge.
Lukersmith, Sue; Hopman, Katherine; Vine, Kristina; Krahe, Lee; McColl, Alexander
2017-02-01
Contemporary guideline methodology struggles to consider context and information from sources of knowledge other than quantitative research. Return-to-work programmes involve multiple components and stakeholders. If a guideline is to be relevant and practical for a complex intervention such as return to work, it is essential to use broad sources of knowledge. This paper reports on a new method in guideline development to manage different sources of knowledge. The method used framing for the return-to-work guidance within the Clinical Practice Guidelines for the Management of Rotator Cuff Syndrome in the Workplace. The development involved a multi-disciplinary working party of experts, including consumers. The researchers considered a broad range of research, expert (practice and experience) knowledge, and the individual's and workplace contexts, and used framing with the International Classification of Functioning, Disability and Health (ICF). Following a systematic database search on four clinical questions, there were seven stages of knowledge management to extract, unpack, map, and pack information into the ICF domains framework. Companion graded recommendations were developed. The results include practical examples, user and consumer guides, flow charts, and six graded or consensus recommendations on best practice for return-to-work intervention. Our findings suggest that framing in guideline methodology, using internationally accepted frames such as the ICF, provides a reliable and transparent framework to manage different sources of knowledge. Future research might examine other examples and methods for managing complexity and using different sources of knowledge in guideline development. © 2016 John Wiley & Sons, Ltd.
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1992-01-01
One of the biggest problems facing NASA today is providing scientists with efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing astrophysics databases, which are physically distributed all over the world. Once the search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for image data sets, which allow much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
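The pointer-caching idea, cross-referencing databases without moving data until a result is actually needed, can be sketched as follows. All class and record names here are invented for illustration; they are not the VIEWCACHE interfaces:

```python
# Illustrative sketch of the VIEWCACHE idea: a cross-database search collects
# and caches (site, record-id) pointers instead of record bodies; data moves
# between sites only when a cached pointer is finally dereferenced.
class Site:
    def __init__(self, name, records):
        self.name, self.records = name, records

    def search(self, predicate):
        """Return pointers (site, key), never record bodies."""
        return [(self.name, k) for k, v in self.records.items() if predicate(v)]

class ViewCache:
    def __init__(self, sites):
        self.sites = {s.name: s for s in sites}
        self.cache = []

    def cross_search(self, predicate):
        self.cache = [p for s in self.sites.values() for p in s.search(predicate)]
        return self.cache

    def fetch(self, pointer):
        """Dereference one cached pointer; only now does data move."""
        site, key = pointer
        return self.sites[site].records[key]

a = Site("archiveA", {1: {"mag": 12.1}, 2: {"mag": 8.3}})
b = Site("archiveB", {7: {"mag": 11.9}})
vc = ViewCache([a, b])
ptrs = vc.cross_search(lambda r: r["mag"] > 11)
print(ptrs)               # [('archiveA', 1), ('archiveB', 7)]
print(vc.fetch(ptrs[0]))  # {'mag': 12.1}
```

Deferring the fetch is what makes the scheme "incremental": browsing and cross-referencing touch only cheap pointers, and the expensive data transfer happens once per record actually examined.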
Gao, Xiang; Lin, Huaiying; Revanna, Kashi; Dong, Qunfeng
2017-05-10
Species-level classification for 16S rRNA gene sequences remains a serious challenge for microbiome researchers, because existing taxonomic classification tools for 16S rRNA gene sequences either do not provide species-level classification or produce unreliable results. The unreliable results are due to limitations in existing methods, which either lack solid probabilistic criteria for evaluating the confidence of their taxonomic assignments or use nucleotide k-mer frequency as a proxy for sequence similarity measurement. We have developed a method that shows significantly improved species-level classification results over existing methods. Our method calculates true sequence similarity between query sequences and database hits using pairwise sequence alignment. Taxonomic classifications are assigned from the species to the phylum level based on the lowest common ancestors of multiple database hits for each query sequence, and classification reliabilities are further evaluated by bootstrap confidence scores. The novelty of our method is that the contribution of each database hit to the taxonomic assignment of the query sequence is weighted by a Bayesian posterior probability based upon the degree of sequence similarity of the database hit to the query sequence. Our method does not need any training datasets specific to different taxonomic groups; only a reference database is required for aligning to the query sequences, making the method easily applicable to different regions of the 16S rRNA gene or other phylogenetic marker genes. Reliable species-level classification for 16S rRNA or other phylogenetic marker genes is critical for microbiome research. Our software shows significantly higher classification accuracy than existing tools, and we provide probabilistic confidence scores to evaluate the reliability of our taxonomic assignments based on multiple database matches to query sequences.
Despite its higher computational costs, our method is still suitable for analyzing large-scale microbiome datasets for practical purposes. Furthermore, our method can be applied for taxonomic classification of any phylogenetic marker gene sequences. Our software, called BLCA, is freely available at https://github.com/qunfengdong/BLCA .
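The similarity-weighted lowest-common-ancestor idea can be sketched as a weighted vote over the lineages of a query's database hits. The published method weights hits by a Bayesian posterior probability; the simple similarity-proportional weights below are an assumed approximation, and the hit data are toy values:

```python
# Hedged sketch of weighted taxonomic assignment in the spirit of BLCA:
# each database hit votes for its lineage with a weight derived from its
# alignment similarity to the query. Similarity-proportional weights stand
# in here for the paper's Bayesian posterior; all data are illustrative.
from collections import defaultdict

def classify(hits, ranks=("phylum", "genus", "species")):
    """hits: list of (similarity, {rank: taxon}) tuples for one query."""
    total = sum(s for s, _ in hits)
    assignment = {}
    for rank in ranks:
        votes = defaultdict(float)
        for sim, lineage in hits:
            votes[lineage[rank]] += sim / total
        taxon, conf = max(votes.items(), key=lambda kv: kv[1])
        assignment[rank] = (taxon, round(conf, 2))
    return assignment

hits = [
    (0.99, {"phylum": "Firmicutes", "genus": "Bacillus", "species": "B. subtilis"}),
    (0.98, {"phylum": "Firmicutes", "genus": "Bacillus", "species": "B. subtilis"}),
    (0.93, {"phylum": "Firmicutes", "genus": "Bacillus", "species": "B. cereus"}),
]
result = classify(hits)
print(result["phylum"])   # ('Firmicutes', 1.0)
print(result["species"])  # ('B. subtilis', 0.68)
```

Note how the assignment is confident at the phylum level but less so at the species level, where the hits disagree; the real method additionally bootstraps over hits to turn this into a formal confidence score.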
A case study for a digital seabed database: Bohai Sea engineering geology database
NASA Astrophysics Data System (ADS)
Tianyun, Su; Shikui, Zhai; Baohua, Liu; Ruicai, Liang; Yanpeng, Zheng; Yong, Wang
2006-07-01
This paper discusses the design of an ORACLE-based Bohai Sea engineering geology database, covering requirements analysis, conceptual structure analysis, logical structure analysis, physical structure analysis, and security design. In the study, we used the object-oriented Unified Modeling Language (UML) to model the conceptual structure of the database, and used the powerful data management functions provided by the object-relational database ORACLE to organize and manage the storage space and improve its security. By these means, the database can provide rapid and highly effective performance in data storage, maintenance, and query, satisfying the application requirements of the Bohai Sea Oilfield Paradigm Area Information System.
Geoscience research databases for coastal Alabama ecosystem management
Hummell, Richard L.
1995-01-01
Effective management of complex coastal ecosystems necessitates access to scientific knowledge that can be acquired through a multidisciplinary approach involving Federal and State scientists who take advantage of agency expertise and resources for the benefit of all participants working toward a set of common research and management goals. Cooperative geoscientific investigations have led to databases of fundamental scientific knowledge that can be utilized to manage coastal Alabama's natural resources and future development. These databases have been used to assess the occurrence and economic potential of hard mineral resources in the Alabama EEZ, and to support oil spill contingency planning and environmental analysis for coastal Alabama.
Using Virtual Servers to Teach the Implementation of Enterprise-Level DBMSs: A Teaching Note
ERIC Educational Resources Information Center
Wagner, William P.; Pant, Vik
2010-01-01
One of the areas where demand has remained strong for MIS students is in the area of database management. Since the early days, this topic has been a mainstay in the MIS curriculum. Students of database management today typically learn about relational databases, SQL, normalization, and how to design and implement various kinds of database…
Towards G2G: Systems of Technology Database Systems
NASA Technical Reports Server (NTRS)
Maluf, David A.; Bell, David
2005-01-01
We present an approach and methodology for developing Government-to-Government (G2G) Systems of Technology Database Systems. G2G will deliver technologies for distributed and remote integration of technology data, for internal use in analysis and planning as well as for external communications. G2G enables NASA managers, engineers, operational teams, and information systems to "compose" technology roadmaps and plans by selecting, combining, extending, specializing, and modifying components of technology database systems. G2G will interoperate information and knowledge distributed across the organizational entities involved, which is ideal for NASA's future Exploration Enterprise. Key contributions of the G2G system will include an integrated approach that sustains effective management of technology investments while allowing the various technology database systems to be independently managed. The integration technology will comply with emerging open standards, so applications can be customized for local needs while enabling an integrated management-of-technology approach that serves the global needs of NASA. The G2G capabilities will build on NASA's breakthroughs in database "composition" and integration technology, will use and advance emerging open standards, and will use commercial information technologies to enable effective Systems of Technology Database systems.
Matsuda, Fumio; Shinbo, Yoko; Oikawa, Akira; Hirai, Masami Yokota; Fiehn, Oliver; Kanaya, Shigehiko; Saito, Kazuki
2009-01-01
Background: In metabolomics research using mass spectrometry (MS), systematic searching of high-resolution mass data against compound databases is often the first step of metabolite annotation, to determine elemental compositions possessing similar theoretical mass numbers. However, incorrect hits derived from errors in mass analyses will be included in the results of elemental composition searches. To assess the quality of peak annotation information, a novel methodology for evaluating false discovery rates (FDR) is presented in this study. Based on the FDR analyses, several aspects of an elemental composition search, including setting a threshold, estimating FDR, and the types of elemental composition databases most reliable for searching, are discussed. Methodology/Principal Findings: The FDR can be determined from one measured value (i.e., the hit rate for search queries) and four parameters determined by Monte Carlo simulation. The results indicate that relatively high FDR values (30–50%) were obtained when searching time-of-flight (TOF)/MS data using the KNApSAcK and KEGG databases. In addition, searches against large all-in-one databases (e.g., PubChem) always produced unacceptable results (FDR >70%). The estimated FDRs suggest that the quality of search results can be improved not only by performing more accurate mass analysis but also by modifying the properties of the compound database. A theoretical analysis indicates that the FDR could be improved by using a compound database with fewer but more complete entries. Conclusions/Significance: High-accuracy mass analysis, such as Fourier transform (FT)-MS, is needed for reliable annotation (FDR <10%). In addition, a small, customized compound database is preferable for high-quality annotation of metabolome data. PMID:19847304
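The Monte Carlo flavor of such an FDR estimate can be illustrated with a decoy-style simulation: random "query" masses establish how often the database is hit by chance, and that chance rate, compared with the observed hit rate of real queries, bounds the fraction of false matches. The database, tolerance, and hit rate below are toy values, not figures from the paper:

```python
# Illustrative Monte Carlo estimate of the FDR of an elemental-composition
# search: random decoy masses measure the chance hit rate against the
# database. All numbers here are invented toy values for the sketch.
import random

database = [100.0521, 128.0473, 146.0579, 180.0634, 222.0892]  # toy exact masses
TOL = 0.005  # mass tolerance in Da

def hits(mass):
    return any(abs(mass - m) <= TOL for m in database)

def chance_hit_rate(n=100000, lo=90.0, hi=230.0, seed=1):
    rng = random.Random(seed)
    return sum(hits(rng.uniform(lo, hi)) for _ in range(n)) / n

# If real queries hit the database at rate q while random masses hit at
# rate r, then roughly a fraction r/q of all reported matches are false.
r = chance_hit_rate()
q = 0.40  # assumed observed hit rate of real measurements
print(f"chance hit rate ~ {r:.4f}, estimated FDR ~ {r / q:.4f}")
```

This also makes the paper's two remedies concrete: a tighter tolerance (more accurate mass analysis) shrinks r, and a smaller, more complete database shrinks r without shrinking q.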
Ali, Zulfiqar; Alsulaiman, Mansour; Muhammad, Ghulam; Elamvazuthi, Irraivan; Al-Nasheri, Ahmed; Mesallam, Tamer A; Farahat, Mohamed; Malki, Khalid H
2017-05-01
A large population around the world has voice complications. Various approaches for subjective and objective evaluation have been suggested in the literature. The subjective approach strongly depends on the experience and area of expertise of the clinician, and human error cannot be neglected. The objective or automatic approach, on the other hand, is noninvasive. Automatic systems can provide complementary information that may be helpful to a clinician in the early screening of a voice disorder. At the same time, automatic systems can be deployed in remote areas, where a general practitioner can use them and refer the patient to a specialist to avoid complications that may be life-threatening. Many automatic systems for disorder detection have been developed by applying different types of conventional speech features, such as linear prediction coefficients, linear prediction cepstral coefficients, and Mel-frequency cepstral coefficients (MFCCs). This study aims to ascertain whether conventional speech features detect voice pathology reliably, and whether they can be correlated with voice quality. To investigate this, an automatic detection system based on MFCCs was developed, and three different voice disorder databases were used. The experimental results suggest that the accuracy of the MFCC-based system varies from database to database: the detection rate ranges from 72% to 95% intra-database and from 47% to 82% inter-database. The results indicate that conventional speech features are not correlated with voice quality, and hence are not reliable for pathology detection. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Adopting a corporate perspective on databases. Improving support for research and decision making.
Meistrell, M; Schlehuber, C
1996-03-01
The Veterans Health Administration (VHA) is at the forefront of designing and managing health care information systems that accommodate the needs of clinicians, researchers, and administrators at all levels. Rather than using one single-site, centralized corporate database, VHA has constructed several large databases with different configurations to meet the needs of users with different perspectives. The largest VHA database is the Decentralized Hospital Computer Program (DHCP), a multisite, distributed data system that uses decoupled hospital databases. The centralization of DHCP policy has promoted data coherence, whereas the decentralization of DHCP management has permitted system development to be done with maximum relevance to users' local practices. A more recently developed VHA data system, the Event Driven Reporting system (EDR), uses multiple, highly coupled databases to provide workload data at the facility, regional, and national levels. The EDR automatically posts a subset of DHCP data to local and national VHA management. The development of the EDR illustrates how adopting a corporate perspective can offer significant database improvements at reasonable cost and with modest impact on the legacy system.
Technical Aspects of Interfacing MUMPS to an External SQL Relational Database Management System
Kuzmak, Peter M.; Walters, Richard F.; Penrod, Gail
1988-01-01
This paper describes an interface connecting InterSystems MUMPS (M/VX) to an external relational DBMS, the SYBASE Database Management System. The interface enables MUMPS to operate in a relational environment and gives the MUMPS language full access to a complete set of SQL commands. MUMPS generates SQL statements as ASCII text and sends them to the RDBMS. The RDBMS executes the statements and returns ASCII results to MUMPS. The interface suggests that the language features of MUMPS make it an attractive tool for use in the relational database environment. The approach described in this paper separates MUMPS from the relational database. Positioning the relational database outside of MUMPS promotes data sharing and permits a number of different options to be used for working with the data. Other languages like C, FORTRAN, and COBOL can access the RDBMS database. Advanced tools provided by the relational database vendor can also be used. SYBASE is an advanced high-performance transaction-oriented relational database management system for the VAX/VMS and UNIX operating systems. SYBASE is designed using a distributed open-systems architecture, and is relatively easy to interface with MUMPS.
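The text-based handshake described above, SQL out as ASCII, results back as ASCII, can be sketched with a simple tab-delimited framing. The paper does not specify the actual wire format, so the delimiter and layout below are assumptions:

```python
# Hedged sketch of an ASCII result-set exchange like the MUMPS-SYBASE
# interface described above: the client sends SQL as plain text and parses
# the RDBMS reply, also plain text, into records. The tab-delimited framing
# is an invented stand-in; the paper does not give the real wire format.
def encode_result(columns, rows):
    """Server side: serialize a result set as tab-delimited ASCII lines."""
    lines = ["\t".join(columns)]
    lines += ["\t".join(str(v) for v in row) for row in rows]
    return "\n".join(lines)

def decode_result(text):
    """Client side (the MUMPS role): parse ASCII lines back into records."""
    lines = text.split("\n")
    columns = lines[0].split("\t")
    return [dict(zip(columns, line.split("\t"))) for line in lines[1:]]

wire = encode_result(["id", "name"], [(1, "Kuzmak"), (2, "Walters")])
records = decode_result(wire)
print(records[0])  # {'id': '1', 'name': 'Kuzmak'}
```

Keeping both directions in plain text is what lets any language with string handling (MUMPS, C, FORTRAN, COBOL) sit on either end of the interface.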
Assessment of the psychometric properties of the Family Management Measure.
Knafl, Kathleen; Deatrick, Janet A; Gallo, Agatha; Dixon, Jane; Grey, Margaret; Knafl, George; O'Malley, Jean
2011-06-01
This paper reports the development of the Family Management Measure (FaMM), a measure of parental perceptions of family management of chronic conditions. By telephone interview, 579 parents of children ages 3 to 19 with a chronic condition (349 partnered mothers, 165 partners, 65 single mothers) completed the FaMM along with measures of child functional status, behavioral problems, and family functioning. Analyses addressed reliability, factor structure, and construct validity. Exploratory factor analysis yielded six scales: Child's Daily Life, Condition Management Ability, Condition Management Effort, Family Life Difficulty, Parental Mutuality, and View of Condition Impact. Internal consistency reliability ranged from .72 to .91, and test-retest reliability from .71 to .94. Construct validity was supported by significant correlations in hypothesized directions between FaMM scales and established measures. Results support the FaMM's reliability and validity, indicating that it performs in a theoretically meaningful way and taps distinct aspects of family response to childhood chronic conditions.
Vlach, Jiří; Junková, Petra; Karamonová, Ludmila; Blažková, Martina; Fukal, Ladislav
2017-01-01
In the last decade, strains of the genera Franconibacter and Siccibacter have been misclassified, first as Enterobacter and later as Cronobacter. Because Cronobacter is a serious foodborne pathogen that affects premature neonates and elderly individuals, such misidentification may not only falsify epidemiological statistics but also lead to false results in tests of powdered infant formula or other foods. Currently, the main ways of identifying Franconibacter and Siccibacter strains are biochemical testing and sequencing of the fusA gene as part of Cronobacter multilocus sequence typing (MLST); for these strains, however, the former is generally difficult and unreliable, while the latter remains expensive. To address this, we developed a fast, simple, and, most importantly, reliable method for Franconibacter and Siccibacter identification based on intact-cell matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS). Our method integrates the following steps: data preprocessing using mMass software; principal-component analysis (PCA) for the selection of mass spectrum fingerprints of Franconibacter and Siccibacter strains; and optimization of the Biotyper database settings for the creation of main spectrum projections (MSPs). This methodology enabled us to create an in-house MALDI MS database that extends the current MALDI Biotyper database with Franconibacter and Siccibacter strains. Finally, we verified our approach using seven previously unclassified strains, all of which were correctly identified, thereby validating our method. IMPORTANCE We show that the majority of methods currently used for the identification of Franconibacter and Siccibacter bacteria are not able to properly distinguish these strains from those of Cronobacter. While sequencing of the fusA gene as part of Cronobacter MLST remains the most reliable such method, it is highly expensive and time-consuming.
Here, we demonstrate a cost-effective and reliable alternative that correctly distinguishes between Franconibacter, Siccibacter, and Cronobacter bacteria and identifies Franconibacter and Siccibacter at the species level. Using intact-cell MALDI-TOF MS, we extend the current MALDI Biotyper database with 11 Franconibacter and Siccibacter MSPs. In addition, the use of our approach is likely to lead to a more reliable identification scheme for Franconibacter and Siccibacter strains and, consequently, a more trustworthy epidemiological picture of their involvement in disease. PMID:28455327
Data base management system for lymphatic filariasis--a neglected tropical disease.
Upadhyayula, Suryanaryana Murty; Mutheneni, Srinivasa Rao; Kadiri, Madhusudhan Rao; Kumaraswamy, Sriram; Nelaturu, Sarat Chandra Babu
2012-01-01
Researchers working in the area of public health are confronted with large volumes of data on various aspects of entomology and epidemiology. Extracting the relevant information from these data requires a dedicated database management system. In this paper, we describe the use of the database we developed for lymphatic filariasis. The application is built on a Model View Controller (MVC) architecture, with MySQL as the database and a web-based interface. We have collected and incorporated data on filariasis from the Karimnagar, Chittoor, and East and West Godavari districts of Andhra Pradesh, India. The database stores the collected data, retrieves information, and produces various combined reports on filarial aspects, which in turn help public health officials understand the burden of disease in a particular locality. This information is likely to play an important role in decision making for effective control of filarial disease and integrated vector management operations.
Owens, John
2009-01-01
Technological advances in the acquisition of DNA and protein sequence information, and the resulting onrush of data, can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is comparable to that required by electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational database manager into larger database families that contribute to local analyses, reports, or interactive HTML pages, or are exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of Microsoft, Macintosh, and UNIX operating systems.
Deng, Chen-Hui; Zhang, Guan-Min; Bi, Shan-Shan; Zhou, Tian-Yan; Lu, Wei
2011-07-01
The aim of this study was to develop a therapeutic drug monitoring (TDM) network server for tacrolimus in Chinese renal transplant patients, which helps doctors manage patient information and provides three levels of prediction. The database management system MySQL was employed to build and manage the database of patient and doctor information, and hypertext markup language (HTML) and Java Server Pages (JSP) technology were employed to construct the network server for database management. Based on the population pharmacokinetic model of tacrolimus for Chinese renal transplant patients, these technologies were used to construct the population prediction and subpopulation prediction modules. Based on the Bayesian principle and maximization of the posterior probability function, an objective function was established and minimized by an optimization algorithm to estimate a patient's individual pharmacokinetic parameters. The network server provides the basic functions for database management and three levels of prediction to help doctors optimize the tacrolimus regimen for Chinese renal transplant patients.
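The Bayesian (MAP) individual estimation step can be made concrete with a deliberately tiny example: a one-compartment model at steady state, a log-normal prior on clearance, and a crude grid minimizer. The model, priors, dose rate, and minimizer below are illustrative stand-ins, not the population PK model or optimizer the study actually used:

```python
# Minimal sketch of MAP individual-parameter estimation as described above,
# for a one-compartment model at steady state. Model, priors, and the crude
# grid minimizer are invented stand-ins for the study's actual PK model.
import math

POP_CL, OMEGA = 20.0, 0.3    # assumed population clearance (L/h), log-SD
SIGMA = 0.2                  # assumed residual log-error SD
DOSE_RATE = 100.0            # assumed constant infusion rate (mg/h)

def predict_css(cl):
    """Steady-state concentration for a constant-rate input."""
    return DOSE_RATE / cl

def neg_log_posterior(cl, observed_css):
    # Data misfit plus deviation of the individual parameter from the prior.
    resid = (math.log(observed_css) - math.log(predict_css(cl))) / SIGMA
    prior = (math.log(cl) - math.log(POP_CL)) / OMEGA
    return 0.5 * (resid**2 + prior**2)

def map_estimate(observed_css):
    """Grid search for the clearance minimizing the posterior objective."""
    grid = [POP_CL * math.exp(x / 100.0) for x in range(-100, 101)]
    return min(grid, key=lambda cl: neg_log_posterior(cl, observed_css))

# An observed level above the population prediction pulls clearance below
# the population value, but the prior shrinks the estimate toward POP_CL.
print(round(map_estimate(6.0), 1))
```

This shrinkage toward the population value is exactly what distinguishes the server's "individual" (MAP) prediction level from its population and subpopulation levels.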
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-22
... titled, ``Department of Homeland Security/Federal Emergency Management Agency--006 Citizen Corps Database...) authorities; (5) purpose; (6) routine uses of information; (7) system manager and address; (8) notification... Database'' and retitle it ``DHS/FEMA--006 Citizen Corps Program System of Records.'' FEMA administers the...
NREL: U.S. Life Cycle Inventory Database - Project Management Team
Information about the U.S. Life Cycle Inventory (LCI) Database project management team, along with additional project information, is listed on this page.
Development of a Relational Database for Learning Management Systems
ERIC Educational Resources Information Center
Deperlioglu, Omer; Sarpkaya, Yilmaz; Ergun, Ertugrul
2011-01-01
In today's world, Web-Based Distance Education Systems have a great importance. Web-based Distance Education Systems are usually known as Learning Management Systems (LMS). In this article, a database design, which was developed to create an educational institution as a Learning Management System, is described. In this sense, developed Learning…
An Overview of the Object Protocol Model (OPM) and the OPM Data Management Tools.
ERIC Educational Resources Information Center
Chen, I-Min A.; Markowitz, Victor M.
1995-01-01
Discussion of database management tools for scientific information focuses on the Object Protocol Model (OPM) and data management tools based on OPM. Topics include the need for new constructs for modeling scientific experiments, modeling object structures and experiments in OPM, queries and updates, and developing scientific database applications…
Graphical user interfaces for symbol-oriented database visualization and interaction
NASA Astrophysics Data System (ADS)
Brinkschulte, Uwe; Siormanolakis, Marios; Vogelsang, Holger
1997-04-01
In this approach, two basic services designed for the engineering of computer-based systems are combined: a symbol-oriented man-machine service and a high-speed database service. The man-machine service is used to build graphical user interfaces (GUIs) for the database service; these interfaces are themselves stored using the database service. The idea is to create a GUI builder and a GUI manager for the database service, based upon the man-machine service, using the concept of symbols. With user-definable and predefined symbols, database contents can be visualized and manipulated in a very flexible and intuitive way. Using the GUI builder and GUI manager, users can build and operate their own graphical user interfaces for a given database according to their needs, without writing a single line of code.
NASA Technical Reports Server (NTRS)
Hu, Chaumin
2007-01-01
IPG Execution Service is a framework that reliably executes complex jobs on a computational grid, and is part of the IPG service architecture designed to support location-independent computing. The new grid service enables users to describe the platform on which they need a job to run, which allows the service to locate the desired platform, configure it for the required application, and execute the job. After a job is submitted, users can monitor it through periodic notifications, or through queries. Each job consists of a set of tasks that performs actions such as executing applications and managing data. Each task is executed based on a starting condition that is an expression of the states of other tasks. This formulation allows tasks to be executed in parallel, and also allows a user to specify tasks to execute when other tasks succeed, fail, or are canceled. The two core components of the Execution Service are the Task Database, which stores tasks that have been submitted for execution, and the Task Manager, which executes tasks in the proper order, based on the user-specified starting conditions, and avoids overloading local and remote resources while executing tasks.
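The task model described above, where each task starts when a boolean expression over the states of other tasks becomes true, can be sketched as follows. This is an illustrative re-implementation under assumed names, not the actual IPG code:

```python
# Each task is a (start_condition, action) pair; the scheduler runs a
# task once its condition over the other tasks' states holds. Because
# conditions can test for success OR failure, cleanup-style tasks run
# either way, as the abstract describes.

DONE, FAILED, WAITING = "done", "failed", "waiting"

def run(tasks):
    """tasks: name -> (start_condition(states) -> bool, action() -> bool)."""
    states = {name: WAITING for name in tasks}
    progress = True
    while progress:
        progress = False
        for name, (cond, action) in tasks.items():
            if states[name] == WAITING and cond(states):
                states[name] = DONE if action() else FAILED
                progress = True
    return states

tasks = {
    "stage_data": (lambda s: True, lambda: True),
    "run_app":    (lambda s: s["stage_data"] == DONE, lambda: False),
    # cleanup starts whether run_app succeeded or failed
    "cleanup":    (lambda s: s["run_app"] in (DONE, FAILED), lambda: True),
}
print(run(tasks))
```

Tasks whose conditions are simultaneously satisfied are independent and could equally be dispatched in parallel; the sequential loop here is only for clarity.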
Geodecision system for traceability and sustainable production of beef cattle in Brazil
NASA Astrophysics Data System (ADS)
Victoria, D. D.; Andrade, R. G.; Bolfe, L.; Batistella, M.; Pires, P. P.; Vicente, L. E.; Visoli, M. C.
2011-12-01
Beef cattle production sustainability depends on incorporating innovative tools and technologies which are easy to comprehend, economically viable, and spatially explicit into the registration of precise, reliable data about production practices. This research developed from the needs and demands of food safety and food quality in extensive beef cattle production within the scope of the policies of Southern Cone and European Union's countries. Initially, the OTAG project (Operational Management and Geodecisional Prototype to Track and Trace Agricultural Production) focused on the development of a prototype traceability of cattle. The aim for the project's next phase is to enhance the electronic devices used in the identification and positioning of the animals, and the incorporation of more management and sanitary information. Besides, we intend to structure a database that enables the inclusion of greater amount of geospatial information linked to environmental aspects, such as water deficit, vegetation vigour, degradation indices of pasture areas, among others. For the extraction of knowledge, and the presentation of the results, we propose the development of a friendly interface to facilitate the exploration of the textual, tabular and geospatial information useful for the user.
Privacy information management for video surveillance
NASA Astrophysics Data System (ADS)
Luo, Ying; Cheung, Sen-ching S.
2013-05-01
The widespread deployment of surveillance cameras has raised serious privacy concerns. Many privacy-enhancing schemes have been proposed to automatically redact images of trusted individuals in the surveillance video. To identify these individuals for protection, the most reliable approach is to use biometric signals such as iris patterns as they are immutable and highly discriminative. In this paper, we propose a privacy data management system to be used in a privacy-aware video surveillance system. The privacy status of a subject is anonymously determined based on her iris pattern. For a trusted subject, the surveillance video is redacted and the original imagery is considered to be the privacy information. Our proposed system allows a subject to access her privacy information via the same biometric signal for privacy status determination. Two secure protocols, one for privacy information encryption and the other for privacy information retrieval are proposed. Error control coding is used to cope with the variability in iris patterns and efficient implementation is achieved using surrogate data records. Experimental results on a public iris biometric database demonstrate the validity of our framework.
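The role of error control coding in tolerating iris variability can be illustrated with a heavily simplified fuzzy-commitment-style sketch: a secret is expanded with a repetition code, XORed with the enrollment template to form stored helper data, and recovered from a noisy probe by majority vote. The parameters and names are illustrative; this is not the paper's actual protocol:

```python
import secrets

R = 5  # repetition factor: majority vote corrects up to 2 flips per symbol

def encode(secret_bits):
    return [b for b in secret_bits for _ in range(R)]

def decode(code_bits):
    return [int(sum(code_bits[i:i + R]) > R // 2)
            for i in range(0, len(code_bits), R)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

secret = [1, 0, 1, 1]
iris_enroll = [secrets.randbelow(2) for _ in range(len(secret) * R)]
helper = xor(encode(secret), iris_enroll)  # stored helper data

iris_probe = list(iris_enroll)
iris_probe[3] ^= 1                         # one bit of sensor noise
recovered = decode(xor(helper, iris_probe))
print(recovered == secret)  # True: the repetition code absorbed the noise
```

Real systems use far stronger codes (e.g., BCH or Reed-Solomon) because iris codes are long and their noise rates are much higher than one flipped bit.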
Czauderna, Piotr; Haeberle, Beate; Hiyama, Eiso; Rangaswami, Arun; Krailo, Mark; Maibach, Rudolf; Rinaldi, Eugenia; Feng, Yurong; Aronson, Daniel; Malogolowkin, Marcio; Yoshimura, Kenichi; Leuschner, Ivo; Lopez-Terrada, Dolores; Hishiki, Tomoro; Perilongo, Giorgio; von Schweinitz, Dietrich; Schmid, Irene; Watanabe, Kenichiro; Derosa, Marisa; Meyers, Rebecka
2016-01-01
Introduction Contemporary state-of-the-art management of cancer is increasingly defined by individualized treatment strategies. For very rare tumors such as hepatoblastoma, the development of biologic markers and the identification of reliable prognostic risk factors for tailoring treatment remain very challenging. The Children's Hepatic tumors International Collaboration (CHIC) is a novel international response to this challenge. Methods Four multicenter trial groups (COG, SIOPEL, GPOH, and JPLT), which have performed prospective controlled studies of hepatoblastoma over the past two decades, joined forces to form the CHIC consortium. With the support of the data management group CINECA, CHIC developed a centralized online platform where data from eight completed hepatoblastoma trials were merged to form a database of 1605 hepatoblastoma cases treated between 1988 and 2008. The resulting dataset is described, and the relationships between selected patient and tumor characteristics and the risk of adverse disease outcome (event-free survival; EFS) are examined. Results Significantly increased risk of an EFS event was noted for advanced PRETEXT group, macrovascular venous or portal involvement, contiguous extrahepatic disease, primary tumor multifocality, and tumor rupture at enrollment. Higher age (≥8 years), low AFP (<100 ng/ml), and metastatic disease were associated with the worst outcomes. Conclusion We have identified novel prognostic factors for hepatoblastoma, and confirmed established ones, that will be used to develop a future common global risk stratification system. The mechanics of developing the globally accessible web-based portal, building and refining the database, and performing this first statistical analysis have laid the foundation for future collaborative efforts.
This is an important step toward refining the risk-based grouping and the approach to future treatment stratification; we believe our collaboration offers a template for others to follow in the study of rare tumors and diseases. PMID:26655560
ERIC Educational Resources Information Center
Blair, John C., Jr.
1982-01-01
Outlines the important factors to be considered in selecting a database management system for use with a microcomputer and presents a series of guidelines for developing a database. General procedures, report generation, data manipulation, information storage, word processing, data entry, database indexes, and relational databases are among the…
A Summary of the Naval Postgraduate School Research Program
1989-08-30
Topics include: Fundamental Theory for Automatically Combining Changes to Software Systems; a Database-System Approach to Software Engineering Environments (SEEs); Multilevel Database Security; Temporal Database Management and Real-Time Database Computers; and the Multi-Lingual, Multi-Model, Multi-Backend Database.
Wachtel, Ruth E; Dexter, Franklin
2013-12-01
The purpose of this article is to teach operating room managers, financial analysts, and those with a limited knowledge of search engines, including PubMed, how to locate articles they need in the areas of operating room and anesthesia group management. Many physicians are unaware of current literature in their field and of evidence-based practices. The most common source of information is colleagues, and many people making management decisions do not read published scientific articles. Databases such as PubMed are available to search for such articles, and other databases, such as citation indices and Google Scholar, can be used to uncover additional articles. Nevertheless, most people are reluctant to use help resources when they do not know how to accomplish a task, and they are especially reluctant to use on-line help files. Help files and search databases are often difficult to use because they have been designed for users already familiar with the field and have specialized vocabularies unique to the application. MeSH terms in PubMed are not useful alternatives for operating room management, an important limitation because MeSH is the default when search terms are entered in PubMed. Librarians or those trained in informatics can be valuable assets for searching unusual databases, but they must possess domain knowledge relative to the subject they are searching. The search methods we review are especially important when the subject area (e.g., anesthesia group management) is so specific that only 1 or 2 articles address the topic of interest. The materials are presented broadly enough that the reader can extrapolate the findings to other areas of clinical and management issues in anesthesiology.
Wenner, Joshua B; Norena, Monica; Khan, Nadia; Palepu, Anita; Ayas, Najib T; Wong, Hubert; Dodek, Peter M
2009-09-01
Although reliability of severity of illness and predicted probability of hospital mortality have been assessed, interrater reliability of the abstraction of primary and other intensive care unit (ICU) admitting diagnoses and underlying comorbidities has not been studied. Patient data from one ICU were originally abstracted and entered into an electronic database by an ICU nurse. A research assistant reabstracted patient demographics, ICU admitting diagnoses and underlying comorbidities, and elements of Acute Physiology and Chronic Health Evaluation II (APACHE II) score from 100 random patients of 474 admitted during 2005 using an identical electronic database. Chamberlain's percent positive agreement was used to compare diagnoses and comorbidities between the 2 data abstractors. A kappa statistic was calculated for demographic variables, Glasgow Coma Score, APACHE II chronic health points, and HIV status. Intraclass correlation was calculated for acute physiology points and predicted probability of hospital mortality. Percent positive agreement for ICU primary and other admitting diagnoses ranged from 0% (primary brain injury) to 71% (sepsis), and for underlying comorbidities, from 40% (coronary artery bypass graft) to 100% (HIV). Agreement as measured by kappa statistic was strong for race (0.81) and age points (0.95), moderate for chronic health points (0.50) and HIV (0.66), and poor for Glasgow Coma Score (0.36). Intraclass correlation showed a moderate-high agreement for acute physiology points (0.88) and predicted probability of hospital mortality (0.71). Reliability for ICU diagnoses and elements of the APACHE II score is related to the objectivity of primary data in the medical charts.
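The agreement statistics reported above can be computed directly from paired abstractions. A minimal sketch with made-up data (not the study's): Cohen's kappa for a binary variable, and Chamberlain's percent positive agreement, which ignores joint negatives:

```python
def cohens_kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n           # marginal positive rates
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)      # agreement expected by chance
    return (po - pe) / (1 - pe)

def percent_positive_agreement(a, b):
    both = sum(x and y for x, y in zip(a, b))   # concordant positives
    either = sum(x or y for x, y in zip(a, b))  # positives by either rater
    return 2 * both / (both + either)           # = 2a / (2a + b + c)

# Two abstractors coding "sepsis present" for 10 charts (illustrative data)
r1 = [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
r2 = [1, 1, 0, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(r1, r2), 2))                # 0.8
print(round(percent_positive_agreement(r1, r2), 2))  # 0.91
```

Percent positive agreement is preferred for diagnoses here because most diagnoses are absent in most patients, so chance-corrected statistics computed over the full 2x2 table can be misleading.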
The Perfect Marriage: Integrated Word Processing and Data Base Management Programs.
ERIC Educational Resources Information Center
Pogrow, Stanley
1983-01-01
Discussion of database integration and how it operates includes recommendations on compatible brand name word processing and database management programs, and a checklist for evaluating essential and desirable features of the available programs. (MBR)
Teaching Case: Adapting the Access Northwind Database to Support a Database Course
ERIC Educational Resources Information Center
Dyer, John N.; Rogers, Camille
2015-01-01
A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-15
... administrator from the private sector to create and operate TV band databases. The TV band database... database administrator will be responsible for operation of their database and coordination of the overall functioning of the database with other administrators, and will provide database access to TVBDs. The...
Small Business Innovations (Integrated Database)
NASA Technical Reports Server (NTRS)
1992-01-01
Because of the diversity of NASA's information systems, it was necessary to develop DAVID as a central database management system. Under a Small Business Innovation Research (SBIR) grant, Ken Wanderman and Associates, Inc. designed software tools enabling scientists to interface with DAVID and commercial database management systems, as well as artificial intelligence programs. The software has been installed at a number of data centers and is commercially available.
Implementation of Quality Assurance and Quality Control Measures in the National Phenology Database
NASA Astrophysics Data System (ADS)
Gerst, K.; Rosemartin, A.; Denny, E. G.; Marsh, L.; Barnett, L.
2015-12-01
The USA National Phenology Network (USA-NPN; www.usanpn.org) serves science and society by promoting a broad understanding of plant and animal phenology and the relationships among phenological patterns and environmental change. The National Phenology Database has over 5.5 million observation records for plants and animals for the period 1954-2015. These data have been used in a number of science, conservation and resource management applications, including national assessments of historical and potential future trends in phenology, regional assessments of spatio-temporal variation in organismal activity, and local monitoring for invasive species detection. Customizable data downloads are freely available, and data are accompanied by FGDC-compliant metadata, data-use and data-attribution policies, and vetted documented methodologies and protocols. The USA-NPN has implemented a number of measures to ensure both quality assurance and quality control. Here we describe the resources that have been developed so that incoming data submitted by both citizen and professional scientists are reliable; these include training materials, such as a botanical primer and species profiles. We also describe a number of automated quality control processes applied to incoming data streams to optimize data output quality. Existing and planned quality control measures for output of raw and derived data include: (1) Validation of site locations, including latitude, longitude, and elevation; (2) Flagging of records that conflict for a given date for an individual plant; (3) Flagging where species occur outside known ranges; (4) Flagging of records when phenophases occur outside of the plausible order for a species; (5) Flagging of records when intensity measures do not follow a plausible progression for a phenophase; (6) Flagging of records when a phenophase occurs outside of the plausible season, and (7) Quantification of precision and uncertainty for estimation of phenological metrics. 
Finally, we will describe preliminary work to develop methods for outlier detection that will inform plausibility checks. Ultimately we aim to maximize data quality of USA-NPN data and data products to ensure that this database can continue to be reliably applied for science and decision-making for multiple scales and applications.
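Checks like those enumerated above lend themselves to simple per-record flagging. A sketch, with flag names, thresholds, and record fields invented for illustration rather than taken from USA-NPN's actual rules:

```python
def qc_flags(rec, known_range, season):
    """Return a list of quality-control flags for one observation record."""
    flags = []
    # (1) validate site location
    if not (-90 <= rec["lat"] <= 90 and -180 <= rec["lon"] <= 180):
        flags.append("BAD_COORDINATES")
    # (3) flag observations outside the species' known range
    if not (known_range["lat_min"] <= rec["lat"] <= known_range["lat_max"]):
        flags.append("OUTSIDE_KNOWN_RANGE")
    # (6) flag phenophases outside the plausible season
    if not (season["start_doy"] <= rec["day_of_year"] <= season["end_doy"]):
        flags.append("OUT_OF_SEASON")
    return flags

rec = {"lat": 61.2, "lon": -149.9, "day_of_year": 20}  # midwinter record
known_range = {"lat_min": 25.0, "lat_max": 49.0}       # hypothetical species range
season = {"start_doy": 60, "end_doy": 300}             # plausible flowering window
print(qc_flags(rec, known_range, season))  # ['OUTSIDE_KNOWN_RANGE', 'OUT_OF_SEASON']
```

Flagging rather than deleting keeps the raw observation available while letting downstream users filter to the quality level their application needs.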
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1993-01-01
One of the biggest problems facing NASA today is providing scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing astrophysics databases that are physically distributed all over the world. Once a search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for image datasets, which allow much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.
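The pointer-based idea can be sketched in miniature: a cross-database search collects and caches pointers (site, key) to matching records instead of copying the records, and data moves only when a pointer is dereferenced. Names and structures here are illustrative, not VIEWCACHE's actual design:

```python
# Two "database sites" holding astronomical records (toy data)
sites = {
    "archiveA": {"obj1": {"ra": 10.1, "mag": 14.2},
                 "obj2": {"ra": 10.3, "mag": 17.9}},
    "archiveB": {"objX": {"ra": 10.1, "mag": 14.5}},
}

def search(predicate):
    """Return pointers, not data: no bulk movement between sites."""
    return [(site, key)
            for site, db in sites.items()
            for key, rec in db.items() if predicate(rec)]

def dereference(pointer):
    site, key = pointer
    return sites[site][key]  # the actual fetch happens only now

cache = search(lambda r: r["mag"] < 15)  # cached pointer set
print(cache)                             # [('archiveA', 'obj1'), ('archiveB', 'objX')]
print(dereference(cache[0]))
```

Because the cached result is only a set of pointers, it is cheap to store, cheap to intersect with other cached searches, and can be refreshed incrementally as the underlying sites change.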
Evidence generation from healthcare databases: recommendations for managing change.
Bourke, Alison; Bate, Andrew; Sauer, Brian C; Brown, Jeffrey S; Hall, Gillian C
2016-07-01
There is an increasing reliance on databases of healthcare records for pharmacoepidemiology and other medical research, and such resources are often accessed over a long period of time so it is vital to consider the impact of changes in data, access methodology and the environment. The authors discuss change in communication and management, and provide a checklist of issues to consider for both database providers and users. The scope of the paper is database research, and changes are considered in relation to the three main components of database research: the data content itself, how it is accessed, and the support and tools needed to use the database. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Lancaster, Jeff; Dillard, Michael; Alves, Erin; Olofinboba, Olu
2014-01-01
The User Guide details the Access Database provided with the Flight Deck Interval Management (FIM) Display Elements, Information, & Annunciations program. The goal of this User Guide is to support ease of use and the ability to quickly retrieve and select items of interest from the Database. The Database includes FIM Concepts identified in a literature review preceding the publication of this document. Only items that are directly related to FIM (e.g., spacing indicators), which change or enable FIM (e.g., menu with control buttons), or which are affected by FIM (e.g., altitude reading) are included in the database. The guide has been expanded from previous versions to cover database structure, content, and search features with voiced explanations.
NASA Technical Reports Server (NTRS)
Shull, Sarah A.; Gralla, Erica L.; deWeck, Olivier L.; Shishko, Robert
2006-01-01
One of the major logistical challenges in human space exploration is asset management. This paper presents observations on the practice of asset management in support of human space flight to date and discusses a functional-based supply classification and a framework for an integrated database that could be used to improve asset management and logistics for human missions to the Moon, Mars and beyond.
NASA Technical Reports Server (NTRS)
Dobinson, E.
1982-01-01
General requirements for an information management system for the deep space network (DSN) are examined. A concise review of available database management system technology is presented. It is recommended that a federation of logically decentralized databases be implemented for the Network Information Management System of the DSN. Overall characteristics of the federation are specified, as well as reasons for adopting this approach.
Applications of Database Machines in Library Systems.
ERIC Educational Resources Information Center
Salmon, Stephen R.
1984-01-01
Characteristics and advantages of database machines are summarized and their applications to library functions are described. The ability to attach multiple hosts to the same database and the flexibility to choose operating and database management systems for different functions without losing access to a common database are noted. (EJS)
Similarity Search in Large Collections of Biometric Data
2009-10-01
instantaneous identification of a person by converting the biometric into a digital form and then comparing it against a computerized database. They can...combined to get reliable results. Exact matches in biometric collections have very little meaning, and only a relative ordering of database objects with...running several indices for different aspects of the data, e.g., facial features, fingerprints, and palmprints of a person, together. The system then
1994-06-30
Crack Tip Opening Displacement (CTOD) Fracture Toughness Measurement". The method has found application in elastic-plastic fracture mechanics (EPFM)... Topics include a proposed material property database format and hierarchy, and a sample application of the material property database...the E 49.05 sub-committee. The relevant quality indicators applicable to the present program are: source of data and statistical basis of data
Power Electronics and Electric Machines | Transportation Research | NREL
A go-to resource for information from cutting-edge thermal management research. NREL's thermal management and reliability research advances thermal management technologies to improve performance, cost, and reliability for power electronics and other powertrain components such as the battery and the motor.
Heterogeneous distributed query processing: The DAVID system
NASA Technical Reports Server (NTRS)
Jacobs, Barry E.
1985-01-01
The objective of the Distributed Access View Integrated Database (DAVID) project is the development of an easy to use computer system with which NASA scientists, engineers and administrators can uniformly access distributed heterogeneous databases. Basically, DAVID will be a database management system that sits alongside already existing database and file management systems. Its function is to enable users to access the data in other languages and file systems without having to learn the data manipulation languages. Given here is an outline of a talk on the DAVID project and several charts.
Measures of Financial Capacity: A Review.
Ghesquiere, Angela R; McAfee, Caitlin; Burnett, Jason
2017-05-23
Capacity to manage finances and make financial decisions can affect risk for financial exploitation and is often the basis for legal determinations of conservatorship/guardianship. Several structured assessments of financial capacity have been developed, but have not been compared regarding their focus, validity, or reliability. Therefore, we conducted a review of financial capacity measures to examine these factors. We searched electronic databases, reference lists in identified articles, conference proceedings and other grey literature for measures of financial capacity. We then extracted data on the length and domains of each measure, the population for which they were intended, and their validity and reliability. We identified 10 structured measures of financial capacity. Most measures could be completed in 25-30 min, and were designed to be administered to older adults with some level of cognitive impairment. Reliability and validity were high for most. Measurement of financial capacity is complex and multidimensional. When selecting a measure of financial capacity, consideration should be made of the population of focus and the domains of capacity to be assessed. More work is needed on the cultural sensitivity of financial capacity measures, their acceptability, and their use in clinical work. Better understanding of when, and to whom, to administer different financial capacity measures could enhance the ability to accurately detect those suffering from impaired financial capacity, and prevent related negative outcomes like financial exploitation. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
DOT National Transportation Integrated Search
2005-01-01
In 2003, an Internet-based Geotechnical Database Management System (GDBMS) was developed for the Virginia Department of Transportation (VDOT) using distributed Geographic Information System (GIS) methodology for data management, archival, retrieval, ...
Improving Recall Using Database Management Systems: A Learning Strategy.
ERIC Educational Resources Information Center
Jonassen, David H.
1986-01-01
Describes the use of microcomputer database management systems to facilitate the instructional uses of learning strategies relating to information processing skills, especially recall. Two learning strategies, cross-classification matrixing and node acquisition and integration, are highlighted. (Author/LRW)
Using Online Databases in Corporate Issues Management.
ERIC Educational Resources Information Center
Thomsen, Steven R.
1995-01-01
Finds that corporate public relations practitioners felt they were able, using online database and information services, to intercept issues earlier in the "issue cycle" and thus enable their organizations to develop more "proactionary" or "catalytic" issues management response strategies. (SR)
The workforce trends of nurses in Lebanon (2009-2014): A registration database analysis.
Alameddine, Mohamad; Chamoun, Nariman; Btaiche, Rachel; El Arnaout, Nour; Richa, Nathalie; Samaha-Nuwayhid, Helen
2017-01-01
Analysis of nursing registration databases is a highly informative approach that provides accurate and reliable information to support evidence-based decisions relevant to nursing workforce planning, management, and development. This study presents the first systematic analysis of the nursing registration database in Lebanon. It reports on workforce distribution and trends using an updated version of the Order of Nurses in Lebanon (ONL) databases. This study presents a secondary data analysis of a de-identified subset of the updated ONL registration database. The workforce participation status of ONL-registered nurses was categorized as active and eligible. For active nurses, sectors and sub-sectors of employment were defined. Eligible nurses were categorized as unemployed, working outside nursing, and working abroad. SPSS was used to conduct descriptive analysis presenting workforce trends of Lebanese nurses for the years 2009-2014 as frequencies, percentages, and percentage changes. Increases in the size of the Active (35%) and Eligible (86%) categories of nurses were observed over the past six years. The majority of nurses fell in the below-35 age group (60% in 2014). The hospital sector remained the principal employer, with 87% of Lebanese nurses working in hospitals in 2014. A 173% increase was reported for nurses working abroad. Despite the growth of the Active nursing workforce, the skewed distribution of nurses in the below-35 age group and the growth in the Eligible category, especially for nurses living abroad, raise concerns about the longevity of nurses in the profession and the reasons for their attrition from the workforce. There is a need to investigate the push and pull factors affecting nurses and to design policies and interventions that would encourage nurses to remain active in Lebanon. Furthermore, policies and interventions that would create employment opportunities outside hospitals, especially in the community sector, are recommended.
The workforce trends of nurses in Lebanon (2009–2014): A registration database analysis
Chamoun, Nariman; Btaiche, Rachel; El Arnaout, Nour; Richa, Nathalie; Samaha-Nuwayhid, Helen
2017-01-01
Background Analysis of the nursing registration databases is a highly informative approach that provides accurate and reliable information supporting evidence based decisions relevant to the nursing workforce planning, management and development. This study presents the first systematic analysis of the nursing registration database in Lebanon. It Reports on the workforce distribution and trends using an updated version of the Order of Nurses in Lebanon (ONL) databases. Methods This study presents a secondary data analysis of a de-identified subset of the updated ONL registration database. The workforce participation status of ONL registered nurses was categorized as active and eligible. For active nurses sectors and sub-sectors of employment were defined. Eligible nurses were categorized as unemployed, working outside nursing and working abroad. SPSS was used to conduct descriptive analysis to present workforce trends of Lebanese nurses for year 2009–2014 as frequencies, percentages and percentage changes. Results Increases in the size of the Active (35%) and Eligible (86%) nurses were observed over the past six years. The majority of nurses fell in the below 35 years age group (60% in 2014). The hospital sector remained the principle employer, with 87% of Lebanese nurses working in hospitals in 2014. A 173% increases was reported for nurses working abroad. Discussion Despite the growth of the Active nursing workforce, the skewed distribution of nurses in the below 35 age group and the growth in the Eligible category, especially for nurses living abroad, raise concerns on the longevity of nurses in the profession and the reasons for their attrition from the workforce. Conclusion There is a need to investigate the push and pull factors that are affecting nurses and the design of policies and interventions that would encourage nurses to remain active in Lebanon. 
Furthermore, policies and interventions that would create employment opportunities outside hospitals, especially in the Community sector, are recommended. PMID:28800618
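The percentage changes reported above (e.g. the 35% growth in Active nurses) follow the standard formula; a minimal sketch, using invented registration counts rather than the actual ONL figures:

```python
def pct_change(start, end):
    """Percentage change between two registration counts."""
    return (end - start) / start * 100.0

# Hypothetical registration counts for 2009 and 2014 (illustrative only).
active_2009, active_2014 = 10_000, 13_500
print(round(pct_change(active_2009, active_2014)))  # 35
```

Applied per category and per year, the same calculation yields the frequency and percentage-change trend tables a package such as SPSS produces.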
Metnitz, P G; Laback, P; Popow, C; Laback, O; Lenz, K; Hiesmayr, M
1995-01-01
Patient Data Management Systems (PDMS) for ICUs collect, present and store clinical data. Various intentions, such as quality control or scientific purposes, make analysis of those digitally stored data desirable. The aim of the Intensive Care Data Evaluation project (ICDEV) was to provide a database tool for the analysis of data recorded at various ICUs of the Vienna General Hospital (University Clinics of Vienna), where two different PDMSs are used: CareVue 9000 (Hewlett Packard, Andover, USA) at two ICUs (one medical and one neonatal) and PICIS Chart+ (PICIS, Paris, France) at one cardiothoracic ICU. CONCEPT AND METHODS: Development began with a clinically oriented analysis of the data collected in a PDMS at an ICU. After defining the database structure, we established a client-server database system under Microsoft Windows NT and developed a user-friendly data-querying application using Microsoft Visual C++ and Visual Basic. ICDEV was successfully installed at three different ICUs; adjustments to the different PDMS configurations were made within a few days. The database structure we developed enables a powerful query concept, an 'expert question compiler' that can help answer almost any clinical question. Several program modules facilitate queries at the patient, group and unit level. Results from ICDEV queries are automatically transferred to Microsoft Excel for display (as configurable tables and graphs) and further processing. The ICDEV concept is configurable for adjustment to different intensive care information systems and can be used to support computerized quality control. However, as long as no sufficient artifact-recognition or data-validation software exists for automatically recorded patient data, the reliability of these data and their usage for computer-assisted quality control remain unclear and should be studied further.
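The patient/group/unit-level queries described above can be illustrated with a toy relational schema; table and column names here are invented for illustration (not the actual ICDEV structure), and SQLite stands in for the original client-server system:

```python
import sqlite3

# Minimal sketch of a PDMS-style query database; the schema is a
# hypothetical simplification, not the ICDEV design.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient (pid INTEGER PRIMARY KEY, unit TEXT);
CREATE TABLE vital (pid INTEGER, name TEXT, value REAL);
""")
con.executemany("INSERT INTO patient VALUES (?, ?)",
                [(1, "medical"), (2, "medical"), (3, "cardio")])
con.executemany("INSERT INTO vital VALUES (?, ?, ?)",
                [(1, "hr", 80), (2, "hr", 100), (3, "hr", 90)])

# Unit-level query: mean heart rate per ICU, the kind of aggregate
# a tool like ICDEV would export to a spreadsheet for display.
rows = con.execute("""
    SELECT p.unit, AVG(v.value)
    FROM patient p JOIN vital v ON v.pid = p.pid
    WHERE v.name = 'hr'
    GROUP BY p.unit ORDER BY p.unit
""").fetchall()
print(rows)  # [('cardio', 90.0), ('medical', 90.0)]
```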
[Integrated DNA barcoding database for identifying Chinese animal medicine].
Shi, Lin-Chun; Yao, Hui; Xie, Li-Fang; Zhu, Ying-Jie; Song, Jing-Yuan; Zhang, Hui; Chen, Shi-Lin
2014-06-01
In order to construct an integrated DNA barcoding database for identifying Chinese animal medicine, the authors and their collaborators have completed extensive research on identifying Chinese animal medicines using DNA barcoding technology. Sequences from GenBank were analyzed simultaneously. Three different methods, BLAST, barcoding gap and tree building, were used to confirm the reliability of the barcode records in the database. The integrated DNA barcoding database for identifying Chinese animal medicine was constructed from three different parts: specimen, sequence and literature information. The database contains about 800 animal medicines along with their adulterants and closely related species. Unknown specimens can be identified by pasting their sequence record into the window on the ID page of the species identification system for traditional Chinese medicine (www.tcmbarcode.cn). The integrated DNA barcoding database for identifying Chinese animal medicine is significantly important for animal species identification, conservation of rare and endangered species, and sustainable utilization of animal resources.
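As a rough illustration of sequence-based identification, the sketch below matches a query sequence against toy reference records by string similarity; the real system uses BLAST against curated barcode sequences, and all species names and sequences here are invented:

```python
from difflib import SequenceMatcher

# Toy reference records; real identification uses BLAST against the
# curated barcode database, not this naive similarity measure.
references = {
    "Species A": "ATGCGTACGTTAGC",
    "Species B": "ATGCGAACGTTTGC",
}

def identify(query):
    """Return the reference species most similar to the query sequence."""
    return max(references,
               key=lambda sp: SequenceMatcher(None, query, references[sp]).ratio())

print(identify("ATGCGTACGTTAGC"))  # Species A (exact match)
```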
An incremental database access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, Nicholas; Sellis, Timos
1994-01-01
We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We completed a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values, as opposed to static ones computed off-line. Query feedback is a concept first introduced to the literature by our group. We employed query feedback both for adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
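The query-feedback idea of regressing on observed selectivities can be sketched with an ordinary least-squares polynomial fit; the data points are invented, and NumPy's `polyfit` stands in for whichever least-squares or spline regression the prototype actually used:

```python
import numpy as np

# Query-feedback sketch: observed (predicate value, actual selectivity)
# pairs from prior executions; a least-squares polynomial regresses the
# selectivity curve so the optimizer can use refined estimates instead
# of static off-line statistics.
values = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
selectivities = np.array([0.05, 0.11, 0.19, 0.32, 0.48])

coeffs = np.polyfit(values, selectivities, deg=2)   # least-squares fit
estimate = np.polyval(coeffs, 35.0)                  # refined estimate at 35
print(0.0 < estimate < 1.0)  # True
```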
Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven
2015-01-01
Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
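One common heuristic (an assumption here, not necessarily the presentation's exact method) treats the generic point estimate as the median of a lognormal prior and widens the distribution via an error factor as data applicability decreases:

```python
import math

# Sketch: turn a generic-database median failure rate into a lognormal
# prior whose spread reflects data-applicability uncertainty. The error
# factor EF = sqrt(p95 / p05) is a common heuristic; values are invented.
median = 1e-5          # generic point estimate (failures per hour)
ef = 10.0              # larger EF -> wider prior for less applicable data

mu = math.log(median)
sigma = math.log(ef) / 1.645   # 1.645 = z-score of the 95th percentile

p05 = math.exp(mu - 1.645 * sigma)
p95 = math.exp(mu + 1.645 * sigma)
print(round(p95 / p05))  # 100, i.e. EF squared
```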
Questioning reliability assessments of health information on social media.
Dalmer, Nicole K
2017-01-01
This narrative review examines assessments of the reliability of online health information retrieved through social media to ascertain whether health information accessed or disseminated through social media should be evaluated differently than other online health information. Several medical, library and information science, and interdisciplinary databases were searched using terms relating to social media, reliability, and health information. While social media's increasing role in health information consumption is recognized, studies are dominated by investigations of traditional (i.e., non-social media) sites. To more richly assess constructions of reliability when using social media for health information, future research must focus on health consumers' unique contexts, virtual relationships, and degrees of trust within their social networks.
PACSY, a relational database management system for protein structure and chemical shift analysis.
Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L
2012-10-01
PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
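A miniature of the kind of cross-table query PACSY supports, joining coordinate and chemical-shift tables on a shared key ID; the schema shown is a simplified invention, not the actual PACSY table definitions, and SQLite stands in for MySQL/PostgreSQL:

```python
import sqlite3

# Hypothetical miniature of linked relational tables: coordinates and
# chemical shifts joined on a shared key ID (names are illustrative).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE coord (key_id INTEGER, atom TEXT, x REAL, y REAL, z REAL);
CREATE TABLE shift (key_id INTEGER, atom TEXT, ppm REAL);
""")
con.execute("INSERT INTO coord VALUES (1, 'CA', 1.0, 2.0, 3.0)")
con.execute("INSERT INTO shift VALUES (1, 'CA', 56.2)")

# Combine structural and spectroscopic information in one query.
row = con.execute("""
    SELECT c.atom, c.x, s.ppm
    FROM coord c JOIN shift s
      ON s.key_id = c.key_id AND s.atom = c.atom
""").fetchone()
print(row)  # ('CA', 1.0, 56.2)
```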
Yang, Qi; Franco, Christopher M M; Sorokin, Shirley J; Zhang, Wei
2017-02-02
For sponges (phylum Porifera), there is no reliable molecular protocol available for species identification. To address this gap, we developed a multilocus-based Sponge Identification Protocol (SIP) validated by a sample of 37 sponge species belonging to 10 orders from South Australia. The universal barcode COI mtDNA, 28S rRNA gene (D3-D5), and the nuclear ITS1-5.8S-ITS2 region were evaluated for their suitability and capacity for sponge identification. The highest Bit Score was applied to infer the identity. The reliability of SIP was validated by phylogenetic analysis. The 28S rRNA gene and COI mtDNA performed better than the ITS region in classifying sponges at various taxonomic levels. A major limitation is that the databases are not well populated and possess low diversity, making it difficult to conduct the molecular identification protocol. The identification is also impacted by the accuracy of the morphological classification of the sponges whose sequences have been submitted to the database. Re-examination of the morphological identification further demonstrated and improved the reliability of sponge identification by SIP. Integrated with morphological identification, the multilocus-based SIP offers an improved protocol for more reliable and effective sponge identification, by coupling the accuracy of different DNA markers.
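The highest-bit-score decision rule can be sketched as follows; the hit lists are invented, and in practice the scores come from BLAST searches of each locus against the reference databases:

```python
# Sketch of the SIP decision rule: for each locus, take the reference
# hit with the highest bit score; hit data here are invented.
hits = {
    "COI": [("Species A", 910.0), ("Species B", 640.0)],
    "28S": [("Species A", 820.0), ("Species C", 500.0)],
}

def best_hit(locus_hits):
    """Identity inferred from the highest bit score at one locus."""
    return max(locus_hits, key=lambda h: h[1])[0]

ids = {locus: best_hit(h) for locus, h in hits.items()}
print(ids)  # {'COI': 'Species A', '28S': 'Species A'}
```

Agreement across loci (as here) strengthens the identification; disagreement flags the sample for morphological re-examination.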
Reflective Database Access Control
ERIC Educational Resources Information Center
Olson, Lars E.
2009-01-01
"Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…
Assessing Natural Hazard Vulnerability Through Marmara Region Using GIS
NASA Astrophysics Data System (ADS)
Sabuncu, A.; Garagon Dogru, A.; Ozener, H.
2013-12-01
Natural hazards are natural phenomena occurring in the Earth system, including geological and meteorological events such as earthquakes, floods, landslides, droughts, fires and tsunamis. Metropolitan cities are vulnerable to natural hazards due to their population densities, industrial facilities and properties. The urban layout of megacities is complex, since industrial facilities are interspersed with residential areas. The Marmara region, located in north-western Turkey, has suffered from natural hazards (earthquakes, floods etc.) for years. After the 1999 Kocaeli and Duzce earthquakes and the 2009 Istanbul flash floods, dramatic numbers of casualties and economic losses were reported by the authorities. Geographic information systems (GIS) have substantial capacity to support natural disaster management, as they provide more efficient and reliable analysis and evaluation of data, and thus better solutions for decision making before, during and after natural hazards. Earth science data and socio-economic data can be integrated into a GIS as different layers, and satellite data are used to understand changes before and after natural hazards. GIS software is powerful for combining different types of digital data. A natural hazard database for the Marmara region provides all these different types of digital data to users. Proper data collection, processing and analysis are critical to evaluate and identify hazards. The natural hazard database allows users to monitor, analyze and query past and recent disasters in the Marmara region. The long-term aim of this study is to develop a geodatabase and identify the natural hazard vulnerabilities of the region's metropolitan cities.
Soetens, Oriane; De Bel, Annelies; Echahidi, Fedoua; Vancutsem, Ellen; Vandoorslaer, Kristof; Piérard, Denis
2012-01-01
The performance of matrix-assisted laser desorption–ionization time of flight mass spectrometry (MALDI-TOF MS) for species identification of Prevotella was evaluated and compared with 16S rRNA gene sequencing. Using a Bruker database, 62.7% of the 102 clinical isolates were identified to the species level and 73.5% to the genus level. Extension of the commercial database improved these figures to, respectively, 83.3% and 89.2%. MALDI-TOF MS identification of Prevotella is reliable but needs a more extensive database. PMID:22301022
Iris indexing based on local intensity order pattern
NASA Astrophysics Data System (ADS)
Emerich, Simina; Malutan, Raul; Crisan, Septimiu; Lefkovits, Laszlo
2017-03-01
In recent years, iris biometric systems have increased in popularity and have proven capable of handling large-scale databases. The main advantages of these systems are accuracy and reliability. Proper classification of iris patterns is expected to reduce matching time in huge databases. This paper presents an iris indexing technique based on the Local Intensity Order Pattern. The performance of the present approach is evaluated on the UPOL database and compared with other recent systems designed for iris indexing. The results illustrate the potential of the proposed method for large-scale iris identification.
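A much-simplified sketch of an intensity-order descriptor: each interior pixel's four neighbours are ranked by intensity, and the rank permutation indexes a histogram bin. The real Local Intensity Order Pattern involves additional sampling and weighting details not shown here:

```python
import numpy as np
from itertools import permutations

# One bin per possible rank permutation of 4 neighbour intensities.
BINS = {p: i for i, p in enumerate(permutations(range(4)))}

def iop_histogram(img):
    """Normalised histogram of neighbour intensity-order patterns."""
    hist = np.zeros(len(BINS))
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            # Neighbours: up, right, down, left.
            nb = [img[r-1, c], img[r, c+1], img[r+1, c], img[r, c-1]]
            pattern = tuple(int(i) for i in np.argsort(nb))
            hist[BINS[pattern]] += 1
    return hist / hist.sum()   # normalised descriptor

img = np.arange(25, dtype=float).reshape(5, 5)
h = iop_histogram(img)
print(h.sum())  # 1.0
```

Two irises can then be compared (or indexed) by the distance between their normalised histograms rather than by pixel-level matching.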
Overview of Historical Earthquake Document Database in Japan and Future Development
NASA Astrophysics Data System (ADS)
Nishiyama, A.; Satake, K.
2014-12-01
In Japan, damage and disasters from historical large earthquakes have been documented and preserved. Compilation of historical earthquake documents started in the early 20th century, and 33 volumes of historical document source books (about 27,000 pages) have been published. However, these source books are not used effectively by researchers, owing to contamination by low-reliability historical records and the difficulty of searching by keywords and dates. To overcome these problems and to promote historical earthquake studies in Japan, construction of a text database began in the 21st century. For historical earthquakes from the beginning of the 7th century to the early 17th century, the "Online Database of Historical Documents in Japanese Earthquakes and Eruptions in the Ancient and Medieval Ages" (Ishibashi, 2009) has already been constructed. Its authors investigated the source books or original texts of the historical literature, emended the descriptions, and assigned a reliability to each historical document on the basis of its written age. Another project compiled the historical documents for seven damaging earthquakes that occurred along the Sea of Japan coast of Honshu, central Japan, in the Edo period (from the beginning of the 17th century to the middle of the 19th century) and constructed a text database and a seismic intensity database, now available on the web (in Japanese only). However, only about 9% of the earthquake source books have been digitized so far. We therefore plan to digitize all of the remaining historical documents through a research program that started in 2014. The specification of this database will be similar to the previous ones. We also plan to combine it with a liquefaction-traces database, to be constructed by another research program, by adding the location information described in the historical documents.
The constructed database will be used to estimate the distributions of seismic intensities and tsunami heights.
The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud
Karimi, Kamran; Vize, Peter D.
2014-01-01
As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org PMID:25380782
Fayed, Reham; Hamza, Dina; Abdallah, Heba; Kelany, Mohamed; Tahseen, Amira; Aref, Adel T
2017-01-01
Breast cancer is the most common cancer among females worldwide in general and in the Middle East and North Africa (MENA) region in particular. Management of breast cancer in the MENA region faces many challenges, including younger age at presentation, aggressive behaviour, lack of national breast screening programmes, lack of reliable data registries, and socioeconomic factors. These factors make applying the international guidelines for breast cancer management very challenging. The aim of this project was to explore the need for a regional breast cancer guideline and to survey the clinical practice of breast cancer management in the MENA region. Three web-based surveys were sent to more than 600 oncologists in the MENA region between August 2013 and October 2014. Full descriptive data and information regarding the application of international breast cancer guidelines were collected. The survey software used IP addresses to prevent duplication of collected data. Descriptive analysis and results were shown as numbers and percentages. During the period of the survey, 104 oncologists responded, representing around an 11% response rate. The majority of replies came from Egypt (59 responses (59%)), followed by Saudi Arabia (ten responses (9.6%)). Fifty-one per cent of responders had more than ten years of experience, and a further 31.7% had 5-10 years of experience. Seventy-four per cent were working in governmental hospitals, our target sector. Major gaps were reported in genetic counselling units (78.8% declared an absence of this service), national breast screening programmes (55.8%) and sentinel lymph node biopsy (43.3%). The need for regional guidelines for the management of breast cancer was agreed upon by 90.6% of responders.
There is a clear need to improve the management of breast cancer in the MENA region. Creating a national breast screening programme and a reliable database is essential. A regional guideline is required to establish the best possible management of breast cancer according to patient and disease specifications as well as the regional socioeconomic factors and available facilities. There is also a need to improve clinical research that meets the region's needs.
Nishio, Shin-Ya; Usami, Shin-Ichi
2017-03-01
Recent advances in next-generation sequencing (NGS) have given rise to new challenges due to the difficulties in variant pathogenicity interpretation and large dataset management, including many kinds of public population databases as well as public or commercial disease-specific databases. Here, we report a new database development tool, named the "Clinical NGS Database," for improving clinical NGS workflow through the unified management of variant information and clinical information. This database software offers a two-feature approach to variant pathogenicity classification. The first of these approaches is a phenotype similarity-based approach. This database allows the easy comparison of the detailed phenotype of each patient with the average phenotype of the same gene mutation at the variant or gene level. It is also possible to browse patients with the same gene mutation quickly. The other approach is a statistical approach to variant pathogenicity classification based on the use of the odds ratio for comparisons between the case and the control for each inheritance mode (families with apparently autosomal dominant inheritance vs. control, and families with apparently autosomal recessive inheritance vs. control). A number of case studies are also presented to illustrate the utility of this database. © 2016 The Authors. Human Mutation published by Wiley Periodicals, Inc.
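The case-control comparison can be sketched as a simple 2x2 odds ratio, computed separately for each inheritance mode; the allele counts below are invented:

```python
# Sketch of the statistical classification approach: an odds ratio
# comparing a variant's allele counts in cases vs. controls, computed
# separately for each inheritance mode. Counts here are invented.
def odds_ratio(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """(case_alt/case_ref) / (ctrl_alt/ctrl_ref) for a 2x2 table."""
    return (case_alt * ctrl_ref) / (case_ref * ctrl_alt)

# Autosomal-recessive families vs. control:
or_ar = odds_ratio(case_alt=12, case_ref=88, ctrl_alt=3, ctrl_ref=997)
print(round(or_ar, 1))  # 45.3
```

An odds ratio well above 1 (as here) indicates the variant is enriched in affected families, supporting a pathogenic classification under that inheritance mode.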
Mitchell, Amy E; Fraser, Jennifer A
2011-02-01
Support and education for parents faced with managing a child with atopic dermatitis is crucial to the success of current treatments. Interventions aiming to improve parent management of this condition are promising. Unfortunately, evaluation is hampered by lack of precise research tools to measure change. To develop a suite of valid and reliable research instruments to appraise parents' self-efficacy for performing atopic dermatitis management tasks; outcome expectations of performing management tasks; and self-reported task performance in a community sample of parents of children with atopic dermatitis. The Parents' Eczema Management Scale (PEMS) and the Parents' Outcome Expectations of Eczema Management Scale (POEEMS) were developed from an existing self-efficacy scale, the Parental Self-Efficacy with Eczema Care Index (PASECI). Each scale was presented in a single self-administered questionnaire, to measure self-efficacy, outcome expectations, and self-reported task performance related to managing child atopic dermatitis. Each was tested with a community sample of parents of children with atopic dermatitis, and psychometric evaluation of the scales' reliability and validity was conducted. A community-based convenience sample of 120 parents of children with atopic dermatitis completed the self-administered questionnaire. Participants were recruited through schools across Australia. Satisfactory internal consistency and test-retest reliability was demonstrated for all three scales. Construct validity was satisfactory, with positive relationships between self-efficacy for managing atopic dermatitis and general perceived self-efficacy; self-efficacy for managing atopic dermatitis and self-reported task performance; and self-efficacy for managing atopic dermatitis and outcome expectations. 
Factor analyses revealed two-factor structures for both PEMS and PASECI, with both scales containing factors related to performing routine management tasks and to managing the child's symptoms and behaviour. Factor analysis applied to POEEMS resulted in a three-factor structure, with factors relating to independent management of atopic dermatitis by the parent, involving healthcare professionals in management, and involving the child in the management of atopic dermatitis. Parents' self-efficacy and outcome expectations had a significant influence on self-reported task performance. Findings suggest that PEMS and POEEMS are valid and reliable instruments worthy of further psychometric evaluation. Likewise, the validity and reliability of PASECI were confirmed. Copyright © 2010 Elsevier Ltd. All rights reserved.
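Internal consistency of scales like PEMS and POEEMS is commonly reported as Cronbach's alpha; a self-contained sketch with invented item scores (the paper's actual statistics are not reproduced here):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).
def cronbach_alpha(items):
    """items: one list of respondent scores per scale item."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# Invented scores: 3 items, 4 respondents, 5-point Likert responses.
items = [[3, 4, 5, 2], [3, 5, 4, 2], [4, 4, 5, 1]]
alpha = cronbach_alpha(items)
print(0.9 < alpha <= 1.0)  # True
```

Values above roughly 0.7 are conventionally read as satisfactory internal consistency.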
NASA Technical Reports Server (NTRS)
Ebersole, M. M.
1983-01-01
JPL's management and administrative support systems have been developed piecemeal and without consistency in design approach over the past twenty years. These systems are now proving inadequate to support effective management of tasks and administration of the Laboratory. New approaches are needed. Modern database management technology has the potential to provide the foundation for more effective administrative tools for JPL managers and administrators. Plans for upgrading JPL's management and administrative systems over a six-year period, centered on the development of an integrated management and administrative database, are discussed.
Are divided attention tasks useful in the assessment and management of sport-related concussion?
Register-Mihalik, Johna K; Littleton, Ashley C; Guskiewicz, Kevin M
2013-12-01
This article is a systematic review of the literature on divided attention assessments that pair a cognitive task with a motor task (balance or gait) for use in concussion management. The systematic review drew from published papers listed in the PubMed, MEDLINE, EMBASE and CINAHL databases. The search identified 19 empirical research papers meeting the inclusion criteria. Study results were considered for the psychometric properties of the paradigms, the influence of divided attention on measures of cognition and postural control, and the comparison of divided attention task outcomes between individuals with concussion and healthy controls (all samples were aged 17 years or older). The review highlights that the reliability of the tasks under a divided attention paradigm ranges from low to high (ICC: 0.1-0.9); however, only 3 of the 19 articles included psychometric information. Response times are longer, gait strategies less efficient, and postural control deficits greater in concussed participants compared with healthy controls, both immediately and for some period following concussive injury, specifically under divided attention conditions. Dual task assessments were in some cases more reliable than single task assessments and may be better able to detect lingering effects following concussion. Few of the studies have been replicated or applied across various age groups. A key limitation of these studies is that many involve laboratory- and time-intensive measures. Future research is needed to refine a time- and cost-efficient divided attention assessment paradigm, and more work is needed in younger (pre-teen) populations, where the application may be of greatest utility.
Gragnano, Andrea; Miglioretti, Massimo; Frings-Dresen, Monique H W; de Boer, Angela G E M
2017-08-01
This study presented the construct of Work-Health Balance (WHB) and the design and validation of the Work-Health Balance Questionnaire (WHBq). More and more workers have a long-standing health problem or disability (LSHPD). The management of health needs and work demands is crucial for the quality of working life and work retention of these workers. However, no instrument exists measuring this process. The WHBq assesses key factors in the process of adjusting between health needs and work demands. We tested the reliability and validity of 38 items with cross-sectional data from a sample of 321 Italian workers (mean age = 45 ± 11 years) using exploratory factor analysis (EFA), Rasch analyses, and the correlations with other relevant variables. The instrument ultimately consisted of 17 items that reliably measured three factors: work-health incompatibility, health climate, and external support. These dimensions were associated with well-being in the workplace, dysfunctional behaviors at work, and general psychological health. A higher level on the WHB index was associated with lower levels of presenteeism, emotional exhaustion, workaholism, and psychological distress and with higher levels of job satisfaction and work engagement, supporting the construct validity of the instrument. The WHBq shows good psychometric characteristics and strong and theoretically consistent relationships with important and well-known variables. These results make the WHBq a promising tool in the study and management of health of employees, especially for the work continuation of employees returning to work with LSHPD. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Environment/Health/Safety (EHS): Databases
Hazard Documents Database; Biosafety Authorization System; CATS (Corrective Action Tracking System) (for findings 12/2005 to present); Chemical Management System; Electrical Safety; Ergonomics Database (for new ...); Learned / Best Practices; REMS - Radiation Exposure Monitoring System; SJHA Database - Subcontractor Job ...
Implementation of Risk Management in NASA's CEV Project- Ensuring Mission Success
NASA Astrophysics Data System (ADS)
Perera, Jeevan; Holsomback, Jerry D.
2005-12-01
Most project managers know that Risk Management (RM) is essential to good project management. At NASA, standards and procedures to manage risk through a tiered approach have been developed - from the global agency-wide requirements down to a program or project's implementation. The basic methodology for NASA's risk management strategy includes processes to identify, analyze, plan, track, control, communicate and document risks. The identification, characterization, mitigation plan, and mitigation responsibilities associated with specific risks are documented to help communicate, manage, and effectuate appropriate closure. This approach helps to ensure more consistent documentation and assessment and provides a means of archiving lessons learned for future identification or mitigation activities.A new risk database and management tool was developed by NASA in 2002 and since has been used successfully to communicate, document and manage a number of diverse risks for the International Space Station, Space Shuttle, and several other NASA projects and programs including at the Johnson Space Center. Organizations use this database application to effectively manage and track each risk and gain insight into impacts from other organization's viewpoint to develop integrated solutions. Schedule, cost, technical and safety issues are tracked in detail through this system.Risks are tagged within the system to ensure proper review, coordination and management at the necessary management level. The database is intended as a day-to- day tool for organizations to manage their risks and elevate those issues that need coordination from above. Each risk is assigned to a managing organization and a specific risk owner who generates mitigation plans as appropriate. In essence, the risk owner is responsible for shepherding the risk through closure. The individual that identifies a new risk does not necessarily get assigned as the risk owner. 
Whoever is in the best position to effectuate comprehensive closure is assigned as the risk owner. Each mitigation plan includes the specific tasks that will be conducted to decrease the likelihood of the risk occurring, lessen the severity of its consequences, or both. As each mitigation task is completed, the responsible managing organization records its completion in the risk database and then re-scores the risk in light of the task's results. By keeping scores updated, a managing organization's current top risks and overall risk posture can be readily identified, along with the status of any risk in the system. A number of metrics measure risk process trends from data contained in the database. This allows trend analysis to identify further improvements to the process and assists in the management of all risks. The metrics also scrutinize both the effectiveness of and compliance with risk management requirements. The risk database is an evolving tool and will be continuously improved with capabilities requested by the NASA project community. This paper presents the basic foundations of risk management, the elements necessary for effective risk management, and the capabilities of this new risk database, including how it is implemented to support NASA's risk management needs.
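The abstract above describes a re-scoring cycle: a risk owner completes mitigation tasks, records them, and re-scores the risk so that top risks stay visible. A minimal sketch of that cycle is below; the class, the likelihood-by-consequence scoring convention, and all names are assumptions for illustration, not NASA's actual tool.

```python
# Hypothetical sketch of the mitigation/re-scoring cycle described above.
# Assumed convention: risk score = likelihood (1-5) x consequence (1-5).

class Risk:
    def __init__(self, title, owner, likelihood, consequence):
        self.title = title
        self.owner = owner          # the person shepherding the risk to closure
        self.likelihood = likelihood
        self.consequence = consequence
        self.open_tasks = []        # outstanding mitigation tasks

    @property
    def score(self):
        return self.likelihood * self.consequence

    def complete_task(self, task, new_likelihood, new_consequence):
        # Record task completion, then re-score the risk considering its results.
        self.open_tasks.remove(task)
        self.likelihood = new_likelihood
        self.consequence = new_consequence

def top_risks(risks, n=3):
    # A manager's view: current top risks by score.
    return sorted(risks, key=lambda r: r.score, reverse=True)[:n]

r = Risk("Late avionics delivery", owner="J. Doe", likelihood=4, consequence=5)
r.open_tasks.append("qualify second supplier")
r.complete_task("qualify second supplier", new_likelihood=2, new_consequence=5)
print(r.score)  # 10
```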
Emergency and Disaster Information Service
NASA Astrophysics Data System (ADS)
Boszormenyi, Zsolt
2010-05-01
The Hungarian National Association of Radio Distress-Signalling and Infocommunications (RSOE) operates the Emergency and Disaster Information Service (EDIS) within the frame of its own website, with the objective of monitoring and documenting all events on Earth that may cause disaster or emergency. The service uses the speed and data spectrum of the internet to gather information, monitoring and processing data from several foreign organisations to obtain quick and verified information. The EDIS website, operated jointly by the General-Directorate of National Disaster Management (OKF) and RSOE in co-operation with the Crisis Management Centre of the Ministry of Foreign Affairs, provides useful information regarding emergency situations and their prevention. Extraordinary events happening in Hungary, Europe, and other areas of the world are monitored 24 hours a day. All events processed by RSOE EDIS are displayed in real time, for the sake of international compatibility, according to the CAP protocol on a secure website. To ensure transparency, all events are categorised separately in the RSS directory (e.g. earthquake, fire, flood, landslide, nuclear event, tornado, volcano). RSOE EDIS also contributes to the dissemination of the CAP protocol in Hungary. Beside the official information, nearly 900-1000 internet press publications are monitored with the help of special programs, and publications containing predefined keywords are processed. Although such "news" cannot be considered official and reliable information, critical information has often been learned first from the internet press. Incoming information is screened and stored in a central database, sorted by category. After processing, the information is sent immediately via e-mail (or another format) to the organisations and persons who have requested it (e.g. National Disaster Management, United Nations, etc.).
We aspire for the processed data to be validated and reliable in all cases, to avoid panic caused by false information. That is why we try to create and keep contact with all organisations that can provide validated information for the operation of RSOE EDIS. We publish all incoming data and information on our website to provide up-to-date information to citizens, together with useful background knowledge for them. A knowledge database contains the information that can help citizens in an emergency situation, and we intend to amend our published data with prevention material and the information most relevant to the population.
JPEG2000 and dissemination of cultural heritage over the Internet.
Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos
2004-03-01
By applying the latest technologies in image compression to manage the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record, and preserve cultural heritage, but also to promote its dissemination. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine JPEG2000 image compression with client-server socket connections and a client browser plug-in to provide an all-in-one package for remote browsing of JPEG2000-compressed image databases, suitable for the effective dissemination of cultural heritage.
Configuration management program plan for Hanford site systems engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kellie, C.L.
This plan establishes the integrated management program for the evolving technical baseline developed through the systems engineering process. This configuration management program aligns with the criteria identified in the DOE Standard, DOE-STD-1073-93. Included are specific requirements for control of the systems engineering RDD-100 database, and electronic data incorporated in the database that establishes the Hanford Site Technical Baseline.
Developing a Nursing Database System in Kenya
Riley, Patricia L; Vindigni, Stephen M; Arudo, John; Waudo, Agnes N; Kamenju, Andrew; Ngoya, Japheth; Oywer, Elizabeth O; Rakuom, Chris P; Salmon, Marla E; Kelley, Maureen; Rogers, Martha; St Louis, Michael E; Marum, Lawrence H
2007-01-01
Objective To describe the development, initial findings, and implications of a national nursing workforce database system in Kenya. Principal Findings Creating a national electronic nursing workforce database provides more reliable information on nurse demographics, migration patterns, and workforce capacity. Data analyses are most useful for human resources for health (HRH) planning when workforce capacity data can be linked to worksite staffing requirements. As a result of establishing this database, the Kenya Ministry of Health has improved capability to assess its nursing workforce and document important workforce trends, such as out-migration. Current data identify the United States as the leading recipient country of Kenyan nurses. The overwhelming majority of Kenyan nurses who elect to out-migrate are among Kenya's most qualified. Conclusions The Kenya nursing database is a first step toward facilitating evidence-based decision making in HRH. This database is unique to developing countries in sub-Saharan Africa. Establishing an electronic workforce database requires long-term investment and sustained support by national and global stakeholders. PMID:17489921
Pritoni, Marco; Ford, Rebecca; Karlin, Beth; Sanguinetti, Angela
2018-02-01
Policymakers worldwide are currently discussing whether to include home energy management (HEM) products in their portfolio of technologies to reduce carbon emissions and improve grid reliability. However, very little data is available about these products. Here we present the results of an extensive review covering 308 HEM products available on the US market in 2015-2016. We gathered these data from publicly available sources such as vendor websites, online marketplaces, and other vendor documents. A coding guide was developed iteratively during data collection and used to classify the devices. Each product was coded on 96 distinct attributes, grouped into 11 categories: Identifying information, Product components, Hardware, Communication, Software, Information - feedback, Information - feedforward, Control, Utility interaction, Additional benefits, and Usability. The codes describe product features and functionalities, user interaction, and interoperability with other devices. A mix of binary attributes and more descriptive codes allows the data to be sorted and grouped without losing important qualitative information. The information is stored in a large spreadsheet included with this article, along with an explanatory coding guide. This dataset is analyzed and described in a research article entitled "Categories and functionality of smart home technology for energy management" (Ford et al., 2017) [1].
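The coding scheme described above, binary and descriptive attributes grouped into categories, lends itself to simple filtering and grouping. The sketch below illustrates that idea; the product names, categories, and attributes are invented stand-ins, not entries from the actual dataset.

```python
# Hypothetical sketch of the attribute-coding scheme described above: each
# product carries binary and descriptive codes grouped by category, so the
# collection can be filtered without losing qualitative detail.

products = [
    {"name": "ThermoHub",   # invented product names
     "Hardware": {"has_display": True},
     "Communication": {"protocols": ["Wi-Fi", "Zigbee"]},  # descriptive code
     "Control": {"remote_control": True}},                 # binary code
    {"name": "PlugSense",
     "Hardware": {"has_display": False},
     "Communication": {"protocols": ["Wi-Fi"]},
     "Control": {"remote_control": False}},
]

def with_attribute(items, category, attribute, value=True):
    # Filter products on a coded attribute within one category.
    return [p["name"] for p in items if p[category].get(attribute) == value]

print(with_attribute(products, "Control", "remote_control"))  # ['ThermoHub']
```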
TRENDS: The aeronautical post-test database management system
NASA Technical Reports Server (NTRS)
Bjorkman, W. S.; Bondi, M. J.
1990-01-01
TRENDS, an engineering-test database operating system developed by NASA to support rotorcraft flight tests, is described. Capabilities and characteristics of the system are presented, with examples of its use in recalling and analyzing rotorcraft flight-test data from a TRENDS database. The importance of system user-friendliness in gaining users' acceptance is stressed, as is the importance of integrating supporting narrative data with numerical data in engineering-test databases. Considerations relevant to the creation and maintenance of flight-test databases are discussed, and TRENDS' solutions to database management problems are described. Requirements, constraints, and other considerations that led to the system's configuration are discussed, and some of the lessons learned during TRENDS' development are presented. Potential applications of TRENDS to a wide range of aeronautical and other engineering tests are identified.
DOE technology information management system database study report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widing, M.A.; Blodgett, D.W.; Braun, M.D.
1994-11-01
To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Browne, S.V.; Green, S.C.; Moore, K.
1994-04-01
The Netlib repository, maintained by the University of Tennessee and Oak Ridge National Laboratory, contains freely available software, documents, and databases of interest to the numerical, scientific computing, and other communities. This report includes both the Netlib User's Guide and the Netlib System Manager's Guide, and contains information about Netlib's databases, interfaces, and system implementation. The Netlib repository's databases include the Performance Database, the Conferences Database, and the NA-NET mail forwarding and Whitepages Databases. A variety of user interfaces enable users to access the Netlib repository in the manner most convenient and compatible with their networking capabilities. These interfaces include the Netlib email interface, the Xnetlib X Windows client, the netlibget command-line TCP/IP client, anonymous FTP, anonymous RCP, and gopher.
Concierge: Personal Database Software for Managing Digital Research Resources
Sakai, Hiroyuki; Aoyama, Toshihiro; Yamaji, Kazutsuna; Usui, Shiro
2007-01-01
This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp). PMID:18974800
Assessing reliability and validity measures in managed care studies.
Montoya, Isaac D
2003-01-01
To review the reliability and validity literature and develop an understanding of these concepts as applied to managed care studies. Reliability is a test of how well an instrument measures the same input at varying times and under varying conditions. Validity is a test of how accurately an instrument measures what one believes is being measured. A review of reliability and validity instructional material was conducted. Studies of managed care practices and programs abound. However, many of these studies utilize measurement instruments that were developed for other purposes or for a population other than the one being sampled. In other cases, instruments have been developed without any testing of the instrument's performance. The lack of reliability and validity information may limit the value of these studies. This is particularly true when data are collected for one purpose and used for another. The usefulness of studies without reliability and validity measures is questionable, especially in cases where the literature contradicts itself.
Ziatabar Ahmadi, Seyyede Zohreh; Jalaie, Shohreh; Ashayeri, Hassan
2015-09-01
Theory of mind (ToM), or mindreading, is an aspect of social cognition that evaluates mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. We searched the MEDLINE (PubMed interface), Web of Science, Science Direct, PsycINFO, and evidence-based medicine (The Cochrane Library) databases from 1990 to June 2015. The search strategy was the Latin transcription of 'Theory of Mind' AND test AND children. We also manually studied the reference lists of all finally included articles and carried out a search of their references. Inclusion criteria were: valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children. Exclusion criteria were: studies that only used ToM tests and single tasks (false-belief tasks) for ToM assessment and/or had no description of the structure, validity, or reliability of their tests. Methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). In the primary search, we found 1237 articles across all databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. There were few valid, reliable, and comprehensive ToM tests for normal preschool children. However, we had limitations concerning the included articles. The included ToM tests differed in populations, tasks, modes of presentation, scoring, modes of response, times, and other variables, and they had various validities and reliabilities. Therefore, it is recommended that researchers and clinicians select ToM tests according to their psychometric characteristics, validity, and reliability.
Ziatabar Ahmadi, Seyyede Zohreh; Jalaie, Shohreh; Ashayeri, Hassan
2015-01-01
Objective: Theory of mind (ToM), or mindreading, is an aspect of social cognition that evaluates mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. Method: We searched the MEDLINE (PubMed interface), Web of Science, Science Direct, PsycINFO, and evidence-based medicine (The Cochrane Library) databases from 1990 to June 2015. The search strategy was the Latin transcription of ‘Theory of Mind’ AND test AND children. We also manually studied the reference lists of all finally included articles and carried out a search of their references. Inclusion criteria were: valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children. Exclusion criteria were: studies that only used ToM tests and single tasks (false-belief tasks) for ToM assessment and/or had no description of the structure, validity, or reliability of their tests. Methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). Result: In the primary search, we found 1237 articles across all databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. Conclusion: There were few valid, reliable, and comprehensive ToM tests for normal preschool children. However, we had limitations concerning the included articles. The included ToM tests differed in populations, tasks, modes of presentation, scoring, modes of response, times, and other variables, and they had various validities and reliabilities.
Therefore, it is recommended that the researchers and clinicians select the ToM tests according to their psychometric characteristics, validity and reliability. PMID:27006666
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-03
... DEPARTMENT OF COMMERCE Minority Business Development Agency Proposed Information Collection; Comment Request; Online Customer Relationship Management (CRM)/Performance Databases, the Online Phoenix... of program goals via the Online CRM/Performance Databases. The data collected through the Online CRM...
Implementing a Microcomputer Database Management System.
ERIC Educational Resources Information Center
Manock, John J.; Crater, K. Lynne
1985-01-01
Current issues in selecting, structuring, and implementing microcomputer database management systems in research administration offices are discussed, and their capabilities are illustrated with the system used by the University of North Carolina at Wilmington. Trends in microcomputer technology and their likely impact on research administration…
Clinical Databases for Chest Physicians.
Courtwright, Andrew M; Gabriel, Peter E
2018-04-01
A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health conditions or exposures. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Viegas, F.; Malon, D.; Cranshaw, J.; Dimitrov, G.; Nowak, M.; Nairz, A.; Goossens, L.; Gallas, E.; Gamboa, C.; Wong, A.; Vinek, E.
2010-04-01
The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.
PACSY, a relational database management system for protein structure and chemical shift analysis
Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo
2012-01-01
PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636
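The PACSY abstract above describes relational tables linked by key identification numbers, with queries that combine information from different source databases. A minimal sketch of that pattern is below, using Python's built-in SQLite rather than the MySQL/PostgreSQL servers PACSY supports; the table names, columns, and values are invented for illustration and are not PACSY's actual schema.

```python
# Illustrative sketch (not PACSY's schema): two tables linked by a shared
# key identification number, queried together as an RDBMS server would be.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE coordinates     (key_id INTEGER, atom TEXT, x REAL, y REAL, z REAL);
CREATE TABLE chemical_shifts (key_id INTEGER, atom TEXT, shift_ppm REAL);
""")
cur.execute("INSERT INTO coordinates VALUES (1, 'CA', 10.1, 4.2, -3.3)")
cur.execute("INSERT INTO chemical_shifts VALUES (1, 'CA', 56.7)")

# Combine information from the two sources through the shared key.
row = cur.execute("""
    SELECT c.atom, c.x, s.shift_ppm
    FROM coordinates c
    JOIN chemical_shifts s ON c.key_id = s.key_id AND c.atom = s.atom
""").fetchone()
print(row)  # ('CA', 10.1, 56.7)
```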
A UML Profile for Developing Databases that Conform to the Third Manifesto
NASA Astrophysics Data System (ADS)
Eessaar, Erki
The Third Manifesto (TTM) presents the principles of a relational database language that is free of the deficiencies and ambiguities of SQL. There are database management systems created according to TTM, and developers need tools that support the development of databases using them. UML is a widely used visual modeling language. It provides a built-in extension mechanism that makes it possible to extend UML by creating profiles. In this paper, we introduce a UML profile for designing databases that correspond to the rules of TTM. We created the first version of the profile by translating existing profiles for SQL database design; after that, we extended and improved it. We implemented the profile using the UML CASE system StarUML™ and present an example of using the new profile. In addition, we describe problems that occurred during the profile's development.
Heterogeneous distributed databases: A case study
NASA Technical Reports Server (NTRS)
Stewart, Tracy R.; Mukkamala, Ravi
1991-01-01
Alternatives are reviewed for accessing distributed heterogeneous databases, and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy, and each supports a different customer base. ICMS tracks U.S. Navy ships and major systems (anti-submarine, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.
NASA Astrophysics Data System (ADS)
Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.
2009-12-01
Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in ColdFusion middleware, to a Drupal 6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal 6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal 6-powered front end; 2) a standard SQL port (used to provide a MapServer interface to the metadata and data); and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive the semantically enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public-domain CMS rather than proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.
Bridge element deterioration rates.
DOT National Transportation Integrated Search
2008-10-01
This report describes the development of bridge element deterioration rates from the NYSDOT bridge inspection database using Markov chain and Weibull-based approaches. It is observed that the Weibull-based approach is more reliable for developing b...
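The Markov-chain approach named in the abstract above models a bridge element's condition as a state whose distribution evolves year by year under a transition matrix. A minimal sketch is below; the three-state model and all transition probabilities are assumed values for illustration, not NYSDOT's estimated rates.

```python
# Minimal sketch of a Markov-chain deterioration model (assumed values):
# condition states 1 (good) to 3 (poor); P[i][j] is the probability of
# moving from state i to state j in one inspection year.

P = [
    [0.90, 0.08, 0.02],   # good elements mostly stay good
    [0.00, 0.85, 0.15],   # deterioration is one-way: no repair modeled
    [0.00, 0.00, 1.00],   # state 3 is absorbing
]

def step(dist, P):
    # One year of deterioration: new_j = sum_i dist_i * P[i][j].
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]    # all elements start in good condition
for _ in range(10):       # project the condition distribution 10 years out
    dist = step(dist, P)
print([round(p, 3) for p in dist])
```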
[Relational database for urinary stone ambulatory consultation. Assessment of initial outcomes].
Sáenz Medina, J; Páez Borda, A; Crespo Martinez, L; Gómez Dos Santos, V; Barrado, C; Durán Poveda, M
2010-05-01
To create a relational database for monitoring lithiasic patients. We describe the architectural details and the initial results of the statistical analysis. Microsoft Access 2002 was used as the template. Four different tables were constructed to gather demographic data (table 1), clinical and laboratory findings (table 2), stone features (table 3), and therapeutic approach (table 4). For a reliability analysis of the database, the number of correctly stored data items was gathered. To evaluate the performance of the database, a prospective analysis was conducted, from May 2004 to August 2009, on 171 stone-free patients after treatment (ESWL, surgery, or medical treatment) from a total of 511 patients stored in the database. Lithiasic status (stone-free or stone relapse) was used as the primary end point, while demographic factors (age, gender), lithiasic history, upper urinary tract alterations, and characteristics of the stone (side, location, composition, and size) were considered as predictive factors. A univariate analysis was conducted initially by the chi-square test and supplemented by Kaplan-Meier estimates of time to stone recurrence. A multiple Cox proportional hazards regression model was generated to jointly assess the prognostic value of the demographic factors and the predictive value of stone characteristics. For the reliability analysis, 22,084 data items were available, corresponding to 702 consultations on 511 patients. Analysis of the data showed a recurrence rate of 85.4% (146/171; median time to recurrence 608 days, range 70-1758). In the univariate and multivariate analyses, none of the factors under consideration had a significant effect on the recurrence rate (p = ns). The relational database is useful for monitoring patients with urolithiasis. It allows easy control and updating, as well as data storage for later use. The analysis conducted for its evaluation showed no influence of demographic factors or stone features on stone recurrence.
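The abstract above analyzes time to stone recurrence with Kaplan-Meier estimates. A toy Kaplan-Meier estimator is sketched below to show what such an estimate computes; the follow-up times and event flags are invented, not the study's data, and the implementation assumes distinct event times (no ties).

```python
# Toy Kaplan-Meier estimator (assumes distinct event times; data invented).

def kaplan_meier(times, events):
    # times: follow-up in days; events: True = recurrence, False = censored.
    at_risk = len(times)
    surv, curve = 1.0, []
    for t, observed in sorted(zip(times, events)):
        if observed:
            surv *= (at_risk - 1) / at_risk   # survival drops at each event
            curve.append((t, surv))
        at_risk -= 1                          # subject leaves the risk set
    return curve

# Five patients: recurrences at days 70, 200, 608; two censored follow-ups.
curve = kaplan_meier([70, 200, 350, 608, 900],
                     [True, True, False, True, False])
print(curve)
```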
Region 7 Laboratory Information Management System
This is metadata documentation for the Region 7 Laboratory Information Management System (R7LIMS), which maintains records for the Regional Laboratory. Any laboratory analytical work performed is stored in this system, which replaces LIMS-Lite and, before that, LAST. The EPA and its contractors may use this database. The Office of Policy & Management (PLMG) Division at EPA Region 7 is the primary managing entity; contractors can access this database, but it is not accessible to the public.
Creative Classroom Assignment Through Database Management.
ERIC Educational Resources Information Center
Shah, Vivek; Bryant, Milton
1987-01-01
The Faculty Scheduling System (FSS), a database management system designed to give administrators the ability to schedule faculty in a fast and efficient manner is described. The FSS, developed using dBASE III, requires an IBM compatible microcomputer with a minimum of 256K memory. (MLW)
A Database Management System for Interlibrary Loan.
ERIC Educational Resources Information Center
Chang, Amy
1990-01-01
Discusses the increasing complexity of dealing with interlibrary loan requests and describes a database management system for interlibrary loans used at Texas Tech University. System functions are described, including file control, records maintenance, and report generation, and the impact on staff productivity is discussed. (CLB)
DOT National Transportation Integrated Search
2006-05-01
Specific objectives of the Peer Exchange were to discuss and exchange information about databases and other software used to support the program cycles managed by state transportation research offices. Elements of the program cycle include: ...
78 FR 23685 - Airworthiness Directives; The Boeing Company
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-22
... installing new operational software for the electrical load management system and configuration database. The..., installing a new electrical power control panel, and installing new operational software for the electrical load management system and configuration database. Since the proposed AD was issued, we have received...
Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Managements
NASA Technical Reports Server (NTRS)
Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri
2006-01-01
NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards, such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.
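The core idea in the abstract above, normalizing heterogeneous documents into a common structured form and then keyword-searching across both context (titles) and content, can be sketched in a few lines. This is an illustrative toy, not NETMARK's actual schema or Oracle-based implementation; the table and field names are assumptions.

```python
import sqlite3

# Toy sketch of the NETMARK idea: documents of varying shape are
# normalized into one (title, content) schema and keyword-searched.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, title TEXT, content TEXT)")

raw_documents = [
    {"title": "Audit report", "content": "quarterly audit of flight records"},
    {"title": "Lab notes", "content": "spectrometer calibration procedure"},
]
for doc in raw_documents:
    conn.execute("INSERT INTO docs (title, content) VALUES (?, ?)",
                 (doc["title"], doc["content"]))

def keyword_search(conn, term):
    # Search both context (title) and content, as the abstract describes.
    cur = conn.execute(
        "SELECT title FROM docs WHERE title LIKE ? OR content LIKE ?",
        (f"%{term}%", f"%{term}%"))
    return [row[0] for row in cur]

print(keyword_search(conn, "audit"))  # -> ['Audit report']
```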
Implementation of an open adoption research data management system for clinical studies.
Müller, Jan; Heiss, Kirsten Ingmar; Oberhoffer, Renate
2017-07-06
Research institutions need to manage multiple studies with individual data sets, processing rules and different permissions. So far, there is no standard technology that provides an easy-to-use environment to create databases and user interfaces for clinical trials or research studies. Therefore, various software solutions are in use: from custom software explicitly designed for a specific study, to cost-intensive commercial Clinical Trial Management Systems (CTMS), to very basic approaches with self-designed Microsoft® databases. The technology applied to conduct these studies varies tremendously from study to study, making it difficult to evaluate data across studies (meta-analysis) and to keep a defined level of quality in database design, data processing, display and export. Furthermore, the systems used to collect study data are often operated redundantly to systems used in patient care. As a consequence, data collection in studies is inefficient, and data quality may suffer from unsynchronized datasets, non-normalized database scenarios and manually executed data transfers. With OpenCampus Research we implemented an open adoption software (OAS) solution on an open source basis, which provides a standard environment for state-of-the-art research database management at low cost.
Everett, Tobias C; Ng, Elaine; Power, Daniel; Marsh, Christopher; Tolchard, Stephen; Shadrina, Anna; Bould, Matthew D
2013-12-01
The use of simulation-based assessments for high-stakes physician examinations remains controversial. The Managing Emergencies in Paediatric Anaesthesia course uses simulation to teach evidence-based management of anesthesia crises to trainee anesthetists in the United Kingdom (UK) and Canada. In this study, we investigated the feasibility and reliability of custom-designed scenario-specific performance checklists and a global rating scale (GRS) assessing readiness for independent practice. After research ethics board approval, subjects were videoed managing simulated pediatric anesthesia crises in a single Canadian teaching hospital. Each subject was randomized to two of six different scenarios. All 60 scenarios were subsequently rated by four blinded raters (two in the UK, two in Canada) using the checklists and GRS. The actual and predicted reliability of the tools was calculated for different numbers of raters using the intraclass correlation coefficient (ICC) and the Spearman-Brown prophecy formula. Average measures ICCs ranged from 'substantial' to 'near perfect' (P ≤ 0.001). The reliability of the checklists and the GRS was similar. Single measures ICCs showed more variability than average measures ICCs. At least two raters would be required to achieve acceptable reliability. We have established the reliability of a GRS to assess the management of simulated crisis scenarios in pediatric anesthesia, and this tool is feasible within the setting of a research study. The global rating scale allows raters to make a judgement regarding a participant's readiness for independent practice. These tools may be used in future research examining simulation-based assessment. © 2013 John Wiley & Sons Ltd.
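The Spearman-Brown prophecy formula mentioned in the abstract above predicts the reliability obtained by averaging over k raters from the single-rater reliability r: R_k = k·r / (1 + (k − 1)·r). A minimal sketch (the 0.6 input below is an arbitrary illustrative value, not a figure from the study):

```python
def spearman_brown(r_single, k):
    """Predicted reliability when scores are averaged over k raters,
    given single-rater reliability r_single (e.g. a single-measures ICC)."""
    return k * r_single / (1 + (k - 1) * r_single)

# An illustrative single-rater reliability of 0.6 rises with two raters:
print(round(spearman_brown(0.6, 2), 3))  # -> 0.75
```

This is how one determines, as the authors did, how many raters are needed to reach an acceptable reliability threshold: increase k until the predicted value clears it.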
NASA Technical Reports Server (NTRS)
Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.
2005-01-01
This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single-engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, and unnecessary shutdown fraction), propellant-specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold-down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single-engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
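Several of the drivers listed above (engine count, single-engine reliability, engine-out design, catastrophic fraction) interact in a way a simple parametric sketch can make concrete. The model below is an illustrative simplification under independence assumptions, not the authors' actual model: a stage of n engines succeeds if all engines run, or, with engine-out capability, if exactly one engine shuts down benignly.

```python
from math import comb

def stage_reliability(n, r_engine, engine_out=False, catastrophic_fraction=0.0):
    """Illustrative stage-level reliability for n independent engines.

    With engine-out capability the stage tolerates one benign shutdown;
    an engine failure is catastrophic with the given fraction.
    (A simplified sketch, not the model described in the paper.)
    """
    all_good = r_engine ** n
    if not engine_out:
        return all_good
    # Exactly one engine fails benignly; the remaining n-1 run to completion.
    one_out = (comb(n, 1) * (1 - r_engine) * (1 - catastrophic_fraction)
               * r_engine ** (n - 1))
    return all_good + one_out

# More engines reduce stage reliability unless engine-out design compensates:
print(round(stage_reliability(1, 0.99), 4))                    # -> 0.99
print(round(stage_reliability(4, 0.99), 4))                    # -> 0.9606
print(round(stage_reliability(4, 0.99, engine_out=True), 4))   # -> 0.9994
```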
Configuration management program plan for Hanford site systems engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, A.G.
This plan establishes the integrated configuration management program for the evolving technical baseline developed through the systems engineering process. This configuration management program aligns with the criteria identified in the DOE Standard, DOE-STD-1073-93. Included are specific requirements for control of the systems engineering RDD-100 database, and electronic data incorporated in the database that establishes the Hanford site technical baseline.
NASA Astrophysics Data System (ADS)
Launch vehicle propulsion system reliability considerations during the design and verification processes are discussed. The tools available for predicting and minimizing anomalies or failure modes are described and objectives for validating advanced launch system propulsion reliability are listed. Methods for ensuring vehicle/propulsion system interface reliability are examined and improvements in the propulsion system development process are suggested to improve reliability in launch operations. Also, possible approaches to streamline the specification and procurement process are given. It is suggested that government and industry should define reliability program requirements and manage production and operations activities in a manner that provides control over reliability drivers. Also, it is recommended that sufficient funds should be invested in design, development, test, and evaluation processes to ensure that reliability is not inappropriately subordinated to other management considerations.
NASA Astrophysics Data System (ADS)
Sipos, Roland; Govi, Giacomo; Franzoni, Giovanni; Di Guida, Salvatore; Pfeiffer, Andreas
2017-10-01
The CMS experiment at the CERN LHC has a dedicated infrastructure to handle the alignment and calibration data. This infrastructure is composed of several services, which take on various data management tasks required for the consumption of the non-event data (also called condition data) in the experiment's activities. The criticality of these tasks imposes tight requirements on the availability and reliability of the services executing them. In this scope, a comprehensive monitoring and alarm-generating system has been developed. The system has been implemented based on Nagios, the open source industry standard for monitoring and alerting services, and monitors the database back-end, the hosting nodes, and key heart-beat functionalities for all the services involved. This paper describes the design, implementation, and operational experience with the monitoring system developed and deployed at CMS in 2016.
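Nagios checks like the heart-beat monitoring described above follow a simple convention: a plugin reports a status of 0 (OK), 1 (WARNING), 2 (CRITICAL), or 3 (UNKNOWN) plus a one-line message. The sketch below is a hypothetical heart-beat classifier in that style; the thresholds and names are assumptions, not CMS's actual checks.

```python
import time

# Nagios plugin status convention: 0 = OK, 1 = WARNING, 2 = CRITICAL.
OK, WARNING, CRITICAL = 0, 1, 2

def check_heartbeat(last_beat_ts, warn_after=60, crit_after=300, now=None):
    """Classify a service heart-beat by its age in seconds.
    (Illustrative sketch; thresholds and field names are assumptions.)"""
    now = time.time() if now is None else now
    age = now - last_beat_ts
    if age > crit_after:
        return CRITICAL, f"CRITICAL: last heart-beat {age:.0f}s ago"
    if age > warn_after:
        return WARNING, f"WARNING: last heart-beat {age:.0f}s ago"
    return OK, f"OK: last heart-beat {age:.0f}s ago"

status, message = check_heartbeat(last_beat_ts=1000, now=1030)
print(message)  # -> OK: last heart-beat 30s ago
```

A real plugin would end with `sys.exit(status)` so Nagios can read the result from the process exit code.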
Thriving on Chaos: The Development of a Surgical Information System
Olund, Steven R.
1988-01-01
Hospitals present unique challenges to the computer industry, generating a greater quantity and variety of data than nearly any other enterprise. This is complicated by the fact that a hospital is not one homogeneous organization, but a bundle of semi-independent groups with unique data requirements. Therefore, hospital information systems must be fast, flexible, reliable, easy to use and maintain, and cost-effective. The Surgical Information System at Rush Presbyterian-St. Luke's Medical Center, Chicago, is such a system. It uses a Sequent Balance 21000 multi-processor superminicomputer, running industry standard tools such as the Unix operating system, a fourth-generation programming language (4GL), and Structured Query Language (SQL) relational database management software. This treatise illustrates a comprehensive yet generic approach which can be applied to almost any clinical situation where access to patient data is required by a variety of medical professionals.
Protection of electronic health records (EHRs) in cloud.
Alabdulatif, Abdulatif; Khalil, Ibrahim; Mai, Vu
2013-01-01
EHR technology has come into widespread use and has attracted attention in healthcare institutions as well as in research. Cloud services are used to build efficient EHR systems and obtain the greatest benefits of EHR implementation. Many issues relating to building an ideal EHR system in the cloud, especially the tradeoff between flexibility and security, have recently surfaced. The privacy of patient records in cloud platforms is still a point of contention. In this research, we improve the management of access control by restricting participants' access through distinct encrypted parameters for each participant in the cloud-based database. We also implement and improve an existing secure index search algorithm to enhance the efficiency of information control and flow through a cloud-based EHR system. Finally, we contribute to the design of reliable, flexible and secure access control, enabling quick access to EHR information.
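The per-participant access idea in the abstract above, each participant holding a distinct secret from which record-access credentials are derived, can be sketched with standard-library primitives. This is a toy under assumed names ("dr_lee", "record-42"), not the authors' scheme; a real EHR system would add encryption of the records themselves, key management, and auditing.

```python
import hashlib
import hmac
import secrets

# Each participant holds a distinct secret key (the "distinct encrypted
# parameter" idea, simplified). Keys and names here are illustrative.
participant_keys = {"dr_lee": secrets.token_bytes(32),
                    "nurse_kim": secrets.token_bytes(32)}

def access_token(participant, record_id):
    """Derive a record-access token from the participant's own secret."""
    key = participant_keys[participant]
    return hmac.new(key, record_id.encode(), hashlib.sha256).hexdigest()

def verify(participant, record_id, token):
    """A token is only valid for the participant whose key produced it."""
    expected = access_token(participant, record_id)
    return hmac.compare_digest(expected, token)

token = access_token("dr_lee", "record-42")
print(verify("dr_lee", "record-42", token))    # -> True
print(verify("nurse_kim", "record-42", token)) # -> False
```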
NASA Technical Reports Server (NTRS)
Mckendry, M. S.
1985-01-01
The notion of 'atomic actions' has been considered in recent work on data integrity and reliability. It has been found that the standard database operations of 'read' and 'write' carry with them severe performance limitations. For this reason, systems are now being designed in which actions operate on 'objects' through operations with more-or-less arbitrary semantics. An object (i.e., an instance of an abstract data type) comprises data, a set of operations (procedures) to manipulate the data, and a set of invariants. An 'action' is a unit of work. It appears to be primitive to its surrounding environment, and 'atomic' to other actions. Attention is given to the conventional model of nested actions, ordering requirements, the maximum possible visibility (full visibility) for items which must be controlled by ordering constraints, item management paradigms, and requirements for blocking mechanisms which provide the required visibility.
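The abstract above describes actions as atomic units of work over objects (data plus operations plus invariants). A minimal sketch of that idea follows, using a bank-account object whose invariant is a non-negative balance; the all-or-nothing rollback and coarse per-object locking are illustrative simplifications, not the paper's actual protocol.

```python
import threading

class Account:
    """An 'object': data, operations, and an invariant (balance >= 0)."""
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

    def withdraw(self, amount):
        if self.balance - amount < 0:
            raise ValueError("invariant violated: negative balance")
        self.balance -= amount

    def deposit(self, amount):
        self.balance += amount

def run_action(objects, operations):
    """A unit of work: lock every object touched, apply the operations,
    and restore prior state if any operation fails (all-or-nothing).
    Illustrative only; real systems use finer-grained protocols."""
    snapshot = [(obj, obj.balance) for obj in objects]
    for obj in objects:
        obj.lock.acquire()
    try:
        for op, args in operations:
            op(*args)
        return True
    except ValueError:
        for obj, old in snapshot:
            obj.balance = old   # abort: roll back to the snapshot
        return False
    finally:
        for obj in objects:
            obj.lock.release()

a, b = Account(100), Account(0)
# Transfer 150 from a to b: the withdraw fails, so neither balance changes.
ok = run_action([a, b], [(a.withdraw, (150,)), (b.deposit, (150,))])
print(ok, a.balance, b.balance)  # -> False 100 0
```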
PylotDB - A Database Management, Graphing, and Analysis Tool Written in Python
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnette, Daniel W.
2012-01-04
PylotDB, written completely in Python, provides a user interface (UI) for interacting with, analyzing, graphing data from, and managing open source databases such as MySQL. The UI spares the user from needing in-depth knowledge of the database application programming interface (API). PylotDB allows the user to generate various kinds of plots from user-selected data; generate statistical information on text as well as numerical fields; back up and restore databases; compare database tables across different databases as well as across different servers; extract information from any field to create new fields; generate, edit, and delete databases, tables, and fields; write CSV data to, or read it from, a table; and perform similar operations. Since much of the database information is brought under the control of the Python language, PylotDB is not intended for huge databases, for which MySQL and Oracle, for example, are better suited. PylotDB is better suited to the smaller databases typically needed in a small research group. PylotDB can also be used as a learning tool for database applications in general.
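One of the features listed above, generating statistical information on a numerical field of a user-selected table, reduces to a small routine. The sketch below uses SQLite rather than MySQL to stay self-contained; the table and column names are invented for illustration and are not from PylotDB.

```python
import sqlite3
import statistics

# Illustrative data standing in for a user's database table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (node TEXT, runtime REAL)")
conn.executemany("INSERT INTO runs VALUES (?, ?)",
                 [("n1", 12.0), ("n2", 15.0), ("n3", 9.0)])

def field_stats(conn, table, column):
    """Summary statistics for one numeric field of one table."""
    values = [row[0] for row in conn.execute(f"SELECT {column} FROM {table}")]
    return {"count": len(values),
            "mean": statistics.mean(values),
            "stdev": statistics.stdev(values) if len(values) > 1 else 0.0}

print(field_stats(conn, "runs", "runtime"))
```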
Xenbase, the Xenopus model organism database; new virtualized system, data types and genomes
Karpinka, J. Brad; Fortriede, Joshua D.; Burns, Kevin A.; James-Zorn, Christina; Ponferrada, Virgilio G.; Lee, Jacqueline; Karimi, Kamran; Zorn, Aaron M.; Vize, Peter D.
2015-01-01
Xenbase (http://www.xenbase.org), the Xenopus frog model organism database, integrates a wide variety of data from this biomedical model genus. Two closely related species are represented: the allotetraploid Xenopus laevis, which is widely used for microinjection and tissue explant-based protocols, and the diploid Xenopus tropicalis, which is used for genetics and gene targeting. The two species are extremely similar, and protocols, reagents and results from each species are often interchangeable. Xenbase imports, indexes, curates and manages data from both species; all of these data are mapped via unique IDs and can be queried in either a species-specific or species-agnostic manner. All our services have now migrated to a private cloud to achieve better performance and reliability. We have added new content, including full support for morpholino reagents, which are used to inhibit mRNA translation or splicing and the binding of regulatory microRNAs. New genomes assembled by the JGI for both species are displayed in GBrowse and are also available for searches using BLAST. Researchers can easily navigate from genome content to gene page reports, literature, experimental reagents and many other features using hyperlinks. Xenbase has also greatly expanded image content for figures published in papers describing Xenopus research via PubMed Central. PMID:25313157
Design and Implementation of a Metadata-rich File System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ames, S; Gokhale, M B; Maltzahn, C
2010-01-19
Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
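The graph data model described above, files with user-defined attributes connected by typed relationships, all first-class, can be sketched in plain data structures. The file names, attributes, and relationship types below are invented for illustration; QFS itself implements this inside the file system with the Quasar query language, not in Python.

```python
# Files carry user-defined attributes (first-class metadata).
files = {
    "run1.dat": {"experiment": "A", "temperature": 300},
    "run2.dat": {"experiment": "A", "temperature": 350},
    "notes.txt": {"experiment": "B"},
}
# Directed, typed relationships between files: the edges of the graph.
relationships = [("notes.txt", "describes", "run1.dat"),
                 ("notes.txt", "describes", "run2.dat")]

def files_with(attr, value):
    """Attribute query: all files whose metadata matches."""
    return sorted(name for name, attrs in files.items()
                  if attrs.get(attr) == value)

def related(source, rel_type):
    """Graph traversal: follow typed edges out of one file."""
    return sorted(dst for src, rel, dst in relationships
                  if src == source and rel == rel_type)

print(files_with("experiment", "A"))       # -> ['run1.dat', 'run2.dat']
print(related("notes.txt", "describes"))   # -> ['run1.dat', 'run2.dat']
```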
Bogialli, Sara; Bortolini, Claudio; Di Gangi, Iole Maria; Di Gregorio, Federica Nigro; Lucentini, Luca; Favaro, Gabriella; Pastore, Paolo
2017-08-01
Comprehensive risk management of human exposure to cyanotoxins, whose production is essentially unpredictable, is limited by the availability of reliable analytical tools for monitoring as many toxic algal metabolites as possible. Two analytical approaches based on an LC-QTOF system for target analysis and suspect screening of cyanotoxins in freshwater are presented. A database of 369 compounds belonging to cyanobacterial metabolites was developed and used for a retrospective data analysis based on high-resolution mass spectrometry (HRMS). HRMS fragmentation of the suspect cyanotoxin precursor ions was subsequently performed to correctly identify the specific variants. Alternatively, an automatic tandem HRMS analysis tailored for cyanotoxins was performed in a single chromatographic run, using the developed database as a preferred precursor-ion list. Twenty-five extracts of surface and drinking waters contaminated by cyanobacteria were processed. The identification of seven uncommon microcystins (M(O)R, MC-FR, MSer7-YR, D-Asp3-MSer7-LR, MSer7-LR, dmAdda-LR and dmAdda-YR) and six anabaenopeptins (A, B, F, MM850, MM864, oscillamide Y) is reported. Several isobaric variants, fully separated by chromatography, were pointed out. The developed methods are proposed for use by environmental and health agencies to strengthen surveillance monitoring of cyanotoxins in water. Copyright © 2017 Elsevier B.V. All rights reserved.
Droit, Arnaud; Hunter, Joanna M; Rouleau, Michèle; Ethier, Chantal; Picard-Cloutier, Aude; Bourgais, David; Poirier, Guy G
2007-01-01
Background: In the "post-genome" era, mass spectrometry (MS) has become an important method for the analysis of proteins, and the rapid advancement of this technique, in combination with other proteomics methods, results in an increasing amount of proteome data. This data must be archived and analysed using specialized bioinformatics tools. Description: We herein describe the "PARPs database," a data analysis and management pipeline for liquid chromatography tandem mass spectrometry (LC-MS/MS) proteomics. The PARPs database is a web-based tool whose features include experiment annotation, protein database searching, protein sequence management, and data-mining of the peptides and proteins identified. Conclusion: Using this pipeline, we have successfully identified several interactions of biological significance between PARP-1 and other proteins, namely RFC-1, 2, 3, 4 and 5. PMID:18093328
Architecture for biomedical multimedia information delivery on the World Wide Web
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.
1997-10-01
Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extensibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version, also a Java applet, has now been built using the MySQL database manager.
Choosing the Right Database Management Program.
ERIC Educational Resources Information Center
Vockell, Edward L.; Kopenec, Donald
1989-01-01
Provides a comparison of four database management programs commonly used in schools: AppleWorks, the DOS 3.3 and ProDOS versions of PFS, and MECC's Data Handler. Topics discussed include information storage, spelling checkers, editing functions, search strategies, graphs, printout formats, library applications, and HyperCard. (LRW)
ERIC Educational Resources Information Center
Mills, Myron L.
1988-01-01
A system developed for more efficient evaluation of graduate medical students' progress uses numerical scoring and a microcomputer database management system as an alternative to manual methods to produce accurate, objective, and meaningful summaries of resident evaluations. (Author/MSE)
Integration of Information Retrieval and Database Management Systems.
ERIC Educational Resources Information Center
Deogun, Jitender S.; Raghavan, Vijay V.
1988-01-01
Discusses the motivation for integrating information retrieval and database management systems, and proposes a probabilistic retrieval model in which records in a file may be composed of attributes (formatted data items) and descriptors (content indicators). The details and resolutions of difficulties involved in integrating such systems are…
Stephens, Susie M; Chen, Jake Y; Davidson, Marcel G; Thomas, Shiby; Trute, Barry M
2005-01-01
As database management systems expand their array of analytical functionality, they become powerful research engines for biomedical data analysis and drug discovery. Databases can hold most of the data types commonly required in life sciences and consequently can be used as flexible platforms for the implementation of knowledgebases. Performing data analysis in the database simplifies data management by minimizing the movement of data from disks to memory, allowing pre-filtering and post-processing of datasets, and enabling data to remain in a secure, highly available environment. This article describes the Oracle Database 10g implementation of BLAST and Regular Expression Searches and provides case studies of their usage in bioinformatics. http://www.oracle.com/technology/software/index.html.
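Running regular-expression searches inside the database, as described above for Oracle 10g, is possible in other engines too. The sketch below uses SQLite, where the `REGEXP` operator works once a user function is registered via `sqlite3.create_function` (a documented SQLite/Python mechanism); the protein table and sequences are invented for illustration and are not from the article.

```python
import re
import sqlite3

# SQLite defines the REGEXP operator but ships no implementation;
# registering a user function named REGEXP makes it work inside SQL.
conn = sqlite3.connect(":memory:")
conn.create_function(
    "REGEXP", 2,
    lambda pattern, value: value is not None
    and re.search(pattern, value) is not None)

conn.execute("CREATE TABLE proteins (name TEXT, sequence TEXT)")
conn.executemany("INSERT INTO proteins VALUES (?, ?)",
                 [("p1", "MKTAYIAKQR"), ("p2", "GGGSGGGS")])

# Find sequences containing a repeated GGGS linker motif, in-database.
hits = [row[0] for row in conn.execute(
    "SELECT name FROM proteins WHERE sequence REGEXP ?", ("(GGGS){2}",))]
print(hits)  # -> ['p2']
```

Keeping the pattern matching inside the query preserves the advantages the article names: no bulk data movement, pre-filtering before post-processing.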
EasyKSORD: A Platform of Keyword Search Over Relational Databases
NASA Astrophysics Data System (ADS)
Peng, Zhaohui; Li, Jing; Wang, Shan
Keyword Search Over Relational Databases (KSORD) enables casual users to use keyword queries (a set of keywords) to search relational databases just like searching the Web, without any knowledge of the database schema or any need of writing SQL queries. Based on our previous work, we design and implement a novel KSORD platform named EasyKSORD for users and system administrators to use and manage different KSORD systems in a novel and simple manner. EasyKSORD supports advanced queries, efficient data-graph-based search engines, multiform result presentations, and system logging and analysis. Through EasyKSORD, users can search relational databases easily and read search results conveniently, and system administrators can easily monitor and analyze the operations of KSORD and manage KSORD systems much better.
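The KSORD idea described above, answering a keyword query without the user knowing the schema or writing SQL, can be illustrated with a deliberately naive scan. Real KSORD engines like EasyKSORD build data graphs and join tuples across tables; this sketch (with invented tables) only scans every text column.

```python
import sqlite3

# Toy database; the user searching it knows nothing about this schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (name TEXT)")
conn.execute("CREATE TABLE papers (title TEXT)")
conn.execute("INSERT INTO authors VALUES ('Codd')")
conn.execute("INSERT INTO papers VALUES ('A relational model of data')")

def keyword_query(conn, keyword):
    """Naive KSORD: discover tables/columns from catalog metadata,
    then match the keyword against every column of every table."""
    hits = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        columns = [c[1] for c in conn.execute(f"PRAGMA table_info({table})")]
        for col in columns:
            cur = conn.execute(
                f"SELECT {col} FROM {table} WHERE {col} LIKE ?",
                (f"%{keyword}%",))
            hits += [(table, row[0]) for row in cur]
    return hits

print(keyword_query(conn, "relational"))
```

The catalog lookup (`sqlite_master`, `PRAGMA table_info`) is what frees the user from knowing the schema; a production engine would replace the per-column scan with indexed, graph-based search.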
Advanced Query Formulation in Deductive Databases.
ERIC Educational Resources Information Center
Niemi, Timo; Jarvelin, Kalervo
1992-01-01
Discusses deductive databases and database management systems (DBMS) and introduces a framework for advanced query formulation for end users. Recursive processing is described, a sample extensional database is presented, query types are explained, and criteria for advanced query formulation from the end user's viewpoint are examined. (31…
Pan, Shiyang; Mu, Yuan; Wang, Hong; Wang, Tong; Huang, Peijun; Ma, Jianfeng; Jiang, Li; Zhang, Jie; Gu, Bing; Yi, Lujiang
2010-04-01
To meet the need to manage medical case information and biospecimens simultaneously, we developed a novel medical case information system integrated with biospecimen management. The database, established with MS SQL Server 2000, covered the basic information, clinical diagnosis, imaging diagnosis, pathological diagnosis and clinical treatment of patients; the physicochemical properties, inventory management and laboratory analysis of biospecimens; and user logs and data maintenance. The client application, developed in Visual C++ 6.0 and based on a client/server model, was used to implement medical case and biospecimen management. The system can perform input, browsing, querying and summarization of cases and related biospecimen information, and can automatically synthesize case records from the database. The system supports management both of long-term follow-up of individuals and of grouped cases organized according to research aims. It can improve the efficiency and quality of clinical research in which biospecimens are used in coordination. It realizes integrated, dynamic management of medical cases and biospecimens, and may be considered a new management platform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
2014-09-30
The Maui Smart Grid Project (MSGP) is under the leadership of the Hawaii Natural Energy Institute (HNEI) of the University of Hawaii at Manoa. The project team includes Maui Electric Company, Ltd. (MECO), Hawaiian Electric Company, Inc. (HECO), Sentech (a division of SRA International, Inc.), Silver Spring Networks (SSN), Alstom Grid, Maui Economic Development Board (MEDB), University of Hawaii-Maui College (UHMC), and the County of Maui. MSGP was supported by the U.S. Department of Energy (DOE) under Cooperative Agreement Number DE-FC26-08NT02871, with approximately 50% co-funding supplied by MECO. The project was designed to develop and demonstrate an integrated monitoring, communications, database, applications, and decision support solution that aggregates renewable energy (RE), other distributed generation (DG), energy storage, and demand response technologies in a distribution system to achieve both distribution and transmission-level benefits. The application of these new technologies and procedures will increase MECO's visibility into system conditions, with the expected benefits of enabling more renewable energy resources to be integrated into the grid, improving service quality, increasing overall reliability of the power system, and ultimately reducing costs to both MECO and its customers.
NSI customer service representatives and user support office: NASA Science Internet
NASA Technical Reports Server (NTRS)
1991-01-01
The NASA Science Internet (NSI) was established in 1987 to provide NASA's Office of Space Science and Applications (OSSA) missions with transparent wide-area data connectivity to NASA's researchers, computational resources, and databases. The NSI Office at NASA/Ames Research Center has the lead responsibility for implementing a total, open networking program to serve the OSSA community. NSI is a full-service communications provider whose services include science network planning, network engineering, applications development, network operations, and network information center/user support services. NSI's mission is to provide reliable high-speed communications to the NASA science community. To this end, the NSI Office manages and operates the NASA Science Internet, a multiprotocol network currently supporting both DECnet and TCP/IP protocols. NSI utilizes state-of-the-art network technology to meet its customers' requirements. The NASA Science Internet interconnects with other national networks, including the National Science Foundation's NSFNET, the Department of Energy's ESnet, and the Department of Defense's MILNET. NSI also has international connections to Japan, Australia, New Zealand, Chile, and several European countries. NSI cooperates with other government agencies as well as academic and commercial organizations to implement networking technologies which foster interoperability, improve reliability and performance, increase security and control, and expedite migration to the OSI protocols.
Teaching nurses teamwork: Integrative review of competency-based team training in nursing education.
Barton, Glenn; Bruce, Anne; Schreiber, Rita
2017-12-20
Widespread demands for high reliability healthcare teamwork have given rise to many educational initiatives aimed at building team competence. Most effort has focused on interprofessional team training; however, registered nursing teams comprise the largest human resource delivering direct patient care in hospitals. Nurses also influence many other health team outcomes, yet little is known about the team training curricula they receive, and furthermore what specific factors help translate teamwork competency to nursing practice. The aim of this review is to critically analyse empirical published work reporting on teamwork education interventions in nursing, and to identify key educational considerations enabling teamwork competency in this group. CINAHL, Web of Science, Academic Search Complete, and ERIC databases were searched and detailed inclusion-exclusion criteria applied. Studies (n = 19) were selected and evaluated using established qualitative-quantitative appraisal tools and a systematic constant comparative approach. Nursing teamwork knowledge is rooted in High Reliability Teams theory and Crew or Crisis Resource Management sources. Constructivist pedagogy is used to teach, practice, and refine teamwork competency. Nursing teamwork assessment is complex, involving integrated yet individualized determinations of knowledge, skills, and attitudes. Future initiatives need to consider an emphasis on frontline leadership, supportive followership, and skilled communication. Collective stakeholder support is required to translate teamwork competency into nursing practice. Copyright © 2017 Elsevier Ltd. All rights reserved.
Incorporating travel-time reliability into the congestion management process : a primer.
DOT National Transportation Integrated Search
2015-02-01
This primer explains the value of incorporating travel-time reliability into the Congestion Management Process (CMP) : and identifies the most current tools available to assist with this effort. It draws from applied research and best practices : fro...
NASA Astrophysics Data System (ADS)
Garcia-Mayordomo, Julian; Martin-Banda, Raquel; Insua-Arevalo, Juan Miguel; Alvarez-Gomez, Jose Antonio; Martinez-Diaz, Jose Jesus
2017-04-01
Since the Quaternary Active Faults Database of Iberia (QAFI) was released in February 2012, a number of studies aimed at producing seismic hazard assessments have made use of it. We will present a summary of the shortcomings and advantages that were encountered when QAFI was considered in different seismic hazard studies. These include the production of the new official seismic hazard map of Spain, performed in view of the foreseen adoption of Eurocode-8 during 2017. The QAFI database was considered as a complementary source of information for designing the seismogenic source-zone models used in the calculations, and particularly for the estimation of the maximum magnitude distribution in each zone, as well as for assigning the predominant rupture mechanism based on style of faulting. We will also review the different results obtained by other studies that considered QAFI faults as independent seismogenic sources as opposed to source zones, revealing, on one hand, the crucial importance of data reliability and, on the other, the considerable influence that ground-motion attenuation models have on the actual impact of fault sources on hazard results. Finally, we will briefly present the updated version of the database (QAFI v.3, 2015), which includes an original scheme for evaluating the reliability of fault seismic parameters, specifically devised to facilitate decision-making for seismic hazard practitioners.
A survey of commercial object-oriented database management systems
NASA Technical Reports Server (NTRS)
Atkins, John
1992-01-01
The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 70's E.F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.
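The flat-table idea described in the abstract above can be illustrated with a minimal sketch. The tables and names here are hypothetical, using Python's built-in sqlite3 module: relationships between tables are expressed declaratively at query time rather than through stored pointers, as in the hierarchical or network models.

```python
import sqlite3

# In-memory relational database: data lives in flat tables
# with no physical or internal links between them.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER)")
conn.executemany("INSERT INTO dept VALUES (?, ?)", [(1, "Research"), (2, "Sales")])
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Codd", 1), (2, "Date", 1), (3, "Smith", 2)])

# The relationship is recovered at query time by joining on values.
rows = conn.execute(
    "SELECT e.name FROM employee e JOIN dept d ON e.dept_id = d.id "
    "WHERE d.name = 'Research' ORDER BY e.name"
).fetchall()
names = [r[0] for r in rows]
print(names)  # ['Codd', 'Date']
```

The declarative query is the point: the user states what to retrieve, not how to navigate record links, which is what made data directly accessible to non-programmers.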
Building an Ontology-driven Database for Clinical Immune Research
Ma, Jingming
2006-01-01
Clinical research on the immune response usually generates a huge amount of biomedical testing data over a certain period of time. User-friendly data management systems based on a relational database help immunologists and clinicians fully manage these data. On the other hand, the same biological assays, such as ELISPOT and flow cytometric assays, are involved in immunological experiments regardless of the study purpose. The reuse of biological knowledge is one of the driving forces behind ontology-driven data management. Therefore, an ontology-driven database will help handle different clinical immune research studies and help immunologists and clinicians easily understand each other's immunological data. We discuss some outlines for building an ontology-driven data management system for clinical immune research (ODMim). PMID:17238637
Intrathecal Drug Delivery Systems for Cancer Pain: A Health Technology Assessment
2016-01-01
Background Intrathecal drug delivery systems can be used to manage refractory or persistent cancer pain. We investigated the benefits, harms, cost-effectiveness, and budget impact of these systems compared with current standards of care for adult patients with chronic pain owing to cancer. Methods We searched Ovid MEDLINE, Ovid Embase, the Cochrane Library databases, the National Health Service's Economic Evaluation Database, and the Tufts Cost-Effectiveness Analysis Registry from January 1994 to April 2014 for evidence of effectiveness, harms, and cost-effectiveness. We used existing systematic reviews that had employed reliable search and screen methods and searched for studies published after the search date reported in the latest systematic review. Two reviewers screened records and assessed study validity. The cost burden of publicly funding intrathecal drug delivery systems for cancer pain was estimated over a 5-year timeframe using a combination of published literature, information from the device manufacturer, administrative data, and expert opinion for the inputs. Results We included one randomized trial that examined effectiveness and harms, and one case series that reported an eligible economic evaluation. We found very low quality evidence that intrathecal drug delivery systems added to comprehensive pain management reduce overall drug toxicity; no significant reduction in pain scores was observed. Weak conclusions from economic evidence suggested that intrathecal drug delivery systems had the potential to be more cost-effective than high-cost oral therapy if administered for 7 months or longer. The cost burden of publicly funding this therapy is estimated to be $100,000 in the first year, increasing to $500,000 by the fifth year.
Conclusions Current evidence could not establish the benefit, harm, or cost-effectiveness of intrathecal drug delivery systems compared with current standards of care for managing refractory cancer pain in adults. Publicly funding intrathecal drug delivery systems for cancer pain would result in a budget impact of several hundred thousand dollars per year. PMID:27026796
NASA Astrophysics Data System (ADS)
Mogaji, Kehinde Anthony; Lim, Hwee San
2018-06-01
This study presents the application of a GIS-based Dempster-Shafer data-driven model, the evidential belief function (EBF) methodology, to groundwater potential conditioning factors (GPCFs) derived from geophysical and hydrogeological data sets for assessing groundwater potentiality. The method's efficacy in managing the degree of uncertainty in spatial predictive models motivated this research. The procedure first entails constructing a database containing groundwater data records (bore well location inventory, hydrogeological data records, etc.) and geophysical measurement data. From the database, different factors influencing groundwater occurrence, namely aquifer layer thickness, aquifer layer resistivity, overburden material resistivity, overburden material thickness, aquifer hydraulic conductivity, and aquifer transmissivity, were extracted and prepared. The bore well location inventories were then partitioned randomly in a ratio of 70% (19 wells) for model training and 30% (9 wells) for model testing. Synthesizing the GPCFs via the DS-EBF model algorithms produced the groundwater productivity potential index (GPPI) map, which demarcated the area into low-medium, medium, medium-high, and high potential zones. The analyzed percentage degree of uncertainty is >10% for the predicted low potential zone classes and <10% for the medium and high potential zone classes. Validation of the DS theory model-based GPPI map through the ROC approach established a prediction rate accuracy of 88.8%. Transverse resistance (TR) values in the range of 1280 to 30,000 Ω m², determined through Dar-Zarrouk parameter analysis for the geoelectrically delineated aquifer units of the predicted potential zones, quantitatively confirm the DS theory modeling prediction results.
These results expand the capability of the DS-EBF model in predictive modeling through effective uncertainty management. The produced map could thus form part of a reliable decision support system for local authorities in groundwater exploitation and management in the area.
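The data-driven belief assignment behind evidential methods like the one in the abstract above can be sketched in simplified form. All counts below are hypothetical, and the formula is a frequency-ratio-style simplification of the full Dempster-Shafer EBF, not the authors' implementation: a factor class receives high belief when training wells are over-represented there relative to the class's areal share.

```python
# Hypothetical counts per conditioning-factor class:
# (map cells in class, training wells falling in class)
classes = {"low": (500, 2), "medium": (300, 6), "high": (200, 11)}

total_cells = sum(c for c, _ in classes.values())  # 1000 cells
total_wells = sum(w for _, w in classes.values())  # 19 training wells

# Ratio of the class's share of wells to its share of area,
# then normalize so the beliefs sum to 1.
raw = {k: (w / total_wells) / (c / total_cells) for k, (c, w) in classes.items()}
total = sum(raw.values())
belief = {k: v / total for k, v in raw.items()}

for k, v in belief.items():
    print(f"{k}: {v:.3f}")
```

With these made-up counts the "high" class, holding 11 of 19 wells on only 20% of the area, receives by far the largest normalized belief, which is the qualitative behaviour the GPPI-style ranking relies on.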
Paksi, Borbala; Demetrovics, Zsolt; Magi, Anna; Felvinczi, Katalin
2017-06-01
This paper introduces the methods and methodological findings of the National Survey on Addiction Problems in Hungary (NSAPH 2015). Use patterns of smoking, alcohol use and other psychoactive substances were measured, as well as certain behavioural addictions (problematic gambling - PGSI, DSM-V; eating disorders - SCOFF; problematic internet use - PIUQ; problematic on-line gaming - POGO; problematic social media use - FAS; exercise addiction - EAI-HU; work addiction - BWAS; compulsive buying - CBS). The paper also describes the applied measurement techniques, sample selection, recruitment of respondents, and the data collection strategy. Methodological results of the survey, including the reliability and validity of the measures, are reported. The NSAPH 2015 research was carried out on a nationally representative sample of the Hungarian adult population aged 16-64 years (gross sample 2477, net sample 2274 persons), with the 18-34 age group overrepresented. Statistical analysis of the weight distribution suggests that weighting did not create any artificial distortion in the database, leaving the representativeness of the sample unaffected. The weighted sample of the 18-64-year-old adult population comprises 1490 persons. The theoretical margin of error in the weighted sample is ±2.5% at a 95% confidence level, which is in line with the original data collection plans. Based on the analysis of reliability and of errors beyond sampling within the context of the database, we conclude that inconsistencies create relatively minor distortions in cumulative prevalence rates; consequently, the database allows reliable estimation of risk factors related to different substance use behaviours. The reliability indexes of the measures used for prevalence estimates of behavioural addictions proved appropriate, though the psychometric features in some cases suggest the presence of redundant items.
Comparison of the parameters of errors beyond sample selection in the current and previous data collections indicates that trend estimates and their interpretation require particular attention, and in some cases correction procedures might be necessary.
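The reported margin of error for the weighted sample can be reproduced with the standard worst-case formula for a proportion at 95% confidence (z = 1.96, p = 0.5, n = 1490); the function name is illustrative, not from the survey's methodology.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for an estimated proportion:
    z * sqrt(p * (1 - p) / n), maximized at p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1490)
print(f"{moe:.4f}")  # ~0.0254, i.e. roughly +/-2.5%
```

At n = 1490 the formula gives about 0.0254, matching the survey's stated ±2.5% theoretical margin.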
Do Librarians Really Do That? Or Providing Custom, Fee-Based Services.
ERIC Educational Resources Information Center
Whitmore, Susan; Heekin, Janet
This paper describes some of the fee-based, custom services provided by the National Institutes of Health (NIH) Library to NIH staff, including knowledge management, clinical liaisons, specialized database searching, bibliographic database development, Web resource guide development, and journal management. The first section discusses selecting the…
Effects of Long-term Soil and Crop Management on Soil Hydraulic Properties for Claypan Soils
USDA-ARS?s Scientific Manuscript database
Regional and national soil maps and associated databases of soil properties have been developed to help land managers make decisions based on soil characteristics. Hydrologic modelers also utilize soil hydraulic properties provided in these databases, in which soil characterization is based on avera...
A Database Management Assessment Instrument
ERIC Educational Resources Information Center
Landry, Jeffrey P.; Pardue, J. Harold; Daigle, Roy; Longenecker, Herbert E., Jr.
2013-01-01
This paper describes an instrument designed for assessing learning outcomes in data management. In addition to assessment of student learning and ABET outcomes, we have also found the instrument to be effective for determining database placement of incoming information systems (IS) graduate students. Each of these three uses is discussed in this…
Managing the world's largest and most complex freshwater ecosystem, the Laurentian Great Lakes, requires a spatially hierarchical, basin-wide database of ecological and socioeconomic information that is comparable across the region. To meet this need, we developed a hierarchi...
Two Student Self-Management Techniques Applied to Data-Based Program Modification.
ERIC Educational Resources Information Center
Wesson, Caren
Two student self-management techniques, student charting and student selection of instructional activities, were applied to ongoing data-based program modification. Forty-two elementary school resource room students were assigned randomly (within teacher) to one of three treatment conditions: Teacher Chart-Teacher Select Instructional Activities…