Sample records for data quality

  1. The creation, management, and use of data quality information for life cycle assessment.

    PubMed

    Edelen, Ashley; Ingwersen, Wesley W

    2018-04-01

    Despite growing access to data, questions of "best fit" data and the appropriate use of results in supporting decision making still plague the life cycle assessment (LCA) community. This discussion paper addresses revisions to assessing data quality captured in a new US Environmental Protection Agency guidance document, as well as additional recommendations on data quality creation, management, and use in LCA databases and studies. Existing data quality systems and approaches in LCA were reviewed and tested. The evaluations resulted in a revision to a commonly used pedigree matrix, in which flow- and process-level data quality indicators are described, scoring criteria are clarified, and further guidance on interpretation is given. Increased training for practitioners on data quality application and its limits is recommended. A multi-faceted approach to data quality assessment, utilizing the pedigree method alongside uncertainty analysis in result interpretation, is recommended. A method of data quality score aggregation is proposed, and recommendations are made for using data quality scores in existing data to improve their use in interpreting LCA results. Roles for data generators, data repositories, and data users in LCA data quality management are described. Guidance is provided on using data with data quality scores from other systems alongside data with scores from the new system. The new pedigree matrix and recommended data quality aggregation procedure can now be implemented in openLCA software. Additional ways in which data quality assessment might be improved and expanded are described. Interoperability efforts in LCA data should focus on descriptors that enable user scoring of data quality rather than translation of existing scores. Also needed are data quality indicators for additional dimensions of LCA data, and automation of data quality scoring through metadata extraction and comparison to goal and scope.
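
    The aggregation procedure itself is specified in the paper and in openLCA rather than reproduced here; as a hedged illustration of the general idea, the sketch below rolls hypothetical flow-level pedigree scores up to a process-level score by weighting each flow's score by its amount. The function name, the 1 (best) to 5 (worst) scale, and the weighting rule are assumptions for demonstration, not the EPA-recommended method.

```python
# Minimal sketch of pedigree-score aggregation (hypothetical, not the paper's
# exact procedure): flow-level scores on a 1 (best) to 5 (worst) scale are
# combined into process-level scores, weighted by each flow's amount.

def aggregate_pedigree(flows):
    """flows: list of (amount, {indicator: score}) tuples for one process."""
    total = sum(abs(amount) for amount, _ in flows)
    if total == 0:
        raise ValueError("no flow amounts to weight by")
    indicators = flows[0][1].keys()
    return {
        ind: round(sum(abs(a) * s[ind] for a, s in flows) / total, 2)
        for ind in indicators
    }

process_flows = [
    (120.0, {"reliability": 2, "temporal": 3}),  # hypothetical flow data
    (30.0,  {"reliability": 4, "temporal": 1}),
]
print(aggregate_pedigree(process_flows))
# -> {'reliability': 2.4, 'temporal': 2.6}
```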

  2. An integrated view of data quality in Earth observation

    PubMed Central

    Yang, X.; Blower, J. D.; Bastin, L.; Lush, V.; Zabala, A.; Masó, J.; Cornford, D.; Díaz, P.; Lumsden, J.

    2013-01-01

    Data quality is a difficult notion to define precisely, and different communities have different views and understandings of the subject. This causes confusion, a lack of harmonization of data across communities and omission of vital quality information. For some existing data infrastructures, data quality standards cannot address the problem adequately and cannot fulfil all user needs or cover all concepts of data quality. In this study, we discuss some philosophical issues on data quality. We identify actual user needs on data quality, review existing standards and specifications on data quality, and propose an integrated model for data quality in the field of Earth observation (EO). We also propose a practical mechanism for applying the integrated quality information model to a large number of datasets through metadata inheritance. While our data quality management approach is in the domain of EO, we believe that the ideas and methodologies for data quality management can be applied to wider domains and disciplines to facilitate quality-enabled scientific research. PMID:23230156

  3. An integrated view of data quality in Earth observation.

    PubMed

    Yang, X; Blower, J D; Bastin, L; Lush, V; Zabala, A; Masó, J; Cornford, D; Díaz, P; Lumsden, J

    2013-01-28

    Data quality is a difficult notion to define precisely, and different communities have different views and understandings of the subject. This causes confusion, a lack of harmonization of data across communities and omission of vital quality information. For some existing data infrastructures, data quality standards cannot address the problem adequately and cannot fulfil all user needs or cover all concepts of data quality. In this study, we discuss some philosophical issues on data quality. We identify actual user needs on data quality, review existing standards and specifications on data quality, and propose an integrated model for data quality in the field of Earth observation (EO). We also propose a practical mechanism for applying the integrated quality information model to a large number of datasets through metadata inheritance. While our data quality management approach is in the domain of EO, we believe that the ideas and methodologies for data quality management can be applied to wider domains and disciplines to facilitate quality-enabled scientific research.

  4. Principles and Practices for Quality Assurance and Quality Control

    USGS Publications Warehouse

    Jones, Berwyn E.

    1999-01-01

    Quality assurance and quality control are vital parts of highway runoff water-quality monitoring projects. To be effective, project quality assurance must address all aspects of the project, including project management responsibilities and resources, data quality objectives, sampling and analysis plans, data-collection protocols, data quality-control plans, data-assessment procedures and requirements, and project outputs. Quality control ensures that the data quality objectives are achieved as planned. The historical development and current state of the art of quality assurance and quality control concepts described in this report can be applied to evaluation of data from prior projects.

  5. A Transparent and Transferable Framework for Tracking Quality Information in Large Datasets

    PubMed Central

    Smith, Derek E.; Metzger, Stefan; Taylor, Jeffrey R.

    2014-01-01

    The ability to evaluate the validity of data is essential to any investigation, and manual “eyes on” assessments of data quality have dominated in the past. Yet, as the size of collected data continues to increase, so does the effort required to assess their quality. This challenge is of particular concern for networks that automate their data collection, and has resulted in the automation of many quality assurance and quality control analyses. Unfortunately, the interpretation of the resulting data quality flags can become quite challenging with large data sets. We have developed a framework to summarize data quality information and facilitate interpretation by the user. Our framework consists of first compiling data quality information and then presenting it through two separate mechanisms: a quality report and a quality summary. The quality report presents the results of specific quality analyses as they relate to individual observations, while the quality summary takes a spatial or temporal aggregate of each quality analysis and provides a summary of the results. Included in the quality summary is a final quality flag, which further condenses data quality information to assess whether a data product is valid or not. This framework has the added flexibility to allow “eyes on” information on data quality to be incorporated for many data types. Furthermore, this framework can aid problem tracking and resolution, should sensor or system malfunctions arise. PMID:25379884
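
    As a hedged illustration of the report/summary split described above, the sketch below derives per-observation flags (the quality report), aggregates them into a failure rate (the quality summary), and condenses that into a final flag. The test, threshold, and data are invented for demonstration, not the authors' actual conventions.

```python
# Sketch of a quality report vs. quality summary, under assumed conventions:
# 0 = pass, 1 = fail for each test, and a final flag driven by the aggregate
# failure rate over a time window.
import statistics

observations = [10.1, 10.3, 55.0, 10.2, 9.9, 10.0]   # hypothetical sensor data

def range_test(x, lo=0.0, hi=50.0):
    """A single plausibility test; limits are illustrative."""
    return lo <= x <= hi

# Quality report: one flag per observation per test.
report = [{"value": x, "range_fail": int(not range_test(x))} for x in observations]

# Quality summary: aggregate failure rate for the test over the window.
fail_rate = statistics.mean(r["range_fail"] for r in report)

# Final quality flag condenses the summary into valid/suspect.
final_flag = "valid" if fail_rate <= 0.2 else "suspect"
print(f"failure rate = {fail_rate:.2f}, final flag = {final_flag}")
```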

  6. A review of data quality assessment methods for public health information systems.

    PubMed

    Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping

    2014-05-14

    High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. Relevant studies were identified in major databases and on well-known institutional websites. We found that the dimension of data itself was most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interviews and documentation reviews. The limitations of the reviewed studies included inattentiveness to data use and the data collection process, inconsistency in the definition of attributes of data quality, failure to address data users' concerns, and a lack of systematic procedures in data quality assessment. This review is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research effort should be devoted to assessing the quality of data use and the quality of the data collection process.

  7. A Review of Data Quality Assessment Methods for Public Health Information Systems

    PubMed Central

    Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping

    2014-01-01

    High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. Relevant studies were identified in major databases and on well-known institutional websites. We found that the dimension of data itself was most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interviews and documentation reviews. The limitations of the reviewed studies included inattentiveness to data use and the data collection process, inconsistency in the definition of attributes of data quality, failure to address data users’ concerns, and a lack of systematic procedures in data quality assessment. This review is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research effort should be devoted to assessing the quality of data use and the quality of the data collection process. PMID:24830450

  8. Improving data quality in the linked open data: a survey

    NASA Astrophysics Data System (ADS)

    Hadhiatma, A.

    2018-03-01

    The Linked Open Data (LOD) is a “web of data”, a different paradigm from the “web of documents” commonly used today. However, the huge LOD still suffers from data quality problems such as completeness, consistency, and accuracy. Data quality problems relate to designing effective methods both to manage and to retrieve information at various data quality levels. Based on a review of papers and journals, addressing data quality requires standards for (1) identifying data quality problems, (2) assessing data quality in a given context, and (3) correcting data quality problems. However, most methods and strategies dealing with LOD data quality have not taken an integrative approach. Hence, based on those standards and an integrative approach, there are opportunities to improve LOD data quality in terms of incompleteness, inaccuracy and inconsistency by attending to its schema and ontology, namely through ontology refinement. Ontology refinement, in this sense, serves not only to improve data quality but also to enrich the LOD. Therefore, what is needed is (1) a standard for data quality assessment and evaluation that is more appropriate to the LOD, and (2) a framework of methods based on statistical relational learning that can improve the correction of data quality problems as well as enrich the LOD.

  9. Methods for assessing the quality of data in public health information systems: a critical review.

    PubMed

    Chen, Hong; Yu, Ping; Hailey, David; Wang, Ning

    2014-01-01

    The quality of data in public health information systems can be ensured by effective data quality assessment. In order to conduct effective data quality assessment, measurable data attributes have to be precisely defined, and reliable and valid measurement methods have to be used to measure each attribute. We conducted a systematic review of data quality assessment methods for public health using major databases and well-known institutional websites. Thirty-five studies were eligible for inclusion. A total of 49 attributes of data quality were identified from the literature. Completeness, accuracy and timeliness were the three most frequently assessed attributes of data quality. Most studies directly examined data values; this was complemented by exploring either data users' perceptions or documentation quality. However, current data quality assessment methods have limitations: a lack of consensus on the attributes measured; inconsistent definitions of the data quality attributes; a lack of mixed methods for assessing data quality; and inadequate attention to reliability and validity. Removing these limitations is an opportunity for further improvement.

  10. Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research

    PubMed Central

    Weng, Chunhua

    2013-01-01

    Objective: To review the methods and dimensions of data quality assessment in the context of electronic health record (EHR) data reuse for research. Materials and methods: A review of the clinical research literature discussing data quality assessment methodology for EHR data was performed. Using an iterative process, the aspects of data quality being measured were abstracted and categorized, as well as the methods of assessment used. Results: Five dimensions of data quality were identified (completeness, correctness, concordance, plausibility, and currency), along with seven broad categories of data quality assessment methods: comparison with gold standards, data element agreement, data source agreement, distribution comparison, validity checks, log review, and element presence. Discussion: Examination of the methods by which clinical researchers have investigated the quality and suitability of EHR data for research shows that there are fundamental features of data quality, which may be difficult to measure, as well as proxy dimensions. Researchers interested in the reuse of EHR data for clinical research are recommended to consider the adoption of a consistent taxonomy of EHR data quality, to remain aware of the task-dependence of data quality, to integrate work on data quality assessment from other fields, and to adopt systematic, empirically driven, statistically based methods of data quality assessment. Conclusion: There is currently little consistency or potential generalizability in the methods used to assess EHR data quality. If the reuse of EHR data for clinical research is to become accepted, researchers should adopt validated, systematic methods of EHR data quality assessment. PMID:22733976
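
    To make the dimensions concrete, here is a minimal sketch of validity-style checks over toy EHR-like records, touching completeness, plausibility, and currency. The field names, ranges, and dates are assumptions for illustration, not a validated instrument.

```python
# Toy checks for three data quality dimensions named above: completeness
# (is a value present?), plausibility (is it physiologically possible?), and
# currency (is it recent enough?). All names and thresholds are hypothetical.
from datetime import date

records = [
    {"id": 1, "dob": date(1950, 5, 1), "heart_rate": 72,  "updated": date(2013, 1, 2)},
    {"id": 2, "dob": None,             "heart_rate": 350, "updated": date(2001, 7, 9)},
]

def assess(rec, as_of=date(2013, 6, 1)):
    issues = []
    if rec["dob"] is None:                        # completeness
        issues.append("incomplete: dob missing")
    if not 20 <= rec["heart_rate"] <= 250:        # plausibility
        issues.append("implausible: heart_rate out of range")
    if (as_of - rec["updated"]).days > 365 * 5:   # currency
        issues.append("stale: not updated in 5 years")
    return issues

for rec in records:
    print(rec["id"], assess(rec) or ["ok"])
```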

  11. Measuring management's perspective of data quality in Pakistan's Tuberculosis control programme: a test-based approach to identify data quality dimensions.

    PubMed

    Ali, Syed Mustafa; Anjum, Naveed; Kamel Boulos, Maged N; Ishaq, Muhammad; Aamir, Javariya; Haider, Ghulam Rasool

    2018-01-16

    Data quality is a core theme of a programme's performance assessment, yet many organizations have no data quality improvement strategy, of which data quality dimensions and a data quality assessment framework are important constituents. As there is limited published research about the data quality specifics relevant to the context of Pakistan's Tuberculosis (TB) control programme, this study aims at identifying the applicable data quality dimensions from a 'fitness-for-purpose' perspective. Forty-two respondents pooled a total of 473 years of professional experience, of which 223 years (47%) were in TB control-related programmes. Based on the responses to 11 practical cases adopted from the routine recording and reporting system of Pakistan's TB control programme (real identities of patients were masked), completeness, accuracy, consistency, vagueness, uniqueness and timeliness are the data quality dimensions applicable to the programme's context, i.e. its work settings and field of practice. Taking a 'fitness-for-purpose' approach to data quality, this study used a test-based approach to measure management's perspective and identified data quality dimensions pertinent to the programme- and country-specific requirements. Implementing a data quality improvement strategy and achieving enhanced data quality would greatly help organizations promote data use for informed decision making.

  12. A comprehensive method for GNSS data quality determination to improve ionospheric data analysis.

    PubMed

    Kim, Minchan; Seo, Jiwon; Lee, Jiyun

    2014-08-14

    Global Navigation Satellite Systems (GNSS) are now recognized as cost-effective tools for ionospheric studies, providing global coverage through worldwide networks of GNSS stations. While GNSS networks continue to expand to improve the observability of the ionosphere, the amount of poor-quality GNSS observation data is also increasing, and the use of poor-quality GNSS data degrades the accuracy of ionospheric measurements. This paper develops a comprehensive method to determine the quality of GNSS observations for the purpose of ionospheric studies. The algorithms are designed especially to compute key GNSS data quality parameters that affect the quality of ionospheric products. The quality of data collected from the Continuously Operating Reference Stations (CORS) network in the conterminous United States (CONUS) is analyzed. The resulting quality varies widely from station to station, and the data quality of an individual station persists over extended time periods. When compared to conventional methods, the quality parameters obtained from the proposed method have a stronger correlation with the quality of ionospheric data. The results suggest that a set of data quality parameters, when used in combination, can effectively select stations with high-quality GNSS data and improve the performance of ionospheric data analysis.
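
    The abstract does not enumerate the specific quality parameters, so as a hedged stand-in the sketch below computes one parameter commonly used for GNSS stations, epoch completeness, and screens stations against a threshold. Station IDs, counts, and the 95% cutoff are invented.

```python
# Illustrative station screening by epoch completeness (observed / expected
# epochs). This is a stand-in for the paper's fuller parameter set.

EXPECTED_EPOCHS = 2880        # e.g., 24 h of 30-second observation epochs
observed = {"STA1": 2875, "STA2": 1400, "STA3": 2700}   # hypothetical counts

completeness = {sta: n / EXPECTED_EPOCHS for sta, n in observed.items()}
good_stations = sorted(sta for sta, c in completeness.items() if c >= 0.95)

for sta in sorted(completeness):
    print(f"{sta}: completeness = {completeness[sta]:.1%}")
print("selected for ionospheric analysis:", good_stations)
```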

  13. A Comprehensive Method for GNSS Data Quality Determination to Improve Ionospheric Data Analysis

    PubMed Central

    Kim, Minchan; Seo, Jiwon; Lee, Jiyun

    2014-01-01

    Global Navigation Satellite Systems (GNSS) are now recognized as cost-effective tools for ionospheric studies, providing global coverage through worldwide networks of GNSS stations. While GNSS networks continue to expand to improve the observability of the ionosphere, the amount of poor-quality GNSS observation data is also increasing, and the use of poor-quality GNSS data degrades the accuracy of ionospheric measurements. This paper develops a comprehensive method to determine the quality of GNSS observations for the purpose of ionospheric studies. The algorithms are designed especially to compute key GNSS data quality parameters that affect the quality of ionospheric products. The quality of data collected from the Continuously Operating Reference Stations (CORS) network in the conterminous United States (CONUS) is analyzed. The resulting quality varies widely from station to station, and the data quality of an individual station persists over extended time periods. When compared to conventional methods, the quality parameters obtained from the proposed method have a stronger correlation with the quality of ionospheric data. The results suggest that a set of data quality parameters, when used in combination, can effectively select stations with high-quality GNSS data and improve the performance of ionospheric data analysis. PMID:25196005

  14. Transparent Reporting of Data Quality in Distributed Data Networks

    PubMed Central

    Kahn, Michael G.; Brown, Jeffrey S.; Chun, Alein T.; Davidson, Bruce N.; Meeker, Daniella; Ryan, Patrick B.; Schilling, Lisa M.; Weiskopf, Nicole G.; Williams, Andrew E.; Zozus, Meredith Nahm

    2015-01-01

    Introduction: Poor data quality can be a serious threat to the validity and generalizability of clinical research findings. The growing availability of electronic administrative and clinical data is accompanied by a growing concern about the quality of these data for observational research and other analytic purposes. Currently, there are no widely accepted guidelines for reporting data quality results that would enable investigators and consumers to independently determine if a data source is fit for use to support analytic inferences and reliable evidence generation. Model and Methods: We developed a conceptual model that captures the flow of data from data originator across successive data stewards and finally to the data consumer. This “data lifecycle” model illustrates how data quality issues can result in data being returned back to previous data custodians. We highlight the potential risks of poor data quality on clinical practice and research results. Because of the need to ensure transparent reporting of data quality issues, we created a unifying data-quality reporting framework and a complementary set of 20 data-quality reporting recommendations for studies that use observational clinical and administrative data for secondary data analysis. We obtained stakeholder input on the perceived value of each recommendation by soliciting public comments via two face-to-face meetings of informatics and comparative-effectiveness investigators, through multiple public webinars targeted to the health services research community, and with an open access online wiki. Recommendations: Our recommendations propose reporting on both general and analysis-specific data quality features. The goals of these recommendations are to improve the reporting of data quality measures for studies that use observational clinical and administrative data, to ensure transparency and consistency in computing data quality measures, and to facilitate best practices and trust in the new clinical discoveries based on secondary use of observational data. PMID:25992385

  15. A Quality Screening Service for Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Olsen, Edward; Fox, Peter; Vollmer, Bruce; Wolfe, Robert; Samadi, Shahin

    2010-01-01

    NASA provides a wide variety of Earth-observing satellite data products to a diverse community. These data are annotated with quality information in a variety of ways, with the result that many users struggle to understand how to properly account for quality when dealing with satellite data. To address this issue, a Data Quality Screening Service (DQSS) is being implemented for a number of datasets. The DQSS will enable users to obtain data files in which low-quality pixels have been filtered out, based either on quality criteria recommended by the science team or on the user's particular quality criteria. The objective is to increase proper utilization of this critical quality information in the scientific analysis of satellite data products.
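
    The core filtering step can be pictured with a few lines of array code. In this hedged sketch, pixels whose quality flag exceeds a user-chosen cutoff are replaced with a fill value; the flag convention (0 = best) and the cutoff are assumptions, not a specific product's scheme.

```python
# Masking low-quality pixels, DQSS-style: keep data where the quality flag
# meets the criterion, fill elsewhere. Flag convention is assumed (0 = best).
import numpy as np

data = np.array([[280.1, 281.3], [279.8, 282.4]])   # hypothetical retrievals
quality = np.array([[0, 2], [1, 3]])                # per-pixel quality flags

MAX_ACCEPTABLE = 1        # science-team-recommended or user-chosen cutoff
screened = np.where(quality <= MAX_ACCEPTABLE, data, np.nan)
print(screened)
# [[280.1   nan]
#  [279.8   nan]]
```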

  16. Methods for examining data quality in healthcare integrated data repositories.

    PubMed

    Huser, Vojtech; Kahn, Michael G; Brown, Jeffrey S; Gouripeddi, Ramkiran

    2018-01-01

    This paper summarizes the content of a workshop focused on data quality. The first speaker (VH) described the data quality infrastructure and data quality evaluation methods currently in place within the Observational Health Data Sciences and Informatics (OHDSI) consortium. The speaker described in detail a data quality tool called Achilles Heel and the latest developments for extending this tool. Interim results of an ongoing data quality study within the OHDSI consortium were also presented. The second speaker (MK) described lessons learned and new data quality checks developed by the PEDSnet pediatric research network. The last two speakers (JB, RG) described tools developed by the Sentinel Initiative and the University of Utah's service-oriented framework. Throughout, and in a closing discussion, the workshop considered how data quality assessment can be advanced by combining the best features of each network.

  17. 40 CFR 51.320 - Annual air quality data report.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 2 2014-07-01 2014-07-01 false Annual air quality data report. 51.320... REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Reports Air Quality Data Reporting § 51.320 Annual air quality data report. The requirements for reporting air quality data collected...

  18. 40 CFR 51.320 - Annual air quality data report.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 2 2012-07-01 2012-07-01 false Annual air quality data report. 51.320... REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Reports Air Quality Data Reporting § 51.320 Annual air quality data report. The requirements for reporting air quality data collected...

  19. 40 CFR 51.320 - Annual air quality data report.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 2 2013-07-01 2013-07-01 false Annual air quality data report. 51.320... REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Reports Air Quality Data Reporting § 51.320 Annual air quality data report. The requirements for reporting air quality data collected...

  20. 40 CFR 51.320 - Annual air quality data report.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 2 2010-07-01 2010-07-01 false Annual air quality data report. 51.320... REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Reports Air Quality Data Reporting § 51.320 Annual air quality data report. The requirements for reporting air quality data collected...

  21. 40 CFR 51.320 - Annual air quality data report.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 2 2011-07-01 2011-07-01 false Annual air quality data report. 51.320... REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Reports Air Quality Data Reporting § 51.320 Annual air quality data report. The requirements for reporting air quality data collected...

  22. Ontology Based Quality Evaluation for Spatial Data

    NASA Astrophysics Data System (ADS)

    Yılmaz, C.; Cömert, Ç.

    2015-08-01

    Many institutions will be providing data to the National Spatial Data Infrastructure (NSDI). The current technical background of the NSDI is based on syntactic web services, and it is expected that this will be replaced by semantic web services. The quality of the data provided is important for the decision-making process and the accuracy of transactions; therefore, the data quality needs to be tested. This topic has been neglected in Turkey. Data quality control for the NSDI may be done by private or public "data accreditation" institutions, and a methodology is required for data quality evaluation. There are studies on data quality, including ISO standards, academic studies and software to evaluate spatial data quality. The ISO 19157 standard defines the data quality elements. Proprietary software such as 1Spatial's 1Validate and ESRI's Data Reviewer offers quality evaluation based on its own classification of rules. Commonly, rule-based approaches are used for geospatial data quality checks. In this study, we look for the technical components to devise and implement a rule-based approach with ontologies, using free and open source software in a semantic web context. The semantic web uses ontologies to deliver well-defined web resources and make them accessible to end-users and processes. We have created an ontology conforming to the geospatial data and defined some sample rules to show how to test data with respect to data quality elements, including attribute, topo-semantic and geometrical consistency, using free and open source software. To test data against the rules, sample GeoSPARQL queries associated with the specifications are created.
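
    Full GeoSPARQL topology checks need a spatially enabled triple store, so the hedged sketch below shows only the shape of a rule-based check: a plain SPARQL query over a small ontology flags features that violate an attribute-consistency rule. The ontology terms and the rule are invented for illustration.

```python
# Rule-based attribute-consistency check over a tiny RDF graph with rdflib.
# Rule (invented): every ex:Road must carry an ex:laneCount attribute.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/topo#")
g = Graph()
g.add((EX.road1, RDF.type, EX.Road))
g.add((EX.road1, EX.laneCount, Literal(2)))
g.add((EX.road2, RDF.type, EX.Road))      # road2 violates the rule

violations = g.query("""
    PREFIX ex: <http://example.org/topo#>
    SELECT ?road WHERE {
        ?road a ex:Road .
        FILTER NOT EXISTS { ?road ex:laneCount ?n }
    }
""")
for (road,) in violations:
    print("attribute-consistency violation:", road)
```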

  23. Impact of the present-on-admission indicator on hospital quality measurement: experience with the Agency for Healthcare Research and Quality (AHRQ) Inpatient Quality Indicators.

    PubMed

    Glance, Laurent G; Osler, Turner M; Mukamel, Dana B; Dick, Andrew W

    2008-02-01

    The Agency for Healthcare Research and Quality (AHRQ) has constructed Inpatient Quality Indicator (IQI) mortality measures to measure hospital quality using routinely available administrative data. With the exception of California, New York State, and Wisconsin, administrative data do not include a present-on-admission (POA) indicator to distinguish between preexisting conditions and complications. The extent to which the lack of a POA indicator biases quality assessment based on the AHRQ mortality measures is unknown. Our objective was to examine the impact of the POA indicator on hospital quality assessment based on the AHRQ mortality measures, using enhanced administrative data from California, which include a POA indicator. We conducted a retrospective cohort study based on 2.07 million inpatient admissions between 1998 and 2000 in the California State Inpatient Database. The AHRQ IQI software was used to calculate risk-adjusted mortality rates using either (1) routine administrative data that included all the International Classification of Diseases (ICD)-9-CM codes or (2) enhanced administrative data that included only the ICD-9-CM codes representing preexisting conditions. The inclusion of the POA indicator frequently results in changes in the quality ranking of hospitals classified as high-quality or low-quality using routine administrative data. Twenty-seven percent (stroke) to 94% (coronary artery bypass graft) of hospitals classified as high-quality using routine administrative data were reclassified as intermediate- or low-quality hospitals using the enhanced administrative data. Twenty-five percent (congestive heart failure) to 76% (percutaneous coronary intervention) of hospitals classified as low-quality hospitals using enhanced administrative data were misclassified as intermediate-quality hospitals using routine administrative data. Despite the fact that the AHRQ IQIs were primarily intended to serve as a screening tool, they are being increasingly used to publicly report hospital quality. Our findings emphasize the need to improve the "quality" of administrative data by including a POA indicator if these data are to serve as the information infrastructure for quality reporting.
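
    The data-preparation step at the heart of the comparison can be sketched in a few lines: with a POA indicator, only diagnoses flagged as present on admission are retained for risk adjustment, so complications arising during the stay do not inflate apparent severity. The codes and flag values below are illustrative.

```python
# Building "enhanced" administrative data: keep only diagnosis codes flagged
# as present on admission (POA == "Y"). Codes below are illustrative ICD-9-CM.

admission_codes = [
    ("428.0", "Y"),    # congestive heart failure, present on admission
    ("998.59", "N"),   # postoperative infection, arose during the stay
    ("401.9", "Y"),    # hypertension, present on admission
]

preexisting = [code for code, poa in admission_codes if poa == "Y"]
print(preexisting)     # ['428.0', '401.9']
```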

  24. Groundwater-quality data from the National Water-Quality Assessment Project, January through December 2014 and select quality-control data from May 2012 through December 2014

    USGS Publications Warehouse

    Arnold, Terri L.; Bexfield, Laura M.; Musgrove, MaryLynn; Lindsey, Bruce D.; Stackelberg, Paul E.; Barlow, Jeannie R.; Desimone, Leslie A.; Kulongoski, Justin T.; Kingsbury, James A.; Ayotte, Joseph D.; Fleming, Brandon J.; Belitz, Kenneth

    2017-10-05

    Groundwater-quality data were collected from 559 wells as part of the National Water-Quality Assessment Project of the U.S. Geological Survey National Water-Quality Program from January through December 2014. The data were collected from four types of well networks: principal aquifer study networks, which are used to assess the quality of groundwater used for public water supply; land-use study networks, which are used to assess land-use effects on shallow groundwater quality; major aquifer study networks, which are used to assess the quality of groundwater used for domestic supply; and enhanced trends networks, which are used to evaluate the time scales during which groundwater quality changes. Groundwater samples were analyzed for a large number of water-quality indicators and constituents, including major ions, nutrients, trace elements, volatile organic compounds, pesticides, radionuclides, and some constituents of special interest (arsenic speciation, chromium [VI] and perchlorate). These groundwater-quality data, along with data from quality-control samples, are tabulated in this report and in an associated data release.

  25. Quality assessment concept of the World Data Center for Climate and its application to CMIP5 data

    NASA Astrophysics Data System (ADS)

    Stockhause, M.; Höck, H.; Toussaint, F.; Lautenschlager, M.

    2012-08-01

    The preservation of data in a high state of quality suitable for interdisciplinary use is one of the most pressing and challenging current issues in long-term archiving. For high-volume data such as climate model data, the data and data replicas are no longer stored centrally but are distributed over several local data repositories, e.g. the data of the Climate Model Intercomparison Project Phase 5 (CMIP5). The most important part of the data is to be archived, assigned a DOI, and published according to the World Data Center for Climate's (WDCC) application of the DataCite regulations. The data quality assessment, an integral part of WDCC's data publication process, was adapted to the requirements of a federated data infrastructure. A concept for a distributed and federated quality assessment procedure was developed, in which the workload and responsibility for quality control are shared between the three primary CMIP5 data centers: the Program for Climate Model Diagnosis and Intercomparison (PCMDI), the British Atmospheric Data Centre (BADC), and WDCC. This distributed quality control concept, its pilot implementation for CMIP5, and first experiences are presented. The distributed quality control approach is capable of identifying data inconsistencies and makes quality results immediately available to data creators, data users and data infrastructure managers. Continuous publication of new data versions and slow data replication prevent the quality control checks from completing. This, together with ongoing developments of the data and metadata infrastructure, requires adaptations in the code and concept of the distributed quality control approach.

  26. Ambient Air Quality Data Inventory

    EPA Pesticide Factsheets

    The Office of Air and Radiation's (OAR) Ambient Air Quality Data (Current) contains ambient air pollution data collected by EPA, other federal agencies, and state, local, and tribal air pollution control agencies. Its component data sets have been collected over the years from approximately 10,000 monitoring sites, of which approximately 5,000 are currently active. OAR's Office of Air Quality Planning and Standards (OAQPS) and other internal and external users rely on these data to assess air quality, assist in Attainment/Non-Attainment designations, evaluate State Implementation Plans for Non-Attainment Areas, perform modeling for permit review analysis, and support other air quality management functions. Air quality information is also used to prepare reports for Congress as mandated by the Clean Air Act. This asset covers air quality data collected after 1980, when the Clean Air Act requirements for monitoring were significantly modified. Air quality data from the Agency's early years (1970s) remain available (see OAR PRIMARY DATA ASSET: Ambient Air Quality Data -- Historical), but because of technical and definitional differences the two data assets are not directly comparable. The Clean Air Act of 1970 provided initial authority for monitoring air quality for Conventional Air Pollutants (CAPs), for which EPA has promulgated National Ambient Air Quality Standards (NAAQS). Requirements for monitoring visibility-related parameters were added in 1977.

  27. Revisiting the Procedures for the Vector Data Quality Assurance in Practice

    NASA Astrophysics Data System (ADS)

    Erdoğan, M.; Torun, A.; Boyacı, D.

    2012-07-01

    Immense use of topographical data in spatial data visualization, business GIS (Geographic Information Systems) solutions and applications, and mobile and location-based services has forced topo-data providers to create standard, up-to-date and complete data sets in a sustainable frame. Data quality has been studied and researched for more than two decades; there are countless references on its semantics, its conceptual and logical representations, and many applications to spatial databases and GIS. However, there is a gap between research and practice in the sense of spatial data quality, which increases the costs and decreases the efficiency of data production. Spatial data quality is well known to academia and industry, but usually in different contexts. Research on spatial data quality has stated several issues of practical use, such as descriptive information, metadata, fulfilment of spatial relationships among data, integrity measures, geometric constraints, etc. Industry and data producers realize them in three stages: pre-, co- and post-data capturing. The pre-data-capturing stage covers semantic modelling, data definition, cataloguing, modelling, and data dictionary and schema creation processes. The co-data-capturing stage covers general rules of spatial relationships; data- and model-specific rules such as topologic and model-building relationships; geometric thresholds; data extraction guidelines; and object-object, object-belonging-class, object-non-belonging-class and class-class relationships to be taken into account during data capturing. The post-data-capturing stage covers specified QC (quality check) benchmarks and checking compliance with general and specific rules. The vector data quality criteria differ between the views of producers and users, but these criteria are generally driven by the needs, expectations and feedback of the users. This paper presents a practical method that closes the gap between theory and practice. Developing spatial data quality concepts into applications requires a conceptual, a logical and, most importantly, a physical data model, rules, and knowledge of their realization in the form of geo-spatial data. The applicable metrics and thresholds are determined on this concrete base. This study discusses the application of geo-spatial data quality issues and QA (quality assurance) and QC procedures in topographic data production. First, we introduce the MGCP (Multinational Geospatial Co-production Program) data profile of the NATO (North Atlantic Treaty Organization) DFDD (DGIWG Feature Data Dictionary), the requirements of the data owner, the view of data producers for both data capturing and QC, and finally QA to fulfil user needs. Then, our practical new approach, which divides quality into three phases, is introduced. Finally, the implementation of our approach to accomplish the metrics, measures and thresholds of the quality definitions is discussed. In this paper, geometry and semantics quality in particular, and the quality control procedures that can be performed by producers, are discussed. Some applicable best practices that we experienced, covering techniques of quality control and regulations that define the objectives and data production procedures, are given in the final remarks. These quality control procedures should include visual checks of the source data, the captured vector data and printouts; automatic checks that can be performed by software; and semi-automatic checks involving interaction with quality control personnel. Finally, these quality control procedures should ensure the geometric, semantic, attribution and metadata quality of vector data.

  28. Calculating the quality of public high-throughput sequencing data to obtain a suitable subset for reanalysis from the Sequence Read Archive

    PubMed Central

    Nakazato, Takeru; Bono, Hidemasa

    2017-01-01

    It is important for public data repositories to promote the reuse of archived data. In the growing field of omics science, however, the increasing number of submissions of high-throughput sequencing (HTSeq) data to public repositories prevents users from choosing a suitable data set from among the large number of search results. Repository users need to be able to set a threshold to reduce the number of results to obtain a suitable subset of high-quality data for reanalysis. We calculated the quality of sequencing data archived in a public data repository, the Sequence Read Archive (SRA), by using the quality control software FastQC. We obtained quality values for 1 171 313 experiments, which can be used to evaluate the suitability of data for reuse. We also visualized the data distribution in SRA by integrating the quality information and metadata of experiments and samples. We provide quality information for all of the archived sequencing data, enabling users to obtain sequencing data of sufficient quality for reanalyses. The calculated quality data are available to the public in various formats. Our data also provide an example of enhancing the reuse of public data by adding metadata to published research data by a third party. PMID:28449062
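
    The thresholding workflow the authors enable might look like the following hedged sketch: given per-experiment quality values (as derived with FastQC), keep only accessions whose mean base quality clears a user-chosen Phred cutoff. Accession IDs, values, and the cutoff are invented; the released files may use different fields and formats.

```python
# Selecting a reanalysis-ready subset by mean per-base Phred quality.
# All accessions and scores are hypothetical.

experiments = {
    "SRX0000001": 36.2,
    "SRX0000002": 21.5,
    "SRX0000003": 33.8,
}

PHRED_THRESHOLD = 30.0
suitable = sorted(acc for acc, q in experiments.items() if q >= PHRED_THRESHOLD)
print(suitable)    # ['SRX0000001', 'SRX0000003']
```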

  29. Data Quality Monitoring in Clinical Trials: Has It Been Worth It? An Evaluation and Prediction of the Future by All Stakeholders

    PubMed Central

    Kalali, Amir; West, Mark; Walling, David; Hilt, Dana; Engelhardt, Nina; Alphs, Larry; Loebel, Antony; Vanover, Kim; Atkinson, Sarah; Opler, Mark; Sachs, Gary; Nations, Kari; Brady, Chris

    2016-01-01

    This paper summarizes the results of the CNS Summit Data Quality Monitoring Workgroup analysis of current data quality monitoring techniques used in central nervous system (CNS) clinical trials. Based on audience polls conducted at the CNS Summit 2014, the panel determined that current techniques used to monitor data and quality in clinical trials are broad, uncontrolled, and lack independent verification. The majority of those polled endorse the value of monitoring data. Case examples of current data quality methodology are presented and discussed. Perspectives of pharmaceutical companies and trial sites regarding data quality monitoring are presented. Potential future developments in CNS data quality monitoring are described. Increased utilization of biomarkers as objective outcomes and for patient selection is considered to be the most impactful development in data quality monitoring over the next 10 years. Additional future outcome measures and patient selection approaches are discussed. PMID:27413584

  30. [Informatics data quality and management].

    PubMed

    Feng, Rung-Chuang

    2009-06-01

    While the quality of data affects every aspect of business, it is frequently overlooked in terms of customer data integration, data warehousing, business intelligence and enterprise applications. Regardless of which data terms are used, a high level of data quality is a critical base condition essential to satisfy user needs and facilitate the development of effective applications. In this paper, the author introduces methods, a management framework and the major factors involved in data quality assessment. The author also integrates expert opinions to develop data quality assessment tools.

  31. Data-quality measures for stakeholder-implemented watershed-monitoring programs

    USGS Publications Warehouse

    Greve, Adrienne I.

    2002-01-01

    Community-based watershed groups, many of which collect environmental data, have steadily increased in number over the last decade. The data generated by these programs are often underutilized due to uncertainty in the quality of data produced. The incorporation of data-quality measures into stakeholder monitoring programs lends statistical validity to data. Data-quality measures are divided into three steps: quality assurance, quality control, and quality assessment. The quality-assurance step attempts to control sources of error that cannot be directly quantified. This step is part of the design phase of a monitoring program and includes clearly defined, quantifiable objectives, sampling sites that meet the objectives, standardized protocols for sample collection, and standardized laboratory methods. Quality control (QC) is the collection of samples to assess the magnitude of error in a data set due to sampling, processing, transport, and analysis. In order to design a QC sampling program, a series of issues needs to be considered: (1) potential sources of error, (2) the type of QC samples, (3) inference space, (4) the number of QC samples, and (5) the distribution of the QC samples. Quality assessment is the process of evaluating quality-assurance measures and analyzing the QC data in order to interpret the environmental data. Quality assessment has two parts: one that is conducted on an ongoing basis as the monitoring program is running, and one that is conducted during the analysis of environmental data. The discussion of the data-quality measures is followed by an example of their application to a monitoring program in the Big Thompson River watershed of northern Colorado.
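
    One piece of the quality-assessment arithmetic can be made concrete. A common way to evaluate a replicate QC pair is the relative percent difference, RPD = |x1 - x2| / ((x1 + x2) / 2) * 100, and a field blank is typically compared against the detection limit; the sketch below applies both, with invented values and acceptance limits (a program's own criteria may differ).

```python
# Replicate-pair RPD and a field-blank check, two common QC evaluations.
# Values, units, and limits are hypothetical.

def rpd(x1, x2):
    """Relative percent difference of a sample/replicate pair."""
    return abs(x1 - x2) / ((x1 + x2) / 2) * 100

sample, replicate = 4.8, 5.2     # e.g., nitrate in mg/L
blank = 0.02                     # field blank result, mg/L
detection_limit = 0.05

print(f"replicate RPD = {rpd(sample, replicate):.1f}%")           # 8.0%
print("blank below detection limit:", blank < detection_limit)   # True
```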

  32. ESIP Information Quality Cluster (IQC)

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.; Peng, Ge; Moroni, David F.

    2016-01-01

    The Information Quality Cluster (IQC) within the Federation of Earth Science Information Partners (ESIP) was initially formed in 2011 and has evolved significantly over time. The current objectives of the IQC are to: 1. Actively evaluate community data quality best practices and standards; 2. Improve capture, description, discovery, and usability of information about data quality in Earth science data products; 3. Ensure that producers of data products are aware of standards and best practices for conveying data quality, and that data providers, distributors, and intermediaries establish, improve and evolve mechanisms to assist users in discovering and understanding data quality information; and 4. Consistently provide guidance to data managers and stewards on how best to implement data quality standards and best practices to ensure and improve the maturity of their data products. The activities of the IQC include: 1. Identification of additional needs for consistently capturing, describing, and conveying quality information through use case studies with broad and diverse applications; 2. Establishing and providing community-wide guidance on the roles and responsibilities of key players and stakeholders, including users and management; 3. Prototyping the conveyance of quality information to users in a more consistent, transparent, and digestible manner; 4. Establishing a baseline of standards and best practices for data quality; 5. Evaluating recommendations from NASA's Data Quality Working Group (DQWG) in a broader context and proposing possible implementations; and 6. Engaging data providers, data managers, and data user communities as resources to improve our standards and best practices. Following the principles of openness of the ESIP Federation, the IQC invites all individuals interested in improving the capture, description, discovery, and usability of information about data quality in Earth science data products to participate in its activities.

  33. Data Quality- and Master Data Management - A Hospital Case.

    PubMed

    Arthofer, Klaus; Girardi, Dominic

    2017-01-01

    Poor data quality prevents the analysis of data for decisions that are critical for business, and it has a negative impact on business processes. Nevertheless, the maturity level of data quality and master data management is still insufficient in many organizations. This article discusses the corresponding maturity of companies and a management cycle integrating data quality and master data management, in a case dealing with benchmarking in hospitals. In conclusion, if data quality and master data are not properly managed, structured data should not be acquired in the first place, given the added expense and complexity.

  34. Better Data Quality for Better Healthcare Research Results - A Case Study.

    PubMed

    Hart, Robert; Kuo, Mu-Hsing

    2017-01-01

    Electronic Health Records (EHRs) have been identified as a key tool to collect data for healthcare research. However, EHR data must be of sufficient quality to support quality research results. Island Health, BC, Canada has invested and continues to invest in the development of solutions to address the quality of its EHR data and support high quality healthcare studies. This paper examines Island Health's data quality engine, its development and its successful implementation.

  35. A scope classification of data quality requirements for food composition data.

    PubMed

    Presser, Karl; Hinterberger, Hans; Weber, David; Norrie, Moira

    2016-02-15

    Data quality is an important issue when managing food composition data since the usage of the data can have a significant influence on policy making and further research. Although several frameworks for data quality have been proposed, general tools and measures are still lacking. As a first step in this direction, we investigated data quality requirements for an information system to manage food composition data, called FoodCASE. The objective of our investigation was to find out if different requirements have different impacts on the intrinsic data quality that must be regarded during data quality assessment and how these impacts can be described. We refer to the resulting classification with its categories as the scope classification of data quality requirements. As proof of feasibility, the scope classification has been implemented in the FoodCASE system.

  36. Using Feedback from Data Consumers to Capture Quality Information on Environmental Research Data

    NASA Astrophysics Data System (ADS)

    Devaraju, A.; Klump, J. F.

    2015-12-01

    Data quality information is essential to facilitate reuse of Earth science data. Recorded quality information must be sufficient for other researchers to select suitable data sets for their analysis and confirm the results and conclusions. In the research data ecosystem, several entities are responsible for data quality. Data producers (researchers and agencies) play a major role in this aspect, as they often include validation checks or data cleaning as part of their work. It is possible that quality information is not supplied with published data sets; if it is available, the descriptions might be incomplete, ambiguous, or address only specific quality aspects. Data repositories have built infrastructures to share data, but not all of them assess data quality; they normally provide guidelines for documenting quality information. Some suggest that scholarly and data journals should take a role in ensuring data quality by involving reviewers to assess data sets used in articles, and by incorporating data quality criteria in the author guidelines. However, this mechanism primarily addresses data sets submitted to journals. We believe that data consumers will complement existing entities in assessing and documenting the quality of published data sets. This has been adopted in crowd-sourced platforms such as Zooniverse, OpenStreetMap, Wikipedia, Mechanical Turk and Tomnod. This paper presents a framework designed based on open source tools to capture and share data users' feedback on the application and assessment of research data. The framework comprises a browser plug-in, a web service and a data model, such that feedback can be easily reported, retrieved and searched. The feedback records are also made available as Linked Data to promote integration with other sources on the Web. Vocabularies from Dublin Core and PROV-O are used to clarify the source and attribution of feedback. The application of the framework is illustrated with CSIRO's Data Access Portal.
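
    To suggest what a single feedback record might look like on the wire, here is a hedged JSON-LD sketch using the Dublin Core and PROV-O vocabularies the abstract names. The record structure, properties, and URIs are assumptions, not the authors' actual data model.

```python
# A hypothetical consumer-feedback record as JSON-LD, reusing dcterms and
# prov terms to state what the feedback is about and who contributed it.
import json

feedback = {
    "@context": {
        "dcterms": "http://purl.org/dc/terms/",
        "prov": "http://www.w3.org/ns/prov#",
    },
    "@id": "http://example.org/feedback/42",
    "@type": "prov:Entity",
    "dcterms:subject": "http://example.org/dataset/soil-moisture-v2",
    "dcterms:description": "Sensor drift after 2014-06; usable with correction.",
    "dcterms:created": "2015-08-03",
    "prov:wasAttributedTo": {"@id": "http://example.org/user/researcher-7"},
}
print(json.dumps(feedback, indent=2))
```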

  37. Application of ESE Data and Tools to Air Quality Management: Services for Helping the Air Quality Community use ESE Data (SHAirED)

    NASA Technical Reports Server (NTRS)

    Falke, Stefan; Husar, Rudolf

    2011-01-01

    The goal of this REASoN applications and technology project is to deliver and use Earth Science Enterprise (ESE) data and tools in support of air quality management. Its scope falls within the domain of air quality management and aims to develop a federated air quality information sharing network that includes data from NASA, EPA, US States and others. Project goals were achieved through access to satellite and ground observation data, web services information technology, interoperability standards, and air quality community collaboration. In contributing to a network of NASA ESE data in support of particulate air quality management, the project developed access to distributed data, built Web infrastructure, and created tools for data processing and analysis. The key technologies used in the project include emerging web services for developing self-describing and modular data access and processing tools, and service-oriented architecture for chaining web services together to assemble customized air quality management applications. The technology and tools required for this project were developed within DataFed.net, a shared infrastructure that supports collaborative atmospheric data sharing and processing web services. Much of the collaboration was facilitated through community interactions in the Federation of Earth Science Information Partners (ESIP) Air Quality Workgroup. The main activities during the project that successfully advanced DataFed, enabled air quality applications and established community-oriented infrastructures were: developing access to distributed data (surface and satellite); building Web infrastructure to support data access, processing and analysis; creating tools for data processing and analysis; and fostering air quality community collaboration and interoperability.

  38. 40 CFR 51.115 - Air quality data and projections.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 2 2012-07-01 2012-07-01 false Air quality data and projections. 51... quality data and projections. (a) Each plan must contain a summary of data showing existing air quality. (b) Each plan must: (1) Contain a summary of air quality concentrations expected to result from...

  39. 40 CFR 51.115 - Air quality data and projections.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 2 2011-07-01 2011-07-01 false Air quality data and projections. 51... quality data and projections. (a) Each plan must contain a summary of data showing existing air quality. (b) Each plan must: (1) Contain a summary of air quality concentrations expected to result from...

  40. 40 CFR 51.115 - Air quality data and projections.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 2 2013-07-01 2013-07-01 false Air quality data and projections. 51... quality data and projections. (a) Each plan must contain a summary of data showing existing air quality. (b) Each plan must: (1) Contain a summary of air quality concentrations expected to result from...

  41. 40 CFR 51.115 - Air quality data and projections.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 2 2014-07-01 2014-07-01 false Air quality data and projections. 51... quality data and projections. (a) Each plan must contain a summary of data showing existing air quality. (b) Each plan must: (1) Contain a summary of air quality concentrations expected to result from...

  42. 76 FR 28696 - Approval and Promulgation of Air Quality Implementation Plans; California; Determination of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... based on complete, quality-assured and certified ambient air quality monitoring data for 2007-2009... certain air quality monitoring data because they meet the criteria for ozone exceptional events that are... certified monitoring data. A violation occurs when the ambient ozone air quality monitoring data show...

  43. 40 CFR 51.115 - Air quality data and projections.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 2 2010-07-01 2010-07-01 false Air quality data and projections. 51... quality data and projections. (a) Each plan must contain a summary of data showing existing air quality. (b) Each plan must: (1) Contain a summary of air quality concentrations expected to result from...

  4. Calculating the quality of public high-throughput sequencing data to obtain a suitable subset for reanalysis from the Sequence Read Archive.

    PubMed

    Ohta, Tazro; Nakazato, Takeru; Bono, Hidemasa

    2017-06-01

    It is important for public data repositories to promote the reuse of archived data. In the growing field of omics science, however, the increasing number of submissions of high-throughput sequencing (HTSeq) data to public repositories prevents users from choosing a suitable data set from among the large number of search results. Repository users need to be able to set a threshold to reduce the number of results to obtain a suitable subset of high-quality data for reanalysis. We calculated the quality of sequencing data archived in a public data repository, the Sequence Read Archive (SRA), by using the quality control software FastQC. We obtained quality values for 1 171 313 experiments, which can be used to evaluate the suitability of data for reuse. We also visualized the data distribution in SRA by integrating the quality information and metadata of experiments and samples. We provide quality information for all of the archived sequencing data, which enables users to obtain sequencing data of sufficient quality for reanalysis. The calculated quality data are available to the public in various formats. Our data also provide an example of enhancing the reuse of public data by adding metadata to published research data by a third party. © The Authors 2017. Published by Oxford University Press.
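
    As a rough illustration of the threshold-based filtering described above, the sketch below selects experiments whose mean Phred quality meets a cutoff; the record layout, accession values, and the Phred-30 threshold are illustrative assumptions rather than the authors' actual schema.

        # Sketch: filter SRA-like experiments by a per-base quality threshold.
        # The record layout and threshold are illustrative assumptions, not
        # the authors' actual schema.
        experiments = [
            {"accession": "SRX000001", "mean_phred": 36.2},
            {"accession": "SRX000002", "mean_phred": 21.5},
            {"accession": "SRX000003", "mean_phred": 33.0},
        ]

        QUALITY_THRESHOLD = 30.0  # Phred 30 ~ 99.9% base-call accuracy

        suitable = [e for e in experiments if e["mean_phred"] >= QUALITY_THRESHOLD]
        for e in suitable:
            print(e["accession"], e["mean_phred"])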

  5. Data Quality Screening Service

    NASA Technical Reports Server (NTRS)

    Strub, Richard; Lynnes, Christopher; Hearty, Thomas; Won, Young-In; Fox, Peter; Zednik, Stephan

    2013-01-01

    A report describes the Data Quality Screening Service (DQSS), which is designed to help automate the filtering of remote sensing data on behalf of science users. Whereas this process often involves much research through quality documents followed by laborious coding, the DQSS is a Web Service that provides data users with data pre-filtered to their particular criteria, while at the same time guiding the user with filtering recommendations from the cognizant data experts. The DQSS design is based on a formal semantic Web ontology that describes data fields and the quality fields for applying quality control within a data product. The accompanying code base handles several remote sensing datasets and quality control schemes for data products stored in Hierarchical Data Format (HDF), a common format for NASA remote sensing data. Together, the ontology and code support a variety of quality control schemes through Boolean expressions whose operands are simple, reusable conditional expressions. Additional datasets are added to the DQSS simply by registering instances in the ontology if they follow a quality scheme that is already modeled in the ontology. New quality schemes are added by extending the ontology and adding code for each new scheme.
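
    The screening idea can be sketched as follows, assuming a retrieval array with a companion quality-flag array of the kind found in many HDF products; the flag semantics, field names, and criteria are illustrative assumptions, not the DQSS ontology itself.

        import numpy as np

        # Sketch: screen a retrieval array using its companion quality-flag
        # array, combining simple conditional expressions with Boolean
        # operators, in the spirit of the DQSS design. Flag semantics here
        # are illustrative.
        data = np.array([310.2, 295.7, 301.4, 288.9])
        quality_flag = np.array([0, 1, 0, 2])    # 0 = best, 1 = good, 2 = do not use
        cloud_fraction = np.array([0.1, 0.4, 0.9, 0.2])

        # User criteria expressed as reusable conditional expressions:
        good_quality = quality_flag <= 1
        mostly_clear = cloud_fraction < 0.5

        screened = np.where(good_quality & mostly_clear, data, np.nan)
        print(screened)  # values failing either condition become NaN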

  6. [Quality assurance in interventional cardiology].

    PubMed

    Gülker, H

    2009-10-01

    Quality assurance in clinical studies aiming at the approval of pharmaceutical products is subject to strict rules, controls, and auditing regulations. Comparable instruments to ensure the quality of diagnostic and therapeutic procedures are not available in interventional cardiology, nor in other fields of cardiovascular medicine. Quality assurance consists merely of "quality registers" with basic data that are not externally controlled. Based on the experience of clinical studies and their long history of standardization, it must be assumed that these data may be severely flawed and thus inappropriate for setting standards for diagnostic and therapeutic strategies. The precondition for quality assurance is quality data. In invasive coronary angiography and intervention, the medical indications, the decision between interventional and surgical revascularization, the technical performance, and the after-care are essential aspects affecting the quality of diagnostics and therapy. Quality data are externally controlled data. Collecting quality data requires an appropriate infrastructure, which does not currently exist; building and sustaining such an infrastructure requires investment. As long as there is no infrastructure and no investment, there will be no "quality data", merely registers of data that have not been shown to provide a basis for significant assurance and enhancement of quality in interventional coronary cardiology. Georg Thieme Verlag KG Stuttgart, New York.

  7. Guidance on Data Quality Assessment for Life Cycle Inventory ...

    EPA Pesticide Factsheets

    Data quality within Life Cycle Assessment (LCA) is a significant issue for the future support and development of LCA as a decision support tool and its wider adoption within industry. In response to current data quality standards such as the ISO 14000 series, various entities within the LCA community have developed different methodologies to address and communicate the data quality of Life Cycle Inventory (LCI) data. Despite advances in this field, the LCA community is still plagued by the lack of reproducible data quality results and documentation. To address these issues, US EPA has created this guidance to further support reproducible life cycle inventory data quality results and to inform users of the proper application of the US EPA supported data quality system. The work for this report was begun in December 2014 and completed as of April 2016. The updated data quality system includes a novel approach to the pedigree matrix by addressing data quality at the flow and the process level. Flow level indicators address source reliability, temporal correlation, geographic correlation, technological correlation and data sampling methods. The process level indicators address the level of review the unit process has undergone and its completeness. This guidance is designed to be updatable as part of the LCA Research Center’s continuing commitment to data quality advancements. Life cycle assessment is increasingly being used as a tool to identify areas of
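
    A minimal sketch of flow-level pedigree scoring along the five indicators named above follows; the 1-5 scale orientation and the simple mean aggregation are illustrative assumptions, not the aggregation procedure prescribed by the guidance.

        # Sketch: flow-level pedigree scoring. Indicator names follow the
        # guidance described above; the 1-5 scale and the simple mean
        # aggregation are illustrative assumptions, not the EPA-prescribed
        # procedure.
        FLOW_INDICATORS = (
            "reliability",     # source reliability
            "temporal",        # temporal correlation
            "geographic",      # geographic correlation
            "technological",   # technological correlation
            "sampling",        # data sampling methods
        )

        def flow_quality_score(scores: dict) -> float:
            """Aggregate per-indicator pedigree scores (1 = best, 5 = worst)."""
            missing = [k for k in FLOW_INDICATORS if k not in scores]
            if missing:
                raise ValueError(f"missing indicators: {missing}")
            return sum(scores[k] for k in FLOW_INDICATORS) / len(FLOW_INDICATORS)

        example = {"reliability": 1, "temporal": 2, "geographic": 1,
                   "technological": 3, "sampling": 2}
        print(flow_quality_score(example))  # 1.8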

  8. Groundwater quality data from the National Water-Quality Assessment Project, May 2012 through December 2013

    USGS Publications Warehouse

    Arnold, Terri L.; Desimone, Leslie A.; Bexfield, Laura M.; Lindsey, Bruce D.; Barlow, Jeannie R.; Kulongoski, Justin T.; Musgrove, MaryLynn; Kingsbury, James A.; Belitz, Kenneth

    2016-06-20

    Groundwater-quality data were collected from 748 wells as part of the National Water-Quality Assessment Project of the U.S. Geological Survey National Water-Quality Program from May 2012 through December 2013. The data were collected from four types of well networks: principal aquifer study networks, which assess the quality of groundwater used for public water supply; land-use study networks, which assess land-use effects on shallow groundwater quality; major aquifer study networks, which assess the quality of groundwater used for domestic supply; and enhanced trends networks, which evaluate the time scales during which groundwater quality changes. Groundwater samples were analyzed for a large number of water-quality indicators and constituents, including major ions, nutrients, trace elements, volatile organic compounds, pesticides, and radionuclides. These groundwater quality data are tabulated in this report. Quality-control samples also were collected; data from blank and replicate quality-control samples are included in this report.

  9. Analytical approaches to quality assurance and quality control in rangeland monitoring data

    USDA-ARS?s Scientific Manuscript database

    Producing quality data to support land management decisions is the goal of every rangeland monitoring program. However, the results of quality assurance (QA) and quality control (QC) efforts to improve data quality are rarely reported. The purpose of QA and QC is to prevent and describe non-sampling...

  10. Vulnerable patients' perceptions of health care quality and quality data.

    PubMed

    Raven, Maria Catherine; Gillespie, Colleen C; DiBennardo, Rebecca; Van Busum, Kristin; Elbel, Brian

    2012-01-01

    Little is known about how patients served by safety-net hospitals utilize and respond to hospital quality data. To understand how vulnerable, lower income patients make health care decisions and define quality of care and whether hospital quality data factor into such decisions and definitions. Mixed quantitative and qualitative methods were used to gather primary data from patients at an urban, tertiary-care safety-net hospital. The study hospital is a member of the first public hospital system to voluntarily post hospital quality data online for public access. Patients were recruited from outpatient and inpatient clinics. Surveys were used to collect data on participants' sociodemographic characteristics, health literacy, health care experiences, and satisfaction variables. Focus groups were used to explore a representative sample of 24 patients' health care decision making and views of quality. Data from focus group transcripts were iteratively coded and analyzed by the authors. Focus group participants were similar to the clinic's broader diverse, low-income patient population. Participants reported exercising choice in making decisions about where to seek health care. Multiple sources influenced decision-making processes including participants' own beliefs and values, social influences, and prior experiences. Hospital quality data were notably absent as a source of influence in health care decision making for this population, largely because participants were unaware of its existence. Participants' views of hospital quality were influenced by the quality and efficiency of services provided (with an emphasis on the doctor-patient relationship) and patient centeredness. When presented with it, patients appreciated the hospital quality data and, with guidance, were interested in incorporating it into health care decision making. Results suggest directions for optimizing the presentation, content, and availability of hospital quality data. Future research will explore how similar populations form and make choices based on presentation of hospital quality data.

  11. Designing a Clinical Data Warehouse Architecture to Support Quality Improvement Initiatives.

    PubMed

    Chelico, John D; Wilcox, Adam B; Vawdrey, David K; Kuperman, Gilad J

    2016-01-01

    Clinical data warehouses, initially directed towards clinical research or financial analyses, are evolving to support quality improvement efforts, and must now address the quality improvement life cycle. In addition, data that are needed for quality improvement often do not reside in a single database, requiring easier methods to query data across multiple disparate sources. We created a virtual data warehouse at NewYork Presbyterian Hospital that allowed us to bring together data from several source systems throughout the organization. We also created a framework to match the maturity of a data request in the quality improvement life cycle to the proper tools needed for each request. As projects progress through the Define, Measure, Analyze, Improve, Control stages of quality improvement, resources are matched to the data needs at each step. We describe the analysis and design creating a robust model for applying clinical data warehousing to quality improvement.

  12. Designing a Clinical Data Warehouse Architecture to Support Quality Improvement Initiatives

    PubMed Central

    Chelico, John D.; Wilcox, Adam B.; Vawdrey, David K.; Kuperman, Gilad J.

    2016-01-01

    Clinical data warehouses, initially directed towards clinical research or financial analyses, are evolving to support quality improvement efforts, and must now address the quality improvement life cycle. In addition, data that are needed for quality improvement often do not reside in a single database, requiring easier methods to query data across multiple disparate sources. We created a virtual data warehouse at NewYork Presbyterian Hospital that allowed us to bring together data from several source systems throughout the organization. We also created a framework to match the maturity of a data request in the quality improvement life cycle to the proper tools needed for each request. As projects progress through the Define, Measure, Analyze, Improve, Control stages of quality improvement, resources are matched to the data needs at each step. We describe the analysis and design creating a robust model for applying clinical data warehousing to quality improvement. PMID:28269833

  13. The Significance of Quality Assurance within Model Intercomparison Projects at the World Data Centre for Climate (WDCC)

    NASA Astrophysics Data System (ADS)

    Toussaint, F.; Hoeck, H.; Stockhause, M.; Lautenschlager, M.

    2014-12-01

    The classical goals of a quality assessment system in the data life cycle are (1) to encourage data creators to improve their quality assessment procedures to reach the next quality level and (2) to enable data consumers to decide whether a dataset has a quality that is sufficient for usage in the target application, i.e., to appraise the data usability for their own purpose. As the data volumes of projects and the interdisciplinarity of data usage grow, the need for homogeneous structure and standardised notation of data and metadata increases. This third aspect is especially valid for data repositories, as they manage data through machine agents. Checks for homogeneity and consistency in early parts of the workflow thus become essential to cope with today's data volumes. Selected parts of the workflow in the model intercomparison project CMIP5, the archival of the data for the interdisciplinary user community of the IPCC-DDC AR5, and the associated quality checks are reviewed. We compare data and metadata checks and relate different types of checks to their positions in the data life cycle. The project's data citation approach is included in the discussion, with a focus on the time needed to comply with the project's requirements for formal data citations and on the demand for such citations to be available. In order to make the quality assessments of different projects comparable, WDCC developed a generic Quality Assessment System. Based on the self-assessment approach of a maturity matrix, an objective and uniform quality level system for all data at WDCC is derived, which consists of five maturity quality levels.

  14. Quality data collection and management technology of aerospace complex product assembly process

    NASA Astrophysics Data System (ADS)

    Weng, Gang; Liu, Jianhua; He, Yongxi; Zhuang, Cunbo

    2017-04-01

    Aiming to solve the problems of difficult management and poor traceability of discrete assembly process quality data, a data collection and management method is proposed that takes the assembly process and the BOM as its core. The method includes a data collection approach based on workflow technology, a data model based on the BOM, and quality traceability of the assembly process. Finally, an assembly process quality data management system is developed, realizing effective control and management of quality information for the complex product assembly process.
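
    One plausible shape for a BOM-centered quality data model is sketched below; the node structure, record fields, and traceability walk are illustrative assumptions, not the system described in the paper.

        # Sketch: quality records keyed to BOM nodes so measurements can be
        # traced back through the assembly structure. The data model is an
        # illustrative assumption, not the paper's system.
        from dataclasses import dataclass, field
        from typing import Dict, List, Optional

        @dataclass
        class BomNode:
            part_id: str
            parent: Optional[str]                 # parent part in the BOM tree
            quality_records: List[dict] = field(default_factory=list)

        bom: Dict[str, BomNode] = {
            "assembly-A": BomNode("assembly-A", None),
            "bracket-B":  BomNode("bracket-B", "assembly-A"),
        }

        # A workflow step records a measurement against the part it touched:
        bom["bracket-B"].quality_records.append(
            {"step": "torque-check", "value": 42.1, "unit": "N*m", "pass": True}
        )

        def trace(part_id: str) -> List[str]:
            """Walk up the BOM to reconstruct the traceability path for a part."""
            path = []
            node = bom.get(part_id)
            while node is not None:
                path.append(node.part_id)
                node = bom.get(node.parent) if node.parent else None
            return path

        print(trace("bracket-B"))  # ['bracket-B', 'assembly-A']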

  15. Data Quality in Rare Diseases Registries.

    PubMed

    Kodra, Yllka; Posada de la Paz, Manuel; Coi, Alessio; Santoro, Michele; Bianchi, Fabrizio; Ahmed, Faisal; Rubinstein, Yaffa R; Weinbach, Jérôme; Taruscio, Domenica

    2017-01-01

    In the field of rare diseases, registries are considered a powerful tool for developing clinical research, facilitating the planning of appropriate clinical trials, and improving patient care and healthcare planning. High-quality data are therefore considered one of the most important elements in the establishment and maintenance of a rare disease registry. Data quality can be defined as the totality of features and characteristics of a data set that bear on its ability to satisfy the needs arising from the intended use of the data. In the context of registries, the 'product' is data, and quality refers to data quality, meaning that the data coming into the registry have been validated and are ready for use in analysis and research. The quality of data can be determined by assessing them against a number of dimensions: completeness; validity; coherence and comparability; accessibility; usefulness; timeliness; and prevention of duplicate records. Many other factors may influence the quality of a registry, including the development of a standardized Case Report Form and the security/safety controls of the informatics infrastructure. With the growing number of rare disease registries being established, there is a need to develop a quality validation process to evaluate the quality of each registry. A clear description of the registry is the first step in assessing data quality or the registry evaluation system. Here we report a template as a guide to help registry owners describe their registry.
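
    Two of the dimensions listed above, completeness and prevention of duplicate records, can be checked mechanically; the sketch below is illustrative, and the field names and duplicate key are assumptions.

        # Sketch: two of the data quality dimensions listed above
        # (completeness and duplicate prevention) checked over registry
        # records. Field names and the duplicate key are illustrative.
        from collections import Counter

        REQUIRED_FIELDS = ("patient_id", "diagnosis", "onset_date")

        records = [
            {"patient_id": "P1", "diagnosis": "ORPHA:558", "onset_date": "2015-03-01"},
            {"patient_id": "P2", "diagnosis": "ORPHA:558", "onset_date": None},
            {"patient_id": "P1", "diagnosis": "ORPHA:558", "onset_date": "2015-03-01"},
        ]

        def completeness(recs):
            """Share of required fields that are populated, across all records."""
            filled = sum(1 for r in recs for f in REQUIRED_FIELDS if r.get(f))
            return filled / (len(recs) * len(REQUIRED_FIELDS))

        def duplicates(recs):
            """Patient IDs occurring more than once (candidate duplicates)."""
            counts = Counter(r["patient_id"] for r in recs)
            return [pid for pid, n in counts.items() if n > 1]

        print(f"completeness: {completeness(records):.0%}")  # 89%
        print("duplicate ids:", duplicates(records))          # ['P1']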

  16. High quality data: An evaluation of AIM data quality and data quality procedures

    USDA-ARS?s Scientific Manuscript database

    The goal of every monitoring program is to collect high-quality data which can then be used to provide information to decision makers. The Bureau of Land Management (BLM) Assessment, Inventory, and Monitoring (AIM) program is one such data set which provides rangeland status, condition, and trend in...

  17. [The measurement of data quality in censuses of population and housing].

    PubMed

    1980-01-01

    The determination of data quality in population and housing censuses is discussed. Principal types of errors commonly found in census data are reviewed, and the parameters used to evaluate data quality are described. Various methods for measuring data quality are outlined, and possible applications of the methods are illustrated using Canadian examples.

  18. [Clinical trial data management and quality metrics system].

    PubMed

    Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan

    2015-11-01

    A data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g., study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, traceability, etc. Some frequently used general quality metrics are also introduced. This paper provides as much detail as possible for each metric, including its definition, purpose, evaluation, referenced benchmark, and recommended targets, in support of real practice. It is important that sponsors and data management service providers establish a robust, integrated clinical trial data quality management system to ensure sustainably high quality of clinical trial deliverables. It will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers by using objective metrics from real clinical trials. We hope this will be a significant input to accelerate the improvement of clinical trial data quality in the industry.
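
    As an illustration of such a metric, the sketch below computes a simple field-completeness rate against a recommended target; the records, field counts, and the 98% target are illustrative assumptions, not values from the paper.

        # Sketch: a simple completeness metric of the kind categorized above,
        # evaluated against a recommended target. The dataset, target, and
        # metric definition are illustrative assumptions.
        crf_pages = [
            {"subject": "001", "fields_expected": 40, "fields_entered": 40},
            {"subject": "002", "fields_expected": 40, "fields_entered": 36},
            {"subject": "003", "fields_expected": 40, "fields_entered": 39},
        ]

        TARGET = 0.98  # recommended target, illustrative

        entered = sum(p["fields_entered"] for p in crf_pages)
        expected = sum(p["fields_expected"] for p in crf_pages)
        completeness = entered / expected

        print(f"completeness = {completeness:.3f} (target {TARGET})")
        print("PASS" if completeness >= TARGET else "FLAG FOR REVIEW")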

  19. 48 CFR 246.470-2 - Quality evaluation data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Quality evaluation data... SYSTEM, DEPARTMENT OF DEFENSE CONTRACT MANAGEMENT QUALITY ASSURANCE Government Contract Quality Assurance 246.470-2 Quality evaluation data. The contract administration office shall establish a system for the...

  20. Quality of Big Data in health care.

    PubMed

    Sukumar, Sreenivas R; Natarajan, Ramachandran; Ferrell, Regina K

    2015-01-01

    The current trend in Big Data analytics, and in particular health information technology, is toward building sophisticated models, methods and tools for business, operational and clinical intelligence. However, the critical issue of the data quality required for these models is not getting the attention it deserves. The purpose of this paper is to highlight the issues of data quality in the context of Big Data health care analytics. The insights presented in this paper are the results of analytics work that was done in different organizations on a variety of health data sets. The data sets include Medicare and Medicaid claims, provider enrollment data sets from both public and private sources, and electronic health records from regional health centers accessed through partnerships with health care claims processing entities under health privacy protection guidelines. Assessment of data quality in health care has to consider: first, the entire lifecycle of health data; second, problems arising from errors and inaccuracies in the data itself; third, the source(s) and the pedigree of the data; and fourth, how the underlying purpose of data collection impacts the analytic processing and the knowledge expected to be derived. Automation in the form of data handling, storage, entry and processing technologies is to be viewed as a double-edged sword. At one level, automation can be a good solution, while at another level it can create a different set of data quality issues. Implementation of health care analytics with Big Data is enabled by a road map that addresses the organizational and technological aspects of data quality assurance. The value derived from the use of analytics should be the primary determinant of data quality. Based on this premise, health care enterprises embracing Big Data should have a road map for a systematic approach to data quality. Health care data quality problems can be so specific that organizations might have to build their own custom software or data quality rule engines. Today, data quality issues are diagnosed and addressed in a piecemeal fashion. The authors recommend a data lifecycle approach and provide a road map that is better suited to the dimensions of Big Data and fits different stages in the analytical workflow.
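
    Since the authors note that organizations may need to build their own data quality rule engines, here is a minimal sketch of one; the rules and claim records are illustrative assumptions.

        # Sketch: a tiny data-quality rule engine of the kind the authors say
        # health care enterprises may need to build. Rules and claim records
        # are illustrative assumptions.
        from typing import Callable, Dict, List, Tuple

        Rule = Tuple[str, Callable[[Dict], bool]]  # (description, predicate)

        RULES: List[Rule] = [
            ("billed amount is non-negative", lambda c: c["amount"] >= 0),
            ("provider NPI is 10 digits",
             lambda c: len(c["npi"]) == 10 and c["npi"].isdigit()),
            ("service date present", lambda c: bool(c.get("service_date"))),
        ]

        def run_rules(claims: List[Dict]) -> List[str]:
            """Return one violation message per failed (claim, rule) pair."""
            violations = []
            for i, claim in enumerate(claims):
                for description, predicate in RULES:
                    if not predicate(claim):
                        violations.append(f"claim {i}: failed '{description}'")
            return violations

        claims = [
            {"amount": 120.0, "npi": "1234567893", "service_date": "2015-01-02"},
            {"amount": -5.0,  "npi": "12345",      "service_date": None},
        ]
        print("\n".join(run_rules(claims)))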

  1. Ambiguity of Quality in Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Leptoukh, Greg

    2010-01-01

    This slide presentation reviews some of the issues in quality of remote sensing data. Data "quality" is used in several different contexts in remote sensing data, with quite different meanings. At the pixel level, quality typically refers to a quality control process exercised by the processing algorithm, not an explicit declaration of accuracy or precision. File level quality is usually a statistical summary of the pixel-level quality but is of doubtful use for scenes covering large areal extents. Quality at the dataset or product level, on the other hand, usually refers to how accurately the dataset is believed to represent the physical quantities it purports to measure. This assessment often bears but an indirect relationship at best to pixel level quality. In addition to ambiguity at different levels of granularity, ambiguity is endemic within levels. Pixel-level quality terms vary widely, as do recommendations for use of these flags. At the dataset/product level, quality for low-resolution gridded products is often extrapolated from validation campaigns using high spatial resolution swath data, a suspect practice at best. Making use of quality at all levels is complicated by the dependence on application needs. We will present examples of the various meanings of quality in remote sensing data and possible ways forward toward a more unified and usable quality framework.

  2. Improving data quality and supervision of antiretroviral therapy sites in Malawi: an application of Lot Quality Assurance Sampling

    PubMed Central

    2012-01-01

    Background High quality program data is critical for managing, monitoring, and evaluating national HIV treatment programs. By 2009, the Malawi Ministry of Health had initiated more than 270,000 patients on HIV treatment at 377 sites. Quarterly supervision of these antiretroviral therapy (ART) sites ensures high quality care, but the time currently dedicated to exhaustive record review and data cleaning detracts from other critical components. The exhaustive record review is unlikely to be sustainable long term because of the resources required and increasing number of patients on ART. This study quantifies the current levels of data quality and evaluates Lot Quality Assurance Sampling (LQAS) as a tool to prioritize sites with low data quality, thus lowering costs while maintaining sufficient quality for program monitoring and patient care. Methods In January 2010, a study team joined supervision teams at 19 sites purposely selected to reflect the variety of ART sites. During the exhaustive data review, the time allocated to data cleaning and data discrepancies were documented. The team then randomly sampled 76 records from each site, recording secondary outcomes and the time required for sampling. Results At the 19 sites, only 1.2% of records had discrepancies in patient outcomes and 0.4% in treatment regimen. However, data cleaning took 28.5 hours in total, suggesting that data cleaning for all 377 ART sites would require over 350 supervision-hours quarterly. The LQAS tool accurately identified the sites with the low data quality, reduced the time for data cleaning by 70%, and allowed for reporting on secondary outcomes. Conclusions Most sites maintained high quality records. In spite of this, data cleaning required significant amounts of time with little effect on program estimates of patient outcomes. LQAS conserves resources while maintaining sufficient data quality for program assessment and management to allow for quality patient care. PMID:22776745
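
    The LQAS decision logic can be sketched as follows; the sample size of 76 records follows the study, while the decision threshold and the simulated error rates are illustrative assumptions.

        # Sketch: Lot Quality Assurance Sampling applied to ART record
        # review. n = 76 follows the study; the decision threshold and the
        # simulated error rates are illustrative assumptions.
        import random

        random.seed(1)
        N_SAMPLE = 76      # records sampled per site, as in the study
        THRESHOLD = 2      # flag site if more than this many sampled records err

        def review_site(records):
            """Sample records and decide whether the site needs full cleaning."""
            sample = random.sample(records, min(N_SAMPLE, len(records)))
            errors = sum(1 for r in sample if r["discrepant"])
            return errors > THRESHOLD

        # Simulate one high-quality and one low-quality site:
        good_site = [{"discrepant": random.random() < 0.01} for _ in range(500)]
        poor_site = [{"discrepant": random.random() < 0.15} for _ in range(500)]

        print("good site flagged:", review_site(good_site))
        print("poor site flagged:", review_site(poor_site))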

  3. Improving data quality and supervision of antiretroviral therapy sites in Malawi: an application of Lot Quality Assurance Sampling.

    PubMed

    Hedt-Gauthier, Bethany L; Tenthani, Lyson; Mitchell, Shira; Chimbwandira, Frank M; Makombe, Simon; Chirwa, Zengani; Schouten, Erik J; Pagano, Marcello; Jahn, Andreas

    2012-07-09

    High quality program data is critical for managing, monitoring, and evaluating national HIV treatment programs. By 2009, the Malawi Ministry of Health had initiated more than 270,000 patients on HIV treatment at 377 sites. Quarterly supervision of these antiretroviral therapy (ART) sites ensures high quality care, but the time currently dedicated to exhaustive record review and data cleaning detracts from other critical components. The exhaustive record review is unlikely to be sustainable long term because of the resources required and increasing number of patients on ART. This study quantifies the current levels of data quality and evaluates Lot Quality Assurance Sampling (LQAS) as a tool to prioritize sites with low data quality, thus lowering costs while maintaining sufficient quality for program monitoring and patient care. In January 2010, a study team joined supervision teams at 19 sites purposely selected to reflect the variety of ART sites. During the exhaustive data review, the time allocated to data cleaning and data discrepancies were documented. The team then randomly sampled 76 records from each site, recording secondary outcomes and the time required for sampling. At the 19 sites, only 1.2% of records had discrepancies in patient outcomes and 0.4% in treatment regimen. However, data cleaning took 28.5 hours in total, suggesting that data cleaning for all 377 ART sites would require over 350 supervision-hours quarterly. The LQAS tool accurately identified the sites with the low data quality, reduced the time for data cleaning by 70%, and allowed for reporting on secondary outcomes. Most sites maintained high quality records. In spite of this, data cleaning required significant amounts of time with little effect on program estimates of patient outcomes. LQAS conserves resources while maintaining sufficient data quality for program assessment and management to allow for quality patient care.

  4. Data Sources for an Environmental Quality Index: Availability, Quality, and Utility

    PubMed Central

    Rappazzo, Kristen; Messer, Lynne C.

    2011-01-01

    Objectives. An environmental quality index (EQI) for all counties in the United States is under development to explore the relationship between environmental insults and human health. The EQI is potentially useful for investigators researching health disparities to account for other concurrent environmental conditions. This article focused on the identification and assessment of data sources used in developing the EQI. Data source strengths, limitations, and utility were addressed. Methods. Five domains were identified that contribute to environmental quality: air, water, land, built, and sociodemographic environments. An inventory of possible data sources was created. Data sources were evaluated for appropriate spatial and temporal coverage and data quality. Results. The overall data inventory identified multiple data sources for each domain. From the inventory (187 sources, 617 records), the air, water, land, built environment, and sociodemographic domains retained 2, 9, 7, 4, and 2 data sources for inclusion in the EQI, respectively. However, differences in data quality, geographic coverage, and data availability existed between the domains. Conclusions. The data sources identified for use in the EQI may be useful to researchers, advocates, and communities to explore specific environmental quality questions. PMID:21836111

  5. Quality-assurance and data-management plan for water-quality activities in the Kansas Water Science Center, 2014

    USGS Publications Warehouse

    Rasmussen, Teresa J.; Bennett, Trudy J.; Foster, Guy M.; Graham, Jennifer L.; Putnam, James E.

    2014-01-01

    As the Nation’s largest water, earth, and biological science and civilian mapping information agency, the U.S. Geological Survey is relied on to collect high-quality data, and produce factual and impartial interpretive reports. This quality-assurance and data-management plan provides guidance for water-quality activities conducted by the Kansas Water Science Center. Policies and procedures are documented for activities related to planning, collecting, storing, documenting, tracking, verifying, approving, archiving, and disseminating water-quality data. The policies and procedures described in this plan complement quality-assurance plans for continuous water-quality monitoring, surface-water, and groundwater activities in Kansas.

  6. Multisite Evaluation of a Data Quality Tool for Patient-Level Clinical Data Sets

    PubMed Central

    Huser, Vojtech; DeFalco, Frank J.; Schuemie, Martijn; Ryan, Patrick B.; Shang, Ning; Velez, Mark; Park, Rae Woong; Boyce, Richard D.; Duke, Jon; Khare, Ritu; Utidjian, Levon; Bailey, Charles

    2016-01-01

    Introduction: Data quality and fitness for analysis are crucial if outputs of analyses of electronic health record data or administrative claims data are to be trusted by the public and the research community. Methods: We describe a data quality analysis tool (called Achilles Heel) developed by the Observational Health Data Sciences and Informatics Collaborative (OHDSI) and compare outputs from this tool as it was applied to 24 large healthcare datasets across seven different organizations. Results: We highlight 12 data quality rules that identified issues in at least 10 of the 24 datasets and provide a full set of 71 rules identified in at least one dataset. Achilles Heel is freely available software that provides a useful starter set of data quality rules with the ability to add additional rules. We also present the results of a structured email-based interview of all participating sites that collected qualitative comments about the value of Achilles Heel for data quality evaluation. Discussion: Our analysis represents the first comparison of outputs from a data quality tool that implements a fixed (but extensible) set of data quality rules. Thanks to a common data model, we were able to quickly compare multiple datasets originating from several countries in America, Europe and Asia. PMID:28154833

  7. An Architecture for Continuous Data Quality Monitoring in Medical Centers.

    PubMed

    Endler, Gregor; Schwab, Peter K; Wahl, Andreas M; Tenschert, Johannes; Lenz, Richard

    2015-01-01

    In the medical domain, data quality is very important. Since requirements and data change frequently, continuous and sustainable monitoring and improvement of data quality is necessary. Working together with managers of medical centers, we developed an architecture for a data quality monitoring system. The architecture enables domain experts to adapt the system during runtime to match their specifications using a built-in rule system. It also allows arbitrarily complex analyses to be integrated into the monitoring cycle. We evaluate our architecture by matching its components to the well-known data quality methodology TDQM.

  8. 48 CFR 246.470-2 - Quality evaluation data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Quality evaluation data... 246.470-2 Quality evaluation data. The contract administration office shall establish a system for the collection, evaluation, and use of the types of quality evaluation data specified in PGI 246.470-2. [71 FR...

  9. 48 CFR 246.470-2 - Quality evaluation data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Quality evaluation data... 246.470-2 Quality evaluation data. The contract administration office shall establish a system for the collection, evaluation, and use of the types of quality evaluation data specified in PGI 246.470-2. [71 FR...

  10. 48 CFR 246.470-2 - Quality evaluation data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Quality evaluation data... 246.470-2 Quality evaluation data. The contract administration office shall establish a system for the collection, evaluation, and use of the types of quality evaluation data specified in PGI 246.470-2. [71 FR...

  11. 48 CFR 246.470-2 - Quality evaluation data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Quality evaluation data... 246.470-2 Quality evaluation data. The contract administration office shall establish a system for the collection, evaluation, and use of the types of quality evaluation data specified in PGI 246.470-2. [71 FR...

  12. 40 CFR 58.16 - Data submittal and archiving requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROGRAMS (CONTINUED) AMBIENT AIR QUALITY SURVEILLANCE Monitoring Network § 58.16 Data submittal and... via AQS all ambient air quality data and associated quality assurance data for SO2; CO; O3; NO2; NO... Other Federal agencies may request access to filters for purposes of supporting air quality management...

  13. Ensuring the Quality of Data Packages in the LTER Network Provenance Aware Synthesis Tracking Architecture Data Management System and Archive

    NASA Astrophysics Data System (ADS)

    Servilla, M. S.; O'Brien, M.; Costa, D.

    2013-12-01

    Considerable ecological research performed today occurs through the analysis of data downloaded from various repositories and archives, often resulting in derived or synthetic products generated by automated workflows. These data are only meaningful for research if they are well documented by metadata, lest semantic or data-type errors occur in interpretation or processing. The Long Term Ecological Research (LTER) Network now screens all data packages entering its long-term archive to ensure that each package contains metadata that is complete, of high quality, and accurately describes the structure of its associated data entity, and that the data are structurally congruent with the metadata. Screening occurs prior to the upload of a data package into the Provenance Aware Synthesis Tracking Architecture (PASTA) data management system through a series of quality checks, thus preventing ambiguously or incorrectly documented data packages from entering the system. The quality checks within PASTA are designed to work specifically with the Ecological Metadata Language (EML), the metadata standard adopted by the LTER Network to describe data generated by its 26 research sites. Each quality check is codified in Java as part of the ecological community-supported Data Manager Library, which is a resource of the EML specification and is used as a component of the PASTA software stack. Quality checks test for metadata quality, data integrity, or metadata-data congruence. Quality checks are further classified as either conditional or informational. Conditional checks issue a 'valid', 'warning', or 'error' response. Only an 'error' response blocks the data package from upload into PASTA. Informational checks only provide descriptive content pertaining to a particular facet of the data package. Quality checks are designed by a group of LTER information managers and reviewed by the LTER community before deployment into PASTA. A total of 32 quality checks have been deployed to date. Quality checks can be customized through a configurable template, which includes turning checks 'on' or 'off' and setting the severity of conditional checks. This feature is important to other potential users of the Data Manager Library who wish to configure its quality checks in accordance with the standards of their community. Executing the complete set of quality checks produces a report that describes the result of each check. The report is an XML document that is stored by PASTA for future reference.
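
    A conditional check with configurable severity might look like the sketch below; the specific check (declared versus actual row count), template fields, and responses are illustrative assumptions, not the Data Manager Library's implementation.

        # Sketch: a conditional quality check with configurable severity, in
        # the spirit of the PASTA checks described above. The check and
        # template fields are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class CheckConfig:
            enabled: bool = True
            severity: str = "error"   # "error" blocks upload; "warning" does not

        def check_entity_rows(declared_rows: int, actual_rows: int,
                              cfg: CheckConfig) -> str:
            """Metadata-data congruence: row count in EML vs. the data file."""
            if not cfg.enabled:
                return "skipped"
            if declared_rows == actual_rows:
                return "valid"
            return cfg.severity  # "error" or "warning", per the template

        cfg = CheckConfig(enabled=True, severity="error")
        result = check_entity_rows(declared_rows=1000, actual_rows=998, cfg=cfg)
        print(result)                          # 'error'
        print("blocked:", result == "error")   # only 'error' blocks the upload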

  14. Data Validation & Laboratory Quality Assurance for Region 9

    EPA Pesticide Factsheets

    In all hazardous site investigations it is essential to know the quality of the data used for decision-making purposes. Validation of data requires that appropriate quality assurance and quality control (QA/QC) procedures be followed.

  15. Surface-water-quality assessment of the Kentucky River Basin, Kentucky; fixed-station network and selected water-quality data, April 1987 through August 1991

    USGS Publications Warehouse

    Griffin, M.S.; Martin, G.R.; White, K.D.

    1994-01-01

    This report describes selected data-collection activities and the associated data collected during the Kentucky River Basin pilot study of the U.S. Geological Survey's National Water-Quality Assessment Program. The data are intended to provide a nationally consistent description and improved understanding of current water quality in the basin. The data were collected at seven fixed stations that represent stream cross sections where constituent transport and water-quality trends can be evaluated. The report includes descriptions of (1) the basin; (2) the design of the fixed-station network; (3) the fixed-station sites; (4) the physical and chemical measurements; (5) the methods of sample collection, processing, and analysis; and (6) the quality-assurance and quality-control procedures. Water-quality data collected at the fixed stations during routine periodic sampling and supplemental high-flow sampling from April 1987 to August 1991 are presented.

  16. Comparison of Data Quality of NOAA's ISIS and SURFRAD Networks to NREL's SRRL-BMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderberg, M.; Sengupta, M.

    2014-11-01

    This report provides analyses of broadband solar radiometric data quality for the National Oceanic and Atmospheric Administration's Integrated Surface Irradiance Study and Surface Radiation Budget Network (SURFRAD) solar measurement networks. The data quality of these networks is compared to that of the National Renewable Energy Laboratory's Solar Radiation Research Laboratory Baseline Measurement System (SRRL-BMS) at native data resolutions and for hourly averages of the data from the years 2002 through 2013. This report describes the solar radiometric data quality testing and flagging procedures and the method used to determine and tabulate data quality statistics. Monthly data quality statistics for each network were plotted by year against the statistics for the SRRL-BMS. Some of the plots are presented in the body of the report, but most are in the appendix. These plots indicate that the overall solar radiometric data quality of the SURFRAD network is superior to that of the Integrated Surface Irradiance Study network and can be comparable to SRRL-BMS.

  17. Low-Quality Structural and Interaction Data Improves Binding Affinity Prediction via Random Forest.

    PubMed

    Li, Hongjian; Leung, Kwong-Sak; Wong, Man-Hon; Ballester, Pedro J

    2015-06-12

    Docking scoring functions can be used to predict the strength of protein-ligand binding. It is widely believed that training a scoring function with low-quality data is detrimental for its predictive performance. Nevertheless, there is a surprising lack of systematic validation experiments in support of this hypothesis. In this study, we investigated to which extent training a scoring function with data containing low-quality structural and binding data is detrimental for predictive performance. We actually found that low-quality data is not only non-detrimental, but beneficial for the predictive performance of machine-learning scoring functions, though the improvement is less important than that coming from high-quality data. Furthermore, we observed that classical scoring functions are not able to effectively exploit data beyond an early threshold, regardless of its quality. This demonstrates that exploiting a larger data volume is more important for the performance of machine-learning scoring functions than restricting to a smaller set of higher data quality.
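
    The experimental idea can be sketched with scikit-learn on synthetic data, comparing a model trained on a small high-quality set against one trained on the same set augmented with a larger, noisier set; the features, noise levels, and sizes are illustrative assumptions, not the paper's PDBbind benchmark.

        # Sketch: does adding noisier ("low-quality") training data help a
        # random forest? Synthetic features and noise levels are illustrative
        # assumptions, not the paper's actual benchmark.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        def make_data(n, noise):
            X = rng.normal(size=(n, 10))
            y = X[:, 0] * 2 - X[:, 1] + rng.normal(scale=noise, size=n)
            return X, y

        X_hi, y_hi = make_data(200, noise=0.1)    # small high-quality set
        X_lo, y_lo = make_data(2000, noise=1.0)   # large low-quality set
        X_test, y_test = make_data(500, noise=0.0)

        for name, (X, y) in {
            "high-quality only": (X_hi, y_hi),
            "high + low quality": (np.vstack([X_hi, X_lo]),
                                   np.concatenate([y_hi, y_lo])),
        }.items():
            model = RandomForestRegressor(n_estimators=200, random_state=0)
            model.fit(X, y)
            r = np.corrcoef(model.predict(X_test), y_test)[0, 1]
            print(f"{name}: test Pearson r = {r:.3f}")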

  18. A comprehensive framework for data quality assessment in CER.

    PubMed

    Holve, Erin; Kahn, Michael; Nahm, Meredith; Ryan, Patrick; Weiskopf, Nicole

    2013-01-01

    The panel addresses the urgent need to ensure that comparative effectiveness research (CER) findings derived from diverse and distributed data sources are based on credible, high-quality data, and that the methods used to assess and report data quality are consistent, comprehensive, and available to data consumers. The panel consists of representatives from four teams leveraging electronic clinical data for CER, patient-centered outcomes research (PCOR), and quality improvement (QI), and seeks to change the current paradigm where data quality assessment (DQA) is performed "behind the scenes" using one-off, project-specific methods. The panelists will present their process of harmonizing existing models for describing and measuring clinical data quality and will describe a comprehensive integrated framework for assessing and reporting DQA findings. The collaborative project is supported by the Electronic Data Methods (EDM) Forum, a three-year grant from the Agency for Healthcare Research and Quality (AHRQ) to facilitate learning and foster collaboration across a set of CER, PCOR, and QI projects designed to build infrastructure and methods for collecting and analyzing prospective electronic clinical data.

  19. Total Quality Management of Information System for Quality Assessment of Pesantren Using Fuzzy-SERVQUAL

    NASA Astrophysics Data System (ADS)

    Faizah, Arbiati; Syafei, Wahyul Amien; Isnanto, R. Rizal

    2018-02-01

    This research proposes a model combining Total Quality Management (TQM) and a fuzzy Service Quality (SERVQUAL) method to assess service quality. TQM was implemented as quality management oriented toward customer satisfaction and involving all stakeholders. The SERVQUAL model was used to measure service quality along five dimensions: tangibles, reliability, responsiveness, assurance, and empathy. Fuzzy set theory was used to accommodate the subjectivity and ambiguity of quality assessment. Input data consisted of indicator data and quality assessment aspects, which were processed into service quality assessment questionnaires for the Pesantren using the fuzzy method to obtain a service quality score. The process consists of the following steps: dimension and questionnaire data are entered into the database system; questionnaires are filled in through the system; the system then performs fuzzification and defuzzification, computes the gap between the quality expected and the quality received by service recipients, and calculates a rating for each dimension indicating the priority for quality refinement. The rating of each quality dimension is then displayed on a dashboard so that users can view the information. The resulting system showed that the tangible dimension had the largest gap, -0.399, and should therefore be prioritized for evaluation and improvement.
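
    The gap computation can be sketched with triangular fuzzy numbers and centroid defuzzification as below; the linguistic scale and the sample responses are illustrative assumptions, not the paper's questionnaire.

        # Sketch: SERVQUAL gap per dimension using triangular fuzzy numbers
        # and centroid defuzzification. The linguistic scale and responses
        # are illustrative assumptions.
        TFN = {  # linguistic term -> (low, mid, high) triangular fuzzy number
            "poor": (1, 1, 3), "fair": (1, 3, 5), "good": (3, 5, 7),
            "very good": (5, 7, 9), "excellent": (7, 9, 9),
        }

        def defuzzify(tfn):
            """Centroid of a triangular fuzzy number."""
            return sum(tfn) / 3.0

        def mean_score(terms):
            return sum(defuzzify(TFN[t]) for t in terms) / len(terms)

        # Responses for one dimension ("tangible"): what respondents expected
        # versus what they perceived they received.
        expected  = ["excellent", "very good", "excellent"]
        perceived = ["good", "very good", "good"]

        gap = mean_score(perceived) - mean_score(expected)
        print(f"tangible gap = {gap:+.3f}")  # negative: perception falls short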

  20. Systematic review of scope and quality of electronic patient record data in primary care

    PubMed Central

    Thiru, Krish; Hassey, Alan; Sullivan, Frank

    2003-01-01

    Objective To systematically review measures of data quality in electronic patient records (EPRs) in primary care. Design Systematic review of English language publications, 1980-2001. Data sources Bibliographic searches of medical databases, specialist medical informatics databases, conference proceedings, and institutional contacts. Study selection Studies selected according to a predefined framework for categorising review papers. Data extraction Reference standards and measurements used to judge quality. Results Bibliographic searches identified 4589 publications. After primary exclusions 174 articles were classified, 52 of which met the inclusion criteria for review. Selected studies were primarily descriptive surveys. Variability in methods prevented meta-analysis of results. Forty eight publications were concerned with diagnostic data, 37 studies measured data quality, and 15 scoped EPR quality. Reliability of data was assessed with rate comparison. Measures of sensitivity were highly dependent on the element of EPR data being investigated, while the positive predictive value was consistently high, indicating good validity. Prescribing data were generally of better quality than diagnostic or lifestyle data. Conclusion The lack of standardised methods for assessment of quality of data in electronic patient records makes it difficult to compare results between studies. Studies should present data quality measures with clear numerators, denominators, and confidence intervals. Ambiguous terms such as “accuracy” should be avoided unless precisely defined. PMID:12750210

  1. International Metadata Standards and Enterprise Data Quality Metadata Systems

    NASA Technical Reports Server (NTRS)

    Habermann, Ted

    2016-01-01

    Well-documented data quality is critical in situations where scientists and decision-makers need to combine multiple datasets from different disciplines and collection systems to address scientific questions or difficult decisions. Standardized data quality metadata could be very helpful in these situations. Many efforts at developing data quality standards falter because of the diversity of approaches to measuring and reporting data quality; the one-size-fits-all paradigm does not generally work well in this situation. I will describe the capabilities of ISO 19157 with examples of how they are being used to describe data quality across the NASA EOS Enterprise, and compare these approaches with other standards.

  2. [Method for the quality assessment of data collection processes in epidemiological studies].

    PubMed

    Schöne, G; Damerow, S; Hölling, H; Houben, R; Gabrys, L

    2017-10-01

    Relevant literature contains no description of test criteria and methodologies for the quantitative evaluation of primary data collection processes in epidemiological surveys based on accompaniment and observation in the field, and thus no application in practice is known. Methods therefore had to be developed and existing procedures adapted. The aim was to identify quality-relevant developments within quality dimensions by means of inspection points (quality indicators) during the process of data collection. As a result, we seek to implement and establish a methodology for the assessment of overall survey quality that is supplementary to standardized data analyses. Monitors detect deviations from standard primary data collection during site visits by applying standardized checklists. Quantitative results, overall and for each dimension, are obtained by numerical calculation of quality indicators. Score results are categorized and color-coded. This visual prioritization indicates the necessity for intervention. The results obtained give clues regarding the current quality of data collection and allow for the identification of sections where interventions for quality improvement are needed. In addition, the development of process quality can be shown over time on an intercomparable basis. This methodology for the evaluation of data collection quality can identify deviations from norms, focus quality analyses, and help trace the causes of significant deviations.
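
    The indicator-to-color-code step might look like the following sketch; the indicators, pass rates, and traffic-light thresholds are illustrative assumptions.

        # Sketch: turning checklist-based quality indicators into a
        # categorized, color-coded score. Indicators, weights, and
        # thresholds are illustrative assumptions.
        indicators = {               # indicator -> share of checks passed (0..1)
            "protocol adherence": 0.95,
            "instrument calibration": 0.80,
            "interview standardization": 0.60,
        }

        def color(score: float) -> str:
            """Map a score to a traffic-light category for prioritization."""
            if score >= 0.90:
                return "green"
            if score >= 0.75:
                return "yellow"
            return "red"       # needs intervention

        overall = sum(indicators.values()) / len(indicators)
        for name, score in indicators.items():
            print(f"{name:28s} {score:.0%} {color(score)}")
        print(f"{'overall':28s} {overall:.0%} {color(overall)}")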

  3. The relationship between motivation, monetary compensation, and data quality among US- and India-based workers on Mechanical Turk.

    PubMed

    Litman, Leib; Robinson, Jonathan; Rosenzweig, Cheskie

    2015-06-01

    In this study, we examined data quality among Amazon Mechanical Turk (MTurk) workers based in India, and the effect of monetary compensation on their data quality. Recent studies have shown that work quality is independent of compensation rates, and that compensation primarily affects the quantity but not the quality of work. However, the results of these studies were generally based on compensation rates below the minimum wage, and far below a level that was likely to play a practical role in the lives of workers. In this study, compensation rates were set around the minimum wage in India. To examine data quality, we developed the squared discrepancy procedure, which is a task-based quality assurance approach for survey tasks whose goal is to identify inattentive participants. We showed that data quality is directly affected by compensation rates for India-based participants. We also found that data were of a lesser quality among India-based than among US participants, even when optimal payment strategies were utilized. We additionally showed that the motivation of MTurk users has shifted, and that monetary compensation is now reported to be the primary reason for working on MTurk, among both US- and India-based workers. Overall, MTurk is a constantly evolving marketplace where multiple factors can contribute to data quality. High-quality survey data can be acquired on MTurk among India-based participants when an appropriate pay rate is provided and task-specific quality assurance procedures are utilized.

  4. [Quality measurement using administrative data in mandatory quality assurance].

    PubMed

    Heller, Günther; Szecsenyi, Joachim; Willms, Gerald; Broge, Björn

    2014-01-01

    For several years, several stakeholders in Germany have requested the use of administrative data in mandatory quality measurement. The main advantages of using administrative data include the reduction of documentation expenditures and the possibility of performing longitudinal quality analyses across different healthcare units. After a short introduction, a brief overview of the current use of administrative data for mandatory quality assurance and of current developments is given, which is then exemplified by decubitus ulcer prophylaxis. By using administrative data, coding expenditures in this clinical area could be reduced by nine million data fields. At the same time, the population analysed was expanded, resulting in a more than tenfold increase in potentially quality-relevant events. Finally, the perspectives, further developments, possibilities, and limits of quality measurement with administrative data are discussed. Copyright © 2014. Published by Elsevier GmbH.

  5. Data-base development for water-quality modeling of the Patuxent River basin, Maryland

    USGS Publications Warehouse

    Fisher, G.T.; Summers, R.M.

    1987-01-01

    Procedures and rationale used to develop a data base and data management system for the Patuxent Watershed Nonpoint Source Water Quality Monitoring and Modeling Program of the Maryland Department of the Environment and the U.S. Geological Survey are described. A detailed data base and data management system has been developed to facilitate modeling of the watershed for water quality planning purposes; statistical analysis; plotting of meteorologic, hydrologic and water quality data; and geographic data analysis. The system is Maryland 's prototype for development of a basinwide water quality management program. A key step in the program is to build a calibrated and verified water quality model of the basin using the Hydrological Simulation Program--FORTRAN (HSPF) hydrologic model, which has been used extensively in large-scale basin modeling. The compilation of the substantial existing data base for preliminary calibration of the basin model, including meteorologic, hydrologic, and water quality data from federal and state data bases and a geographic information system containing digital land use and soils data is described. The data base development is significant in its application of an integrated, uniform approach to data base management and modeling. (Lantz-PTT)

  6. Preface to QoIS 2009

    NASA Astrophysics Data System (ADS)

    Comyn-Wattiau, Isabelle; Thalheim, Bernhard

    Quality assurance is a growing research domain within the Information Systems (IS) and Conceptual Modeling (CM) disciplines. Ongoing research on quality in IS and CM is highly diverse and encompasses theoretical aspects, including quality definition and quality models, and practical/empirical aspects such as the development of methods, approaches and tools for quality measurement and improvement. Current research on quality also includes quality characteristics definitions, validation instruments, methodological and development approaches to quality assurance during software and information systems development, quality monitors, quality assurance during information systems development processes and practices, quality assurance both for data and (meta)schemata, quality support for information systems data import and export, quality of query answering, and cost/benefit analysis of quality assurance processes. Quality assurance also depends on the application area and on the specific requirements of applications such as the health sector, logistics, the public sector, the financial sector, manufacturing, services, e-commerce, and software. Furthermore, quality assurance must also be supported for data aggregation, ETL processes, web content management and other multi-layered applications. Quality assurance typically requires resources and therefore has, besides its benefits, a computational and economic trade-off: it rests on a compromise between the value of quality data and the cost of quality assurance.

  7. International Metadata Standards and Enterprise Data Quality Metadata Systems

    NASA Astrophysics Data System (ADS)

    Habermann, T.

    2016-12-01

    Well-documented data quality is critical in situations where scientists and decision-makers need to combine multiple datasets from different disciplines and collection systems to address scientific questions or difficult decisions. Standardized data quality metadata could be very helpful in these situations. Many efforts at developing data quality standards falter because of the diversity of approaches to measuring and reporting data quality. The "one size fits all" paradigm does not generally work well in this situation. The ISO data quality standard (ISO 19157) takes a different approach with the goal of systematically describing how data quality is measured rather than how it should be measured. It introduces the idea of standard data quality measures that can be well documented in a measure repository and used for consistently describing how data quality is measured across an enterprise. The standard includes recommendations for properties of these measures that include unique identifiers, references, illustrations and examples. Metadata records can reference these measures using the unique identifier and reuse them along with details (and references) that describe how the measure was applied to a particular dataset. A second important feature of ISO 19157 is the inclusion of citations to existing papers or reports that describe quality of a dataset. This capability allows users to find this information in a single location, i.e. the dataset metadata, rather than searching the web or other catalogs. I will describe these and other capabilities of ISO 19157 with examples of how they are being used to describe data quality across the NASA EOS Enterprise and also compare these approaches with other standards.

  8. 76 FR 56694 - Approval and Promulgation of Air Quality Implementation Plans; California; Determinations of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-14

    ... ambient air quality monitoring data for the period preceding the applicable attainment deadline. DATES... and certified monitoring data. A violation occurs when the ambient ozone air quality monitoring data... standard, generally based on air quality monitoring data from the 1987 through 1989 period (section 107(d...

  9. Statistical monitoring of data quality and consistency in the Stomach Cancer Adjuvant Multi-institutional Trial Group Trial.

    PubMed

    Timmermans, Catherine; Doffagne, Erik; Venet, David; Desmet, Lieven; Legrand, Catherine; Burzykowski, Tomasz; Buyse, Marc

    2016-01-01

    Data quality may impact the outcome of clinical trials; hence, there is a need to implement quality control strategies for the data collected. Traditional approaches to quality control have primarily used source data verification during on-site monitoring visits, but these approaches are hugely expensive as well as ineffective. There is growing interest in central statistical monitoring (CSM) as an effective way to ensure data quality and consistency in multicenter clinical trials. CSM with SMART™ uses advanced statistical tools that help identify centers with atypical data patterns which might be the sign of an underlying quality issue. This approach was used to assess the quality and consistency of the data collected in the Stomach Cancer Adjuvant Multi-institutional Trial Group Trial, involving 1495 patients across 232 centers in Japan. In the Stomach Cancer Adjuvant Multi-institutional Trial Group Trial, very few atypical data patterns were found among the participating centers, and none of these patterns were deemed to be related to a quality issue that could significantly affect the outcome of the trial. CSM can be used to provide a check of the quality of the data from completed multicenter clinical trials before analysis, publication, and submission of the results to regulatory agencies. It can also form the basis of a risk-based monitoring strategy in ongoing multicenter trials. CSM aims at improving data quality in clinical trials while also reducing monitoring costs.
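
    SMART™ itself is proprietary, but the underlying idea of central statistical monitoring, scoring each center for atypicality relative to all the others, can be illustrated with a simple variable-by-variable comparison. The sketch below z-scores each center's mean against the distribution of center means; it is a toy illustration with made-up numbers and an assumed flagging threshold, not the trial's actual methodology.

```python
import numpy as np

def center_atypicality(values_by_center):
    """Toy central statistical monitoring check: z-score each center's
    mean for one variable against the distribution of center means."""
    means = {c: np.mean(v) for c, v in values_by_center.items()}
    overall = np.array(list(means.values()))
    mu, sigma = overall.mean(), overall.std(ddof=1)
    return {c: (m - mu) / sigma for c, m in means.items()}

# Hypothetical systolic blood pressure readings from six centers.
data = {
    "C01": [118, 120, 119, 121, 117],
    "C02": [119, 121, 120, 120, 120],
    "C03": [122, 120, 121, 121, 121],
    "C04": [120, 119, 121, 120, 120],
    "C05": [117, 119, 118, 118, 118],
    "C06": [139, 141, 140, 141, 139],  # atypical data pattern
}
for center, z in center_atypicality(data).items():
    flag = "ATYPICAL" if abs(z) > 1.5 else "ok"   # assumed threshold
    print(f"{center}: z = {z:+.2f} ({flag})")
```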

  10. Quality-Assurance Data for Routine Water Analyses by the U.S. Geological Survey Laboratory in Troy, New York-July 1997 through June 1999

    USGS Publications Warehouse

    Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.

    2006-01-01

The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance/quality-control data for the time period addressed in this report were stored in the laboratory's SAS data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality-control samples analyzed from July 1997 through June 1999. Results for the quality-control samples for 18 analytical procedures were evaluated for bias and precision. Control charts indicate that data for eight of the analytical procedures were occasionally biased for either high-concentration and (or) low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, total monomeric aluminum, total aluminum, ammonium, calcium, chloride, specific conductance, and sulfate. The data from the potassium and sodium analytical procedures are insufficient for evaluation. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 11 of 13 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. Blank analysis results for chloride showed that 22 percent of blanks did not meet data-quality objectives, and results for dissolved organic carbon showed that 31 percent of the blanks did not meet data-quality objectives. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 14 of the 18 analytes. At least 90 percent of the samples met data-quality objectives for all analytes except total aluminum (70 percent of samples met objectives) and potassium (83 percent of samples met objectives). Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated good data quality for most constituents over the time period. The P-sample (low-ionic-strength constituents) analysis had good ratings in two of these studies and a satisfactory rating in the third. The results of the T-sample (trace constituents) analysis indicated high data quality, with good ratings in all three studies. The N-sample (nutrient constituents) studies had one each of excellent, good, and satisfactory ratings. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 80 percent of the samples met data-quality objectives for 9 of the 13 analytes; the exceptions were dissolved organic carbon, ammonium, chloride, and specific conductance. Data-quality objectives were not met for dissolved organic carbon in two NWRI studies, but all of the samples were within control limits for the last study. Data-quality objectives were not met in 41 percent of samples analyzed for ammonium, 25 percent of samples analyzed for chloride, and 30 percent of samples analyzed for specific conductance. Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 84 percent of the samples analyzed for calcium, chloride, magnesium, pH, and potassium. Data-quality objectives were met by 73 percent of those analyzed for sulfate. The data-quality objective was not met for sodium. The data are insufficient for evaluation of the specific conductance results.
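
    Precision checks of this kind are straightforward to automate. The sketch below computes the coefficient of variation for triplicate analyses and reports the fraction of samples meeting a data-quality objective; the 10% objective and the concentrations are assumed placeholders, not the laboratory's actual criteria or data.

```python
import statistics

def triplicate_cv(values):
    """Coefficient of variation (%) for one triplicate measurement."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def evaluate_precision(triplicates, cv_objective_pct=10.0):
    """Flag triplicates whose CV exceeds the data-quality objective
    and report the fraction of samples meeting it."""
    results = {sid: triplicate_cv(v) for sid, v in triplicates.items()}
    met = sum(cv <= cv_objective_pct for cv in results.values())
    return results, 100.0 * met / len(results)

# Hypothetical triplicate sulfate concentrations (mg/L).
samples = {
    "S-001": [4.10, 4.15, 4.08],
    "S-002": [2.30, 2.90, 2.10],   # poor precision
    "S-003": [6.55, 6.60, 6.52],
}
cvs, pct_met = evaluate_precision(samples)
for sid, cv in cvs.items():
    print(f"{sid}: CV = {cv:.1f}%")
print(f"{pct_met:.0f}% of samples met the precision objective")
```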

  11. Data quality assessment for comparative effectiveness research in distributed data networks

    PubMed Central

    Brown, Jeffrey; Kahn, Michael; Toh, Sengwee

    2015-01-01

Background: Electronic health information routinely collected during healthcare delivery and reimbursement can help address the need for evidence about the real-world effectiveness, safety, and quality of medical care. Often, distributed networks that combine information from multiple sources are needed to generate this real-world evidence. Objective: We provide a set of field-tested best practices and a set of recommendations for data quality checking for comparative effectiveness research (CER) in distributed data networks. Methods: We explore the requirements for data quality checking and describe data quality approaches undertaken by several existing multi-site networks. Results: There are no established standards regarding how to evaluate the quality of electronic health data for CER within distributed networks. Data checks of increasing complexity are often employed, ranging from consistency with syntactic rules to evaluation of semantics and consistency within and across sites. Temporal trends within and across sites are widely used, as are checks of each data refresh or update. Rates of specific events and exposures by age group, sex, and month are also common. Discussion: Secondary use of electronic health data for CER holds promise but is complex, especially in distributed data networks that incorporate periodic data refreshes. The viability of a learning health system is dependent on a robust understanding of the quality, validity, and optimal secondary uses of routinely collected electronic health data within distributed health data networks. Robust data quality checking can strengthen confidence in findings based on distributed data networks. PMID:23793049

  12. Volunteer Macroinvertebrate Monitoring: Tensions Among Group Goals, Data Quality, and Outcomes

    NASA Astrophysics Data System (ADS)

    Nerbonne, Julia Frost; Nelson, Kristen C.

    2008-09-01

    Volunteer monitoring of natural resources is promoted for its ability to increase public awareness, to provide valuable knowledge, and to encourage policy change that promotes ecosystem health. We used the case of volunteer macroinvertebrate monitoring (VMM) in streams to investigate whether the quality of data collected is correlated with data use and organizers’ perception of whether they have achieved these outcomes. We examined the relation between site and group characteristics, data quality, data use, and perceived outcomes (education, social capital, and policy change). We found that group size and the degree to which citizen groups perform tasks on their own (rather than aided by professionals) positively correlated with the quality of data collected. Group size and number of years monitoring positively influenced whether a group used their data. While one might expect that groups committed to collecting good-quality data would be more likely to use it, there was no relation between data quality and data use, and no relation between data quality and perceived outcomes. More data use was, however, correlated with a group’s feeling of connection to a network of engaged citizens and professionals. While VMM may hold promise for bringing citizens and scientists together to work on joint conservation agendas, our data illustrate that data quality does not correlate with a volunteer group’s desire to use their data to promote regulatory change. Therefore, we encourage scientists and citizens alike to recognize this potential disconnect and strive to be explicit about the role of data in conservation efforts.

  13. Visualizing the quality of partially accruing data for use in decision making

    PubMed Central

    Eaton, Julia; Painter, Ian; Olson, Don; Lober, William B

    2015-01-01

    Secondary use of clinical health data for near real-time public health surveillance presents challenges surrounding its utility due to data quality issues. Data used for real-time surveillance must be timely, accurate and complete if it is to be useful; if incomplete data are used for surveillance, understanding the structure of the incompleteness is necessary. Such data are commonly aggregated due to privacy concerns. The Distribute project was a near real-time influenza-like-illness (ILI) surveillance system that relied on aggregated secondary clinical health data. The goal of this work is to disseminate the data quality tools developed to gain insight into the data quality problems associated with these data. These tools apply in general to any system where aggregate data are accrued over time and were created through the end-user-as-developer paradigm. Each tool was developed during the exploratory analysis to gain insight into structural aspects of data quality. Our key finding is that data quality of partially accruing data must be studied in the context of accrual lag—the difference between the time an event occurs and the time data for that event are received, i.e. the time at which data become available to the surveillance system. Our visualization methods therefore revolve around visualizing dimensions of data quality affected by accrual lag, in particular the tradeoff between timeliness and completion, and the effects of accrual lag on accuracy. Accounting for accrual lag in partially accruing data is necessary to avoid misleading or biased conclusions about trends in indicator values and data quality. PMID:27252794
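
    Accrual lag is easy to compute once each aggregated record carries both an event date and a receipt date. The sketch below is a minimal illustration with assumed field names and made-up counts: it derives the lag distribution and the completeness of one day's counts as of successive reporting dates, which is exactly the timeliness/completion tradeoff the paper describes.

```python
from datetime import date

# Hypothetical aggregated ILI reports: (event_date, received_date, count).
reports = [
    (date(2015, 3, 2), date(2015, 3, 3), 40),
    (date(2015, 3, 2), date(2015, 3, 5), 25),
    (date(2015, 3, 2), date(2015, 3, 9), 10),
]

def accrual_lags(records):
    """Accrual lag in days: receipt time minus event time."""
    return [(received - event).days for event, received, _ in records]

def completeness(records, as_of):
    """Fraction of the eventual total that had arrived by a given date."""
    total = sum(count for _, _, count in records)
    seen = sum(count for _, received, count in records if received <= as_of)
    return seen / total

print("lags (days):", accrual_lags(reports))
for day in [date(2015, 3, 3), date(2015, 3, 5), date(2015, 3, 9)]:
    print(f"completeness as of {day}: {completeness(reports, day):.0%}")
```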

  14. Visualizing the quality of partially accruing data for use in decision making.

    PubMed

    Eaton, Julia; Painter, Ian; Olson, Don; Lober, William B

    2015-01-01

Secondary use of clinical health data for near real-time public health surveillance presents challenges surrounding its utility due to data quality issues. Data used for real-time surveillance must be timely, accurate and complete if it is to be useful; if incomplete data are used for surveillance, understanding the structure of the incompleteness is necessary. Such data are commonly aggregated due to privacy concerns. The Distribute project was a near real-time influenza-like-illness (ILI) surveillance system that relied on aggregated secondary clinical health data. The goal of this work is to disseminate the data quality tools developed to gain insight into the data quality problems associated with these data. These tools apply in general to any system where aggregate data are accrued over time and were created through the end-user-as-developer paradigm. Each tool was developed during the exploratory analysis to gain insight into structural aspects of data quality. Our key finding is that data quality of partially accruing data must be studied in the context of accrual lag: the difference between the time an event occurs and the time data for that event are received, i.e. the time at which data become available to the surveillance system. Our visualization methods therefore revolve around visualizing dimensions of data quality affected by accrual lag, in particular the tradeoff between timeliness and completion, and the effects of accrual lag on accuracy. Accounting for accrual lag in partially accruing data is necessary to avoid misleading or biased conclusions about trends in indicator values and data quality.

  15. Influence of volunteer and project characteristics on data quality of biological surveys.

    PubMed

    Lewandowski, Eva; Specht, Hannah

    2015-06-01

Volunteer involvement in biological surveys is becoming common in conservation and ecology, prompting questions on the quality of data collected in such surveys. In a systematic review of the peer-reviewed literature on the quality of data collected by volunteers, we examined the characteristics of volunteers (e.g., age, prior knowledge) and projects (e.g., systematic vs. opportunistic monitoring schemes) that affect data quality with regard to standardization of sampling, accuracy and precision of data collection, spatial and temporal representation of data, and sample size. Most studies (70%, n = 71) focused on the act of data collection. The majority of assessments of volunteer characteristics (58%, n = 93) examined the effect of prior knowledge and experience on quality of the data collected, often by comparing volunteers with experts or professionals, who were usually assumed to collect higher quality data. However, when both groups' data were compared with the same accuracy standard, professional data were more accurate in only 4 of 7 cases. The few studies that measured precision of volunteer and professional data did not conclusively show that professional data were less variable than volunteer data. To improve data quality, studies recommended changes to survey protocols, volunteer training, statistical analyses, and project structure (e.g., volunteer recruitment and retention).

  16. How should the completeness and quality of curated nanomaterial data be evaluated?

    NASA Astrophysics Data System (ADS)

    Marchese Robinson, Richard L.; Lynch, Iseult; Peijnenburg, Willie; Rumble, John; Klaessig, Fred; Marquardt, Clarissa; Rauscher, Hubert; Puzyn, Tomasz; Purian, Ronit; Åberg, Christoffer; Karcher, Sandra; Vriens, Hanne; Hoet, Peter; Hoover, Mark D.; Hendren, Christine Ogilvie; Harper, Stacey L.

    2016-05-01

Nanotechnology is of increasing significance. Curation of nanomaterial data into electronic databases offers opportunities to better understand and predict nanomaterials' behaviour. This supports innovation in, and regulation of, nanotechnology. It is commonly understood that curated data need to be sufficiently complete and of sufficient quality to serve their intended purpose. However, assessing data completeness and quality is non-trivial in general and is arguably especially difficult in the nanoscience area, given its highly multidisciplinary nature. The current article, part of the Nanomaterial Data Curation Initiative series, addresses how to assess the completeness and quality of (curated) nanomaterial data. In order to address this key challenge, a variety of related issues are discussed: the meaning and importance of data completeness and quality, existing approaches to their assessment and the key challenges associated with evaluating the completeness and quality of curated nanomaterial data. Considerations which are specific to the nanoscience area and lessons which can be learned from other relevant scientific disciplines are considered. Hence, the scope of this discussion ranges from physicochemical characterisation requirements for nanomaterials and interference of nanomaterials with nanotoxicology assays to broader issues such as minimum information checklists, toxicology data quality schemes and computational approaches that facilitate evaluation of the completeness and quality of (curated) data. This discussion is informed by a literature review and a survey of key nanomaterial data curation stakeholders. Finally, drawing upon this discussion, recommendations are presented concerning the central question: how should the completeness and quality of curated nanomaterial data be evaluated? Electronic supplementary information (ESI) available: (1) Detailed information regarding issues raised in the main text; (2) original survey responses. See DOI: 10.1039/c5nr08944a

  17. Using Clinical Data Standards to Measure Quality: A New Approach.

    PubMed

    D'Amore, John D; Li, Chun; McCrary, Laura; Niloff, Jonathan M; Sittig, Dean F; McCoy, Allison B; Wright, Adam

    2018-04-01

Value-based payment for care requires the consistent, objective calculation of care quality. Previous initiatives to calculate ambulatory quality measures have relied on billing data or individual electronic health records (EHRs) to calculate and report performance. New methods for quality measure calculation promoted by federal regulations allow qualified clinical data registries to report quality outcomes based on data aggregated across facilities and EHRs using interoperability standards. This research evaluates the use of clinical document interchange standards as the basis for quality measurement. Using data on 1,100 patients from 11 ambulatory care facilities and 5 different EHRs, challenges to quality measurement are identified and addressed for 17 certified quality measures. Iterative solutions were identified for 14 measures that improved patient inclusion and measure calculation accuracy. Findings validate this approach to improving measure accuracy while maintaining measure certification. Organizations that report care quality should be aware of how the identified issues affect quality measure selection and calculation. Quality measure authors should consider increasing real-world validation and the consistency of measure logic with respect to the issues identified in this research.

  18. 75 FR 26685 - Approval and Promulgation of Implementation Plans and Designation of Areas for Air Quality...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-12

    ... devices necessary to collect data on ambient air quality, and programs to enforce the limitations. General...-hour ozone NAAQS, based on the most recent three years of complete, quality assured monitoring data... air quality monitoring data for the 3-year period must meet a data completeness requirement. The...

  19. A Data Quality Filter for PMU Measurements: Description, Experience, and Examples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Follum, James D.; Amidan, Brett G.

Networks of phasor measurement units (PMUs) continue to grow, and along with them, the amount of data available for analysis. With so much data, it is impractical to identify and remove poor quality data manually. The data quality filter described in this paper was developed for use with the Data Integrity and Situation Awareness Tool (DISAT), which analyzes PMU data to identify anomalous system behavior. The filter operates based only on the information included in the data files, without supervisory control and data acquisition (SCADA) data, state estimator values, or system topology information. Measurements are compared to preselected thresholds to determine if they are reliable. Along with the filter's description, examples of data quality issues from application of the filter to nine months of archived PMU data are provided. The paper is intended to aid the reader in recognizing and properly addressing data quality issues in PMU data.
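
    A threshold-based filter of this kind reduces to simple range checks on each measurement, using nothing beyond the data file itself. The sketch below applies illustrative limits to frequency and voltage-magnitude samples; the threshold values are placeholders for exposition, not the limits actually used in DISAT.

```python
# Illustrative data-quality thresholds for PMU measurements.
# These numbers are placeholders, not the DISAT filter's actual limits.
FREQ_LIMITS = (59.0, 61.0)     # Hz, for a nominal 60 Hz system
VOLTAGE_LIMITS = (0.8, 1.2)    # per unit

def check_sample(freq_hz, voltage_pu):
    """Return data-quality flags for one PMU sample, based only on
    the measurement itself (no SCADA, state estimator, or topology)."""
    flags = []
    if not (FREQ_LIMITS[0] <= freq_hz <= FREQ_LIMITS[1]):
        flags.append("frequency out of range")
    if not (VOLTAGE_LIMITS[0] <= voltage_pu <= VOLTAGE_LIMITS[1]):
        flags.append("voltage magnitude out of range")
    return flags

samples = [(60.01, 1.02), (60.02, 0.0), (55.00, 1.01)]
for freq, volt in samples:
    flags = check_sample(freq, volt)
    status = "; ".join(flags) if flags else "ok"
    print(f"f={freq:6.2f} Hz, V={volt:4.2f} pu -> {status}")
```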

  20. The Processing and Analysis of the Data from an Air Force Geophysics Laboratory Atmospheric Optical Measurement Station and the Maintenance of the Central Data Logger System.

    DTIC Science & Technology

    1984-02-15

Only garbled OCR fragments of the report's table of contents and figure list survive in this record. The recoverable details indicate coverage of raw data tape processing procedures and experiment sampling sequences, packed data-quality records (Eltro, luxmeter, night path, and visibility measurements), and a reference to a prior final report (AFGL-TR-81-0130).

  1. Evaluating the impact of AMDAR data quality control in China on the short-range convection forecasts using the WRF model

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofeng; Jiang, Qin; Zhang, Lei

    2016-04-01

A quality control system for the Aircraft Meteorological Data Relay (AMDAR) data has been implemented in China. This system is an extension of the AMDAR quality control system used at the US National Centers for Environmental Prediction. We present a study in which the characteristics of each AMDAR data quality type were examined and the impact of the AMDAR data quality system on short-range convective weather forecasts using the WRF model was investigated. The main results obtained from this study are as follows. (1) The hourly rejection rate of AMDAR data during 2014 was 5.79%, and most of the rejections happened in the near-duplicate check. (2) There was a significant diurnal variation in both the quantity and quality of AMDAR data. Duplicated reports increased as data quantity increased, while suspicious and disorderly reports decreased as data quantity increased. (3) The characteristics of the data quality differed across model layers, with quality problems occurring mainly at the surface and at heights where the power setting or flight mode of the aircraft was adjusted. (4) Assimilating the AMDAR data improved the forecast accuracy, particularly over regions where strong convection occurred. (5) Significant improvements from assimilating AMDAR data were found after six hours into the model forecast. The conclusion from this study is that the newly implemented AMDAR data quality system can help improve the accuracy of short-range convection forecasts using the WRF model.
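
    The near-duplicate check that dominates the rejections can be illustrated with a simple rule: two reports from the same aircraft count as near duplicates when their times, positions, and pressure levels all fall within small tolerances. The sketch below is a simplified stand-in for the operational check, with assumed tolerance values and made-up reports.

```python
from dataclasses import dataclass

@dataclass
class AmdarReport:
    aircraft_id: str
    time_s: float     # observation time, seconds
    lat: float
    lon: float
    pressure_hpa: float

def is_near_duplicate(a, b, dt_s=60.0, dpos_deg=0.01, dp_hpa=1.0):
    """Simplified near-duplicate test: same aircraft, and time,
    position, and pressure all within assumed tolerances."""
    return (a.aircraft_id == b.aircraft_id
            and abs(a.time_s - b.time_s) <= dt_s
            and abs(a.lat - b.lat) <= dpos_deg
            and abs(a.lon - b.lon) <= dpos_deg
            and abs(a.pressure_hpa - b.pressure_hpa) <= dp_hpa)

def reject_near_duplicates(reports):
    """Keep the first of each near-duplicate group; count rejections."""
    kept, rejected = [], 0
    for r in reports:
        if any(is_near_duplicate(r, k) for k in kept):
            rejected += 1
        else:
            kept.append(r)
    return kept, rejected

reports = [
    AmdarReport("B-1234", 0.0, 31.20, 121.40, 850.0),
    AmdarReport("B-1234", 10.0, 31.20, 121.40, 850.2),  # near duplicate
    AmdarReport("B-1234", 600.0, 31.50, 121.90, 700.0),
]
kept, rejected = reject_near_duplicates(reports)
print(f"kept {len(kept)}, rejected {rejected} "
      f"({100.0 * rejected / len(reports):.1f}% rejection rate)")
```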

  2. [Infrastructure and contents of clinical data management plan].

    PubMed

    Shen, Tong; Xu, Lie-dong; Fu, Hai-jun; Liu, Yan; He, Jia; Chen, Ping-yan; Song, Yu-fei

    2015-11-01

Establishment of a quality management system (QMS) plays a critical role in clinical data management (CDM). The objectives of CDM are to ensure the quality and integrity of trial data. Thus, every stage or element that may affect the quality outcomes of clinical studies should be kept under control; this refers to the full life cycle of CDM, from data collection and handling through statistical analysis of trial data. Based on the QMS, this paper provides consensus on how to develop a compliant clinical data management plan (CDMP). According to the essential requirements of CDM, the CDMP should encompass each process of data collection, data capture and cleaning, medical coding, data verification and reconciliation, database monitoring and management, external data transmission and integration, data documentation, and data quality assurance. Creating and following a data management plan for each designed data management step, and dynamically recording the systems used, actions taken, and parties involved, builds and confirms regulated data management processes, standard operating procedures, and effective quality metrics across all data management activities. The CDMP is one of the most important data management documents and is the solid foundation for clinical data quality.

  3. How to improve vital sign data quality for use in clinical decision support systems? A qualitative study in nine Swedish emergency departments.

    PubMed

    Skyttberg, Niclas; Vicente, Joana; Chen, Rong; Blomqvist, Hans; Koch, Sabine

    2016-06-04

Vital sign data are important for clinical decision making in emergency care. Clinical Decision Support Systems (CDSS) have been advocated to increase patient safety and quality of care. However, the efficiency of CDSS depends on the quality of the underlying vital sign data. Therefore, possible factors affecting vital sign data quality need to be understood. This study aims to explore the factors affecting vital sign data quality in Swedish emergency departments and to determine the extent to which clinicians perceive vital sign data to be fit for use in clinical decision support systems. A further aim of the study is to provide recommendations on how to improve vital sign data quality in emergency departments. Semi-structured interviews were conducted with sixteen physicians and nurses from nine hospitals, and vital sign documentation templates were collected and analysed. Follow-up interviews and process observations were conducted at three of the hospitals to verify the results. Content analysis with constant comparison of the data was used to analyse and categorize the collected data. Factors related to the care process and information technology were perceived to affect vital sign data quality. Despite electronic health records (EHRs) being available in all hospitals, these were not always used for vital sign documentation. Only four out of nine sites had a completely digitalized vital sign documentation flow, and paper-based triage records were perceived to provide better mobile workflow support than EHRs. Observed documentation practices resulted in low currency, completeness, and interoperability of the vital signs. To improve vital sign data quality, we propose to standardize the care process, improve the digital documentation support, provide workflow support, ensure interoperability, and perform quality control. Vital sign data quality in Swedish emergency departments is currently not fit for use by CDSS. To address both technical and organisational challenges, we propose five steps for vital sign data quality improvement to be implemented in emergency care settings.

  4. Using the Cross-Correlation Function to Evaluate the Quality of Eddy-Covariance Data

    NASA Astrophysics Data System (ADS)

    Qi, Yongfeng; Shang, Xiaodong; Chen, Guiying; Gao, Zhiqiu; Bi, Xueyan

    2015-11-01

A cross-correlation test is proposed for evaluating the quality of 30-min eddy-covariance data. Cross-correlation as a function of time lag is computed for vertical velocity paired with temperature, humidity, and carbon dioxide concentration. High quality data have a dominant peak at zero time lag and approach zero within a time lag of 20 s. Poor quality data have erratic cross-correlation functions, which indicates that the eddy flux may no longer represent the energy and mass exchange between the atmospheric surface layer and the canopy, and such data should be rejected in post-data analyses. Eddy-covariance data over grassland in July 2004 are used to evaluate the proposed test. The results show that 17, 29, and 36% of the available data should be rejected because of poor quality measurements of sensible heat, latent heat, and CO2 fluxes, respectively. The rejected data mainly occurred on calm nights and day/night transitions when the atmospheric surface layer became stable or neutrally stratified. We found no friction velocity (u*) threshold below which all data should be rejected, a test that many other studies have implemented for rejecting questionable data. We instead found that some data with low u* were reliable, whereas other data with higher u* were not. The poor quality measurements collected under less than ideal conditions were replaced by using the mean diurnal variation gap-filling method. The correction for poor quality data shifted the daily average CO2 flux by +0.34 g C m⁻² day⁻¹. After applying the quality-control test, the eddy CO2 fluxes did not display a clear dependence on u*. The results suggest that the cross-correlation test is a potentially valuable step in evaluating the quality of eddy-covariance data.
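
    The test itself is a small computation: build the cross-correlation function of vertical velocity with a scalar (here, temperature) over a range of lags, then apply the two stated criteria, a dominant peak at zero lag and decay toward zero beyond 20 s. The sketch below is an assumed operationalization with synthetic data; the 10 Hz sampling rate and the 20%-of-peak decay cutoff are placeholders, not values from the paper.

```python
import numpy as np

def lagged_corr(w, t, k):
    """Correlation of w[i] with t[i + k] for integer lag k (samples)."""
    if k > 0:
        return np.corrcoef(w[:-k], t[k:])[0, 1]
    if k < 0:
        return np.corrcoef(w[-k:], t[:k])[0, 1]
    return np.corrcoef(w, t)[0, 1]

def passes_cross_correlation_test(w, t, fs_hz=10.0, window_s=20.0):
    """Assumed criteria: cross-correlation peaks at zero lag, and falls
    below 20% of the peak (assumed cutoff) beyond the 20-s window."""
    max_lag = int(3 * window_s * fs_hz)       # scan lags to +/- 60 s
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([lagged_corr(w, t, int(k)) for k in lags])
    peak_at_zero = int(np.argmax(np.abs(r))) == max_lag
    tail = np.abs(r[np.abs(lags) > window_s * fs_hz])
    return peak_at_zero and tail.max() < 0.2 * abs(r[max_lag])

# Synthetic 30-min series at 10 Hz: w and T share a common signal.
rng = np.random.default_rng(0)
n = 18000
common = rng.standard_normal(n)
w = common + 0.5 * rng.standard_normal(n)
temp = common + 0.5 * rng.standard_normal(n)
print("correlated pair passes:", passes_cross_correlation_test(w, temp))
print("pure noise passes:", passes_cross_correlation_test(
    rng.standard_normal(n), rng.standard_normal(n)))
```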

  5. Data Quality Parameters and Web Services Facilitate User Access to Research-Ready Seismic Data

    NASA Astrophysics Data System (ADS)

    Trabant, C. M.; Templeton, M. E.; Van Fossen, M.; Weertman, B.; Ahern, T. K.; Casey, R. E.; Keyson, L.; Sharer, G.

    2016-12-01

    IRIS Data Services has the mission of providing efficient access to a wide variety of seismic and related geoscience data to the user community. With our vast archive of freely available data, we recognize that there is a constant challenge to provide data to scientists and students that are of a consistently useful level of quality. To address this issue, we began by undertaking a comprehensive survey of the data and generating metrics measurements that provide estimates of data quality. These measurements can inform the scientist of the level of suitability of a given set of data for their scientific investigation. They also serve as a quality assurance check for network operators, who can act on this information to improve their current recording or mitigate issues with already recorded data and metadata. Following this effort, IRIS Data Services is moving forward to focus on providing tools for the scientist that make it easier to access data of a quality and characteristic that suits their investigation. Data that fulfill this criterion are termed "research-ready". In addition to filtering data by type, geographic location, proximity to events, and specific time ranges, we will offer the ability to filter data based on specific quality assessments. These include signal-to-noise ratio measurements, data continuity, timing quality, absence of channel cross-talk, and potentially many other factors. Our goal is to ensure that the user receives only the data that meets their specifications and will not require extensive review and culling after delivery. We will present the latest developments of the MUSTANG automated data quality system and introduce the Research-Ready Data Sets (RRDS) service. Together these two technologies serve as a data quality assurance ecosystem that will provide benefit to the scientific community by aiding efforts to readily find appropriate and suitable data for use in any number of objectives.
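
    Quality measurements like these reduce to simple functions of the waveform and its gaps. The sketch below computes two illustrative metrics, percent availability from gap durations and a signal-to-noise ratio around an arrival pick, in the spirit of automated metrics such as MUSTANG's; the formulas and parameters are simplified stand-ins, not MUSTANG's actual definitions.

```python
import numpy as np

def percent_availability(gap_seconds, window_seconds):
    """Fraction of a time window actually covered by data."""
    return 100.0 * (1.0 - sum(gap_seconds) / window_seconds)

def snr(trace, pick_index, window=500):
    """Simple signal-to-noise ratio: RMS amplitude after an arrival
    pick divided by RMS amplitude before it."""
    def rms(x):
        return float(np.sqrt(np.mean(x ** 2)))
    noise = trace[pick_index - window:pick_index]
    signal = trace[pick_index:pick_index + window]
    return rms(signal) / rms(noise)

# Synthetic trace: background noise plus a burst after the "pick".
rng = np.random.default_rng(1)
trace = rng.standard_normal(2000)
trace[1000:1500] += 8.0 * np.sin(np.linspace(0, 40 * np.pi, 500))
print(f"availability: {percent_availability([120, 35], 86400):.2f}%")
print(f"SNR at pick:  {snr(trace, 1000):.1f}")
```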

  6. Engineering and Design: Chemical Data Quality Management for Hazardous, Toxic, Radioactive Waste Remedial Activities

    DTIC Science & Technology

This regulation prescribes Chemical Data Quality Management (CDQM) responsibilities and procedures for projects involving hazardous, toxic and/or radioactive waste (HTRW) materials. Its purpose is to assure that the analytical data meet project data quality objectives. This is the umbrella regulation that defines CDQM activities and integrates all of the other U.S. Army Corps of Engineers (USACE) guidance on environmental data quality management.

  7. Driving photomask supplier quality through automation

    NASA Astrophysics Data System (ADS)

    Russell, Drew; Espenscheid, Andrew

    2007-10-01

    In 2005, Freescale Semiconductor's newly centralized mask data prep organization (MSO) initiated a project to develop an automated global quality validation system for photomasks delivered to Freescale Semiconductor fabs. The system handles Certificate of Conformance (CofC) quality metric collection, validation, reporting and an alert system for all photomasks shipped to Freescale fabs from all qualified global suppliers. The completed system automatically collects 30+ quality metrics for each photomask shipped. Other quality metrics are generated from the collected data and quality metric conformance is automatically validated to specifications or control limits with failure alerts emailed to fab photomask and mask data prep engineering. A quality data warehouse stores the data for future analysis, which is performed quarterly. The improved access to data provided by the system has improved Freescale engineers' ability to spot trends and opportunities for improvement with our suppliers' processes. This paper will review each phase of the project, current system capabilities and quality system benefits for both our photomask suppliers and Freescale.

  8. Systematic monitoring and evaluation of M7 scanner performance and data quality

    NASA Technical Reports Server (NTRS)

    Stewart, S.; Christenson, D.; Larsen, L.

    1974-01-01

An investigation was conducted to provide the information required to maintain data quality of the Michigan M7 Multispectral scanner by systematic checks on specific system performance characteristics. Data processing techniques which use calibration data gathered routinely every mission have been developed to assess current data quality. Significant changes from past data quality are thus identified and attempts made to discover their causes. Procedures for systematic monitoring of scanner data quality are discussed. In the solar reflective region, calculations of Noise Equivalent Change in Radiance on a per-mission basis are compared to theoretical tape-recorder limits to provide an estimate of overall scanner performance. M7 signal/noise characteristics are examined.

  9. Application of advanced data collection and quality assurance methods in open prospective study - a case study of PONS project.

    PubMed

    Wawrzyniak, Zbigniew M; Paczesny, Daniel; Mańczuk, Marta; Zatoński, Witold A

    2011-01-01

Large-scale epidemiologic studies can assess health indicators that differentiate social groups, along with important health outcomes such as the incidence and mortality of cancer and cardiovascular disease, and so establish a solid knowledge base for preventing the causes of premature morbidity and mortality. This study presents advanced methods of data collection and data management, with ongoing data quality control and security, to ensure high-quality assessment of health indicators in the large epidemiologic PONS study (The Polish-Norwegian Study). The material for the experiment is the data management design of the large-scale population study in Poland (PONS), and the managed processes are applied to establishing high-quality, solid knowledge. The functional requirements of PONS data collection, supported by advanced web-based IT methods, were fulfilled by the IT system: medical data of high quality, data security, and shared processes for quality assessment, control, and evolution monitoring. Data from disparate, distributed sources of information are integrated into databases via software interfaces and archived by a multi-task secure server. The implemented solution, built on modern database technologies and a remote software/hardware structure, successfully supports the research of the large PONS study project. Follow-up control of the consistency and quality of data analysis and of the processes of the PONS sub-databases shows excellent measurement properties, with data consistency above 99%. The project itself, through a tailored hardware/software application, shows the positive impact of quality assurance (QA) on the quality of analysis results and on effective data management within a shorter time. This efficiency ensures the quality of the epidemiological data and health indicators by eliminating common errors in research questionnaires and medical measurements.

  10. Research on Holographic Evaluation of Service Quality in Power Data Network

    NASA Astrophysics Data System (ADS)

    Wei, Chen; Jing, Tao; Ji, Yutong

    2018-01-01

With the rapid development of power data networks and the continuous development of power data application service systems, more and more service systems are being put into operation. This raises higher requirements for network quality and service quality in actual network operation and maintenance. This paper describes the current status of the electric power network and its data network services. A holographic assessment model is presented to achieve comprehensive, intelligent assessment of the power data network and of its quality of service during operation and maintenance. This evaluation method avoids the problems caused by traditional approaches, which assess network performance quality in isolation, and it can improve the efficiency of network operation and maintenance while guaranteeing the quality of real-time services in the power data network.

  11. How should the completeness and quality of curated nanomaterial data be evaluated?†

    PubMed Central

    Marchese Robinson, Richard L.; Lynch, Iseult; Peijnenburg, Willie; Rumble, John; Klaessig, Fred; Marquardt, Clarissa; Rauscher, Hubert; Puzyn, Tomasz; Purian, Ronit; Åberg, Christoffer; Karcher, Sandra; Vriens, Hanne; Hoet, Peter; Hoover, Mark D.; Hendren, Christine Ogilvie; Harper, Stacey L.

    2016-01-01

    Nanotechnology is of increasing significance. Curation of nanomaterial data into electronic databases offers opportunities to better understand and predict nanomaterials’ behaviour. This supports innovation in, and regulation of, nanotechnology. It is commonly understood that curated data need to be sufficiently complete and of sufficient quality to serve their intended purpose. However, assessing data completeness and quality is non-trivial in general and is arguably especially difficult in the nanoscience area, given its highly multidisciplinary nature. The current article, part of the Nanomaterial Data Curation Initiative series, addresses how to assess the completeness and quality of (curated) nanomaterial data. In order to address this key challenge, a variety of related issues are discussed: the meaning and importance of data completeness and quality, existing approaches to their assessment and the key challenges associated with evaluating the completeness and quality of curated nanomaterial data. Considerations which are specific to the nanoscience area and lessons which can be learned from other relevant scientific disciplines are considered. Hence, the scope of this discussion ranges from physicochemical characterisation requirements for nanomaterials and interference of nanomaterials with nanotoxicology assays to broader issues such as minimum information checklists, toxicology data quality schemes and computational approaches that facilitate evaluation of the completeness and quality of (curated) data. This discussion is informed by a literature review and a survey of key nanomaterial data curation stakeholders. Finally, drawing upon this discussion, recommendations are presented concerning the central question: how should the completeness and quality of curated nanomaterial data be evaluated? PMID:27143028

  12. Current progress on GSN data quality evaluation

    NASA Astrophysics Data System (ADS)

    Davis, J. P.; Gee, L. S.; Anderson, K. R.; Ahern, T. K.

    2012-12-01

    We discuss ongoing work to assess and improve the quality of data collected from instruments deployed at the 150+ stations of the Global Seismographic Network (GSN). The USGS and the IRIS Consortium are coordinating efforts to emphasize data quality following completion of the major installation phase of the GSN and recapitalization of the network's data acquisition systems, ancillary equipment and many of the secondary seismic sensors. We highlight here procedures adopted by the network's operators, the USGS' Albuquerque Seismological Laboratory (ASL) and UCSD's Project IDA, to ensure that the quality of the waveforms collected is maximized, that published metadata accurately reflect the instrument response of the data acquisitions systems, and that the data users are informed of the status of the GSN data quality. Additional details can be found at the GSN Quality webpage (www.iris.edu/hq/programs/gsn/quality). The GSN network operation teams meet frequently to share information and techniques. While custom software developed by each network operator to identify and track known problems remains important, recent efforts are providing new resources and tools to evaluate waveform quality, including analysis provided by the Lamont Waveform Quality Center (www.ldeo.columbia.edu/~ekstrom/Projects/WQC.html) and synthetic seismograms made available through Princeton University's Near Real Time Global Seismicity Portal ( http://global.shakemovie.princeton.edu/home.jsp ) and developments such as the IRIS DMS's MUSTANG and the ASL's Data Quality Analyzer. We conclude with the concept of station certification, a comprehensive overview of a station's performance that we have developed to communicate to data users the state of data- and metadata quality. As progress is made to verify the response and performance of existing systems as well as analysis of past calibration signals and waveform data, we will update information on the GSN web portals to apprise users of the condition of each GSN station's data.

  13. Barriers to data quality resulting from the process of coding health information to administrative data: a qualitative study.

    PubMed

    Lucyk, Kelsey; Tang, Karen; Quan, Hude

    2017-11-22

Administrative health data are increasingly used for research and surveillance to inform decision-making because of their large sample sizes, geographic coverage, comprehensiveness, and possibility for longitudinal follow-up. Within Canadian provinces, individuals are assigned unique personal health numbers that allow for linkage of administrative health records in that jurisdiction. It is therefore necessary to ensure that these data are of high quality, and that chart information is accurately coded to this end. Our objective is to explore the potential barriers that exist for high quality data coding through qualitative inquiry into the roles and responsibilities of medical chart coders. We conducted semi-structured interviews with 28 medical chart coders from Alberta, Canada. We used thematic analysis and open-coded each transcript to understand the process of administrative health data generation and identify barriers to its quality. The process of generating administrative health data is highly complex and involves a diverse workforce. As such, there are multiple points in this process that introduce challenges for high quality data. For coders, the main barriers to data quality occurred around chart documentation, variability in the interpretation of chart information, and high quota expectations. This study illustrates the complex nature of barriers to high quality coding, in the context of administrative data generation. The findings from this study may be of use to data users, researchers, and decision-makers who wish to better understand the limitations of their data or pursue interventions to improve data quality.

  14. Improving data quality across 3 sub-Saharan African countries using the Consolidated Framework for Implementation Research (CFIR): results from the African Health Initiative.

    PubMed

    Gimbel, Sarah; Mwanza, Moses; Nisingizwe, Marie Paul; Michel, Cathy; Hirschhorn, Lisa

    2017-12-21

    High-quality data are critical to inform, monitor and manage health programs. Over the seven-year African Health Initiative of the Doris Duke Charitable Foundation, three of the five Population Health Implementation and Training (PHIT) partnership projects in Mozambique, Rwanda, and Zambia introduced strategies to improve the quality and evaluation of routinely-collected data at the primary health care level, and stimulate its use in evidence-based decision-making. Using the Consolidated Framework for Implementation Research (CFIR) as a guide, this paper: 1) describes and categorizes data quality assessment and improvement activities of the projects, and 2) identifies core intervention components and implementation strategy adaptations introduced to improve data quality in each setting. The CFIR was adapted through a qualitative theme reduction process involving discussions with key informants from each project, who identified two domains and ten constructs most relevant to the study aim of describing and comparing each country's data quality assessment approach and implementation process. Data were collected on each project's data quality improvement strategies, activities implemented, and results via a semi-structured questionnaire with closed and open-ended items administered to health management information systems leads in each country, with complementary data abstraction from project reports. Across the three projects, intervention components that aligned with user priorities and government systems were perceived to be relatively advantageous, and more readily adapted and adopted. Activities that both assessed and improved data quality (including data quality assessments, mentorship and supportive supervision, establishment and/or strengthening of electronic medical record systems), received higher ranking scores from respondents. Our findings suggest that, at a minimum, successful data quality improvement efforts should include routine audits linked to ongoing, on-the-job mentoring at the point of service. This pairing of interventions engages health workers in data collection, cleaning, and analysis of real-world data, and thus provides important skills building with on-site mentoring. The effect of these core components is strengthened by performance review meetings that unify multiple health system levels (provincial, district, facility, and community) to assess data quality, highlight areas of weakness, and plan improvements.

  15. Quality control in public participation assessments of water quality: the OPAL Water Survey.

    PubMed

    Rose, N L; Turner, S D; Goldsmith, B; Gosling, L; Davidson, T A

    2016-07-22

    Public participation in scientific data collection is a rapidly expanding field. In water quality surveys, the involvement of the public, usually as trained volunteers, generally includes the identification of aquatic invertebrates to a broad taxonomic level. However, quality assurance is often not addressed and remains a key concern for the acceptance of publicly-generated water quality data. The Open Air Laboratories (OPAL) Water Survey, launched in May 2010, aimed to encourage interest and participation in water science by developing a 'low-barrier-to-entry' water quality survey. During 2010, over 3000 participant-selected lakes and ponds were surveyed making this the largest public participation lake and pond survey undertaken to date in the UK. But the OPAL approach of using untrained volunteers and largely anonymous data submission exacerbates quality control concerns. A number of approaches were used in order to address data quality issues including: sensitivity analysis to determine differences due to operator, sampling effort and duration; direct comparisons of identification between participants and experienced scientists; the use of a self-assessment identification quiz; the use of multiple participant surveys to assess data variability at single sites over short periods of time; comparison of survey techniques with other measurement variables and with other metrics generally considered more accurate. These quality control approaches were then used to screen the OPAL Water Survey data to generate a more robust dataset. The OPAL Water Survey results provide a regional and national assessment of water quality as well as a first national picture of water clarity (as suspended solids concentrations). Less than 10 % of lakes and ponds surveyed were 'poor' quality while 26.8 % were in the highest water quality band. It is likely that there will always be a question mark over untrained volunteer generated data simply because quality assurance is uncertain, regardless of any post hoc data analyses. Quality control at all stages, from survey design, identification tests, data submission and interpretation can all increase confidence such that useful data can be generated by public participants.

  16. Water quality assessment with hierarchical cluster analysis based on Mahalanobis distance.

    PubMed

    Du, Xiangjun; Shao, Fengjing; Wu, Shunyao; Zhang, Hanlin; Xu, Si

    2017-07-01

Water quality assessment is crucial for assessment of marine eutrophication, prediction of harmful algal blooms, and environment protection. Previous studies have developed many numeric modeling methods and data driven approaches for water quality assessment. The cluster analysis, an approach widely used for grouping data, has also been employed. However, there are complex correlations between water quality variables, which play important roles in water quality assessment but have always been overlooked. In this paper, we analyze correlations between water quality variables and propose an alternative method for water quality assessment with hierarchical cluster analysis based on Mahalanobis distance. Further, we cluster water quality data collected from coastal waters of the Bohai Sea and North Yellow Sea of China, and apply the clustering results to evaluate water quality. To evaluate the validity, we also cluster the water quality data with cluster analysis based on Euclidean distance, which is widely adopted in previous studies. The results show that our method is more suitable for water quality assessment with many correlated water quality variables. To our knowledge, it is the first attempt to apply Mahalanobis distance to coastal water quality assessment.
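
    The proposed combination is directly expressible with SciPy: compute pairwise Mahalanobis distances using the inverse covariance of the water-quality variables, then feed the condensed distance matrix to hierarchical clustering. The sketch below uses synthetic correlated data, since the Bohai Sea measurements are not reproduced here; the variable names and cluster count are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Synthetic water-quality samples (e.g., nitrogen, phosphate, chl-a)
# with correlated variables, standing in for real monitoring data.
clean = rng.multivariate_normal([1.0, 0.1, 2.0],
                                [[0.04, 0.01, 0.02],
                                 [0.01, 0.01, 0.01],
                                 [0.02, 0.01, 0.09]], size=20)
eutrophic = clean + np.array([1.5, 0.4, 3.0])
samples = np.vstack([clean, eutrophic])

# Mahalanobis distance accounts for the correlations between
# variables that Euclidean distance ignores.
inv_cov = np.linalg.inv(np.cov(samples.T))
dist = pdist(samples, metric="mahalanobis", VI=inv_cov)

# Hierarchical clustering on the condensed distance matrix.
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```

    Swapping the metric argument to "euclidean" reproduces the baseline the authors compare against, which is a one-line change in this formulation.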

  17. Sources of Virginia meteorological and air quality data for use in highway air quality analysis with comments on their usefulness.

    DOT National Transportation Integrated Search

    1975-01-01

    The preparation of accurate air quality analysis portions of highway environmental impact statements requires valid meteorological and air quality data. These data are needed, in part, to determine the regional and local wind patterns on which pollut...

  18. 42 CFR 482.21 - Condition of participation: Quality assessment and performance improvement program.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... quality improvement and patient safety, including the reduction of medical errors, is defined, implemented... address priorities for improved quality of care and patient safety; and that all improvement actions are... incorporate quality indicator data including patient care data, and other relevant data, for example...

  19. 42 CFR 482.21 - Condition of participation: Quality assessment and performance improvement program.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... quality improvement and patient safety, including the reduction of medical errors, is defined, implemented... address priorities for improved quality of care and patient safety; and that all improvement actions are... incorporate quality indicator data including patient care data, and other relevant data, for example...

  20. 42 CFR 482.21 - Condition of participation: Quality assessment and performance improvement program.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... quality improvement and patient safety, including the reduction of medical errors, is defined, implemented... address priorities for improved quality of care and patient safety; and that all improvement actions are... incorporate quality indicator data including patient care data, and other relevant data, for example...

  1. Watershed Reliability, Resilience And Vulnerability Analysis Under Uncertainty Using Water Quality Data

    EPA Science Inventory

    A method for assessment of watershed health is developed by employing measures of reliability, resilience and vulnerability (R-R-V) using stream water quality data. Observed water quality data are usually sparse, so that a water quality time series is often reconstructed using s...
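
    The R-R-V measures have a standard operational form in the water-resources literature (following the classic Hashimoto et al. formulation): reliability is the fraction of time the standard is met, resilience the probability of returning to compliance once violated, and vulnerability the expected severity of violations. A minimal sketch under those definitions, with a synthetic concentration series and an assumed threshold:

```python
import numpy as np

def rrv(series, threshold):
    """Reliability, resilience, and vulnerability of a water quality
    series against a 'concentration stays below threshold' standard."""
    fail = series > threshold
    reliability = 1.0 - fail.mean()
    # Resilience: of the time steps in failure, how often does the
    # next step return to compliance?
    recoveries = (fail[:-1] & ~fail[1:]).sum()
    resilience = recoveries / fail[:-1].sum() if fail[:-1].sum() else 1.0
    # Vulnerability: mean exceedance magnitude during failures.
    vulnerability = (series[fail] - threshold).mean() if fail.any() else 0.0
    return reliability, resilience, vulnerability

# Synthetic daily concentrations (mg/L) as a slow random walk.
rng = np.random.default_rng(3)
concentration = 5.0 + rng.standard_normal(365).cumsum() * 0.1
rel, res, vul = rrv(concentration, threshold=6.0)
print(f"reliability={rel:.2f}  resilience={res:.2f}  "
      f"vulnerability={vul:.2f} mg/L")
```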

  2. Assessing Quality of Data Standards: Framework and Illustration Using XBRL GAAP Taxonomy

    NASA Astrophysics Data System (ADS)

    Zhu, Hongwei; Wu, Harris

    The primary purpose of data standards or metadata schemas is to improve the interoperability of data created by multiple standard users. Given the high cost of developing data standards, it is desirable to assess the quality of data standards. We develop a set of metrics and a framework for assessing data standard quality. The metrics include completeness and relevancy. Standard quality can also be indirectly measured by assessing interoperability of data instances. We evaluate the framework using data from the financial sector: the XBRL (eXtensible Business Reporting Language) GAAP (Generally Accepted Accounting Principles) taxonomy and US Securities and Exchange Commission (SEC) filings produced using the taxonomy by approximately 500 companies. The results show that the framework is useful and effective. Our analysis also reveals quality issues of the GAAP taxonomy and provides useful feedback to taxonomy users. The SEC has mandated that all publicly listed companies must submit their filings using XBRL. Our findings are timely and have practical implications that will ultimately help improve the quality of financial data.
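
    The two metrics have natural set-based readings: completeness asks how much of what filers actually report the standard covers (the remainder becoming custom extensions), and relevancy asks how much of the standard filers actually use. The sketch below is a minimal illustration under those assumed operationalizations, not the paper's exact formulas, with a made-up miniature taxonomy.

```python
def completeness(standard_elements, reported_concepts):
    """Share of concepts companies actually report that the
    standard covers (uncovered concepts become custom extensions)."""
    covered = reported_concepts & standard_elements
    return len(covered) / len(reported_concepts)

def relevancy(standard_elements, reported_concepts):
    """Share of the standard's elements that filers actually use."""
    used = standard_elements & reported_concepts
    return len(used) / len(standard_elements)

# Hypothetical miniature taxonomy and the concepts used in filings.
taxonomy = {"Assets", "Liabilities", "Revenues", "NetIncome", "Goodwill"}
filings = {"Assets", "Liabilities", "Revenues", "NetIncome",
           "AdjustedEBITDA"}  # custom extension not in the taxonomy

print(f"completeness: {completeness(taxonomy, filings):.0%}")
print(f"relevancy:    {relevancy(taxonomy, filings):.0%}")
```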

  3. Measuring Quality of Healthcare Outcomes in Type 2 Diabetes from Routine Data: a Seven-nation Survey Conducted by the IMIA Primary Health Care Working Group.

    PubMed

    Hinton, W; Liyanage, H; McGovern, A; Liaw, S-T; Kuziemsky, C; Munro, N; de Lusignan, S

    2017-08-01

Background: The Institute of Medicine framework defines six dimensions of quality for healthcare systems: (1) safety, (2) effectiveness, (3) patient centeredness, (4) timeliness of care, (5) efficiency, and (6) equity. Large health datasets provide an opportunity to assess quality in these areas. Objective: To perform an international comparison of the measurability of the delivery of these aims, in people with type 2 diabetes mellitus (T2DM), from large datasets. Method: We conducted a survey to assess the healthcare outcomes data quality of existing databases and disseminated this through professional networks. We examined the data sources used to collect the data, frequency of data uploads, and data types used for identifying people with T2DM. We compared data completeness across the six areas of healthcare quality, using selected measures pertinent to T2DM management. Results: We received 14 responses from seven countries (Australia, Canada, Italy, the Netherlands, Norway, Portugal, Turkey and the UK). Most databases reported frequent data uploads and would be capable of near real time analysis of healthcare quality. The majority of recorded data related to safety (particularly medication adverse events) and treatment efficacy (glycaemic control and microvascular disease). Data potentially measuring equity were less well recorded. Recording levels were lowest for patient-centred care, timeliness of care, and system efficiency, with the majority of databases containing no data in these areas. Databases using primary care sources had higher data quality across all areas measured. Conclusion: Data quality could be improved, particularly in the areas of patient-centred care, timeliness, and efficiency. Primary care derived datasets may be most suited to healthcare quality assessment.

  4. Data quality through a web-based QA/QC system: implementation for atmospheric mercury data from the global mercury observation system.

    PubMed

    D'Amore, Francesco; Bencardino, Mariantonia; Cinnirella, Sergio; Sprovieri, Francesca; Pirrone, Nicola

    2015-08-01

    The overall goal of the on-going Global Mercury Observation System (GMOS) project is to develop a coordinated global monitoring network for mercury, including ground-based, high-altitude and sea-level stations. In order to ensure data reliability and comparability, a significant effort has been made to implement a centralized system, which is designed to quality assure and quality control atmospheric mercury datasets. This system, GMOS-Data Quality Management (G-DQM), uses a web-based approach with real-time adaptive monitoring procedures aimed at preventing the production of poor-quality data. G-DQM is built on a cyberinfrastructure and deployed as a service. Atmospheric mercury datasets produced during the first three years of the GMOS project are used as the input to demonstrate the application of G-DQM and how it identifies a number of key issues concerning data quality. The major issues influencing data quality are presented and discussed for the GMOS stations under study. Atmospheric mercury data collected at the Longobucco (Italy) station are used as a detailed case study.

  5. Sports Injury Surveillance Systems: A Review of Methods and Data Quality.

    PubMed

    Ekegren, Christina L; Gabbe, Belinda J; Finch, Caroline F

    2016-01-01

    Data from sports injury surveillance systems are a prerequisite to the development and evaluation of injury prevention strategies. This review aimed to identify ongoing sports injury surveillance systems and determine whether there are gaps in our understanding of injuries in certain sport settings. A secondary aim was to determine which of the included surveillance systems have evaluated the quality of their data, a key factor in determining their usefulness. A systematic search was carried out to identify (1) publications presenting methodological details of sports injury surveillance systems within clubs and organisations; and (2) publications describing quality evaluations and the quality of data from these systems. Data extracted included methodological details of the surveillance systems, methods used to evaluate data quality, and results of these evaluations. Following literature search and review, a total of 15 sports injury surveillance systems were identified. Data relevant to each aim were summarised descriptively. Most systems were found to exist within professional and elite sports. Publications concerning data quality were identified for seven (47%) systems. Validation of system data through comparison with alternate sources has been undertaken for only four systems (27%). This review identified a shortage of ongoing injury surveillance data from amateur and community sport settings and limited information about the quality of data in professional and elite settings. More surveillance systems are needed across a range of sport settings, as are standards for data quality reporting. These efforts will enable better monitoring of sports injury trends and the development of sports safety strategies.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holladay, S.K.; Anderson, H.M.; Benson, S.B.

    Quality assurance (QA) objectives for Phase 2 were that (1) scientific data generated would withstand scientific and legal scrutiny; (2) data would be gathered using appropriate procedures for sample collection, sample handling and security, chain of custody, laboratory analyses, and data reporting; (3) data would be of known precision and accuracy; and (4) data would meet data quality objectives defined in the Phase 2 Sampling and Analysis Plan. A review of the QA systems and quality control (QC) data associated with the Phase 2 investigation is presented to evaluate whether the data were of sufficient quality to satisfy Phase 2 objectives. The data quality indicators of precision, accuracy, representativeness, comparability, completeness, and sensitivity were evaluated to determine any limitations associated with the data. Data were flagged with qualifiers that were associated with appropriate reason codes and documentation relating the qualifiers to the reviewer of the data. These qualifiers were then consolidated into an overall final qualifier to represent the quality of the data to the end user. In summary, reproducible, precise, and accurate measurements consistent with CRRI objectives and the limitations of the sampling and analytical procedures used were obtained for the data collected in support of the Phase 2 Remedial Investigation.
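
    The consolidation of per-check qualifiers into an overall final qualifier can be sketched as a worst-case precedence rule. The qualifier codes and their ordering below are assumptions for illustration, not the project's documented scheme.

    ```python
    # Assumed severity order: rejected > estimated > non-detect > unqualified.
    PRECEDENCE = ["R", "J", "U", ""]

    def final_qualifier(qualifiers):
        """Return the most severe qualifier attached to a result."""
        return min(qualifiers, key=PRECEDENCE.index)

    print(final_qualifier(["", "J", "U"]))   # -> "J"
    ```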

  7. [The importance of data].

    PubMed

    Planas, M; Rodríguez, T; Lecha, M

    2004-01-01

    Decisions have to be made about what data on patient characteristics, processes, and outcomes need to be collected, and standard definitions of these data items need to be developed, so that data quality concerns can be identified as promptly as possible and ways to improve data quality established. The usefulness of any clinical database depends strongly on the quality of the collected data. If the data quality is poor, the results of studies using the database might be biased and unreliable. Furthermore, if the quality of the database has not been verified, the results might be given little credence, especially if they are unwelcome or unexpected. To assure the quality of a clinical database, it is essential to define clearly the uses to which the database will be put; the database should be developed to be comprehensive in its usefulness but limited in its size.

  8. Applications of MIDAS regression in analysing trends in water quality

    NASA Astrophysics Data System (ADS)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets with different classes of variables, including water-quality, hydrological and meteorological variables. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model suits the mixed frequencies at which the data are collected: typically, water quality variables are sampled fortnightly, whereas rainfall data are sampled daily. The advantage of MIDAS regression is its flexible and parsimonious modelling of the influence of rainfall and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed-frequency nature of the data.
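
    As a rough illustration of the model class, the sketch below fits a toy MIDAS regression: a fortnightly response is regressed on daily rainfall lags weighted by an exponential Almon polynomial. All data and parameter values are synthetic; this is my simplification of the approach, not the authors' implementation.

    ```python
    # Toy MIDAS regression: fortnightly y on K daily rainfall lags.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    K, n = 14, 200                               # daily lags; fortnightly obs
    rain = rng.gamma(2.0, 3.0, size=n * K)       # synthetic daily rainfall

    def almon_weights(theta1, theta2, K):
        """Exponential Almon lag weights, normalised to sum to one."""
        k = np.arange(K)
        w = np.exp(theta1 * k + theta2 * k ** 2)
        return w / w.sum()

    # Row t holds the K daily rainfall values preceding observation t.
    X = rain.reshape(n, K)

    # Simulate a "true" response so the fit can be checked.
    true_w = almon_weights(0.1, -0.05, K)
    y = 1.5 + 0.8 * X @ true_w + rng.normal(0, 0.2, size=n)

    def residuals(params):
        b0, b1, th1, th2 = params
        return y - (b0 + b1 * X @ almon_weights(th1, th2, K))

    fit = least_squares(residuals, x0=[0.0, 1.0, 0.0, 0.0])
    print("estimated [b0, b1, theta1, theta2]:", np.round(fit.x, 3))
    ```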

  9. Quality-assurance and data management plan for groundwater activities by the U.S. Geological Survey in Kansas, 2014

    USGS Publications Warehouse

    Putnam, James E.; Hansen, Cristi V.

    2014-01-01

    As the Nation’s principal earth-science information agency, the U.S. Geological Survey (USGS) is depended on to collect data of the highest quality. This document is a quality-assurance plan for groundwater activities (GWQAP) of the Kansas Water Science Center. The purpose of this GWQAP is to establish a minimum set of guidelines and practices to be used by the Kansas Water Science Center to ensure quality in groundwater activities. Included within these practices are the assignment of responsibilities for implementing quality-assurance activities in the Kansas Water Science Center and establishment of review procedures needed to ensure the technical quality and reliability of the groundwater products. In addition, this GWQAP is intended to complement quality-assurance plans for surface-water and water-quality activities and similar plans for the Kansas Water Science Center and general project activities throughout the USGS. This document provides the framework for collecting, analyzing, and reporting groundwater data that are quality assured and quality controlled. This GWQAP presents policies directing the collection, processing, analysis, storage, review, and publication of groundwater data. In addition, policies related to organizational responsibilities, training, project planning, and safety are presented. These policies and practices pertain to all groundwater activities conducted by the Kansas Water Science Center, including data-collection programs and interpretive and research projects. This report also includes the data management plan that describes the progression of data management from data collection to archiving and publication.

  10. Implementation of a hospital-based quality assessment program for rectal cancer.

    PubMed

    Hendren, Samantha; McKeown, Ellen; Morris, Arden M; Wong, Sandra L; Oerline, Mary; Poe, Lyndia; Campbell, Darrell A; Birkmeyer, Nancy J

    2014-05-01

    Quality improvement programs in Europe have had a markedly beneficial effect on the processes and outcomes of rectal cancer care. The quality of rectal cancer care in the United States is not as well understood, and scalable quality improvement programs have not been developed. The purpose of this article is to describe the implementation of a hospital-based quality assessment program for rectal cancer, targeting both community and academic hospitals. We recruited 10 hospitals from a surgical quality improvement organization. Nurse reviewers were trained to abstract rectal cancer data from hospital medical records, and abstracts were assessed for accuracy. We conducted two surveys to assess the training program and limitations of the data abstraction. We validated data completeness and accuracy by comparing hospital medical record and tumor registry data. Nine of 10 hospitals successfully performed abstractions with ≥ 90% accuracy. Experienced nurse reviewers were challenged by the technical details in operative and pathology reports. Although most variables had less than 10% missing data, outpatient testing information was lacking from some hospitals' inpatient records. This implementation project yielded a final quality assessment program consisting of 20 medical records variables and 11 tumor registry variables. An innovative program linking tumor registry data to quality-improvement data for rectal cancer quality assessment was successfully implemented in 10 hospitals. This data platform and training program can serve as a template for other organizations that are interested in assessing and improving the quality of rectal cancer care. Copyright © 2014 by American Society of Clinical Oncology.

  11. Assessing Subjectivity in Sensor Data Post Processing via a Controlled Experiment

    NASA Astrophysics Data System (ADS)

    Jones, A. S.; Horsburgh, J. S.; Eiriksson, D.

    2017-12-01

    Environmental data collected by in situ sensors must be reviewed to verify validity, and conducting quality control often requires making edits in post processing to generate approved datasets. This process involves decisions by technicians, data managers, or data users on how to handle problematic data. Options include: removing data from a series, retaining data with annotations, and altering data based on algorithms related to adjacent data points or the patterns of data at other locations or of other variables. Ideally, given the same dataset and the same quality control guidelines, multiple data quality control technicians would make the same decisions in data post processing. However, despite the development and implementation of guidelines aimed to ensure consistent quality control procedures, we have faced ambiguity when performing post processing, and we have noticed inconsistencies in the practices of individuals performing quality control post processing. Technicians with the same level of training and using the same input datasets may produce different results, affecting the overall quality and comparability of finished data products. Different results may also be produced by technicians that do not have the same level of training. In order to assess the effect of subjective decision making by the individual technician on the end data product, we designed an experiment where multiple users performed quality control post processing on the same datasets using a consistent set of guidelines, field notes, and tools. We also assessed the effect of technician experience and training by conducting the same procedures with a group of novices unfamiliar with the data and the quality control process and compared their results to those generated by a group of more experienced technicians. In this presentation, we report our observations of the degree of subjectivity in sensor data post processing, assessing and quantifying the impacts of individual technician as well as technician experience on quality controlled data products.
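
    One simple way to quantify the inter-technician inconsistency described here is pairwise agreement between QC decisions on the same points. The sketch below is my illustration, not the study's analysis code; a chance-corrected statistic such as Cohen's kappa would be a natural refinement.

    ```python
    # Pairwise agreement between two technicians' keep/remove/adjust decisions.
    import numpy as np

    tech_a = np.array(["keep", "remove", "keep", "adjust", "keep"])
    tech_b = np.array(["keep", "remove", "adjust", "adjust", "remove"])

    agreement = (tech_a == tech_b).mean()
    print(f"pairwise agreement: {agreement:.0%}")   # 60% for this toy case
    ```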

  12. Information management for aged care provision in Australia: development of an aged care minimum dataset and strategies to improve quality and continuity of care.

    PubMed

    Davis, Jenny; Morgans, Amee; Burgess, Stephen

    2016-04-01

    Efficient information systems support the provision of multi-disciplinary aged care and a variety of organisational purposes, including quality, funding, communication and continuity of care. Agreed minimum data sets enable accurate communication across multiple care settings. However, in aged care multiple and poorly integrated data collection frameworks are commonly used for client assessment, government reporting and funding purposes. To determine key information needs in aged care settings to improve information quality, information transfer, safety, quality and continuity of care to meet the complex needs of aged care clients. Modified Delphi methods involving five stages were employed by one aged care provider in Victoria, Australia, to establish stakeholder consensus for a derived minimum data set and address barriers to data quality. Eleven different aged care programs were identified; with five related data dictionaries, three minimum data sets, five program standards or quality frameworks. The remaining data collection frameworks related to diseases classification, funding, service activity reporting, and statistical standards and classifications. A total of 170 different data items collected across seven internal information systems were consolidated to a derived set of 60 core data items and aligned with nationally consistent data collection frameworks. Barriers to data quality related to inconsistencies in data items, staff knowledge, workflow, system access and configuration. The development an internal aged care minimum data set highlighted the critical role of primary data quality in the upstream and downstream use of client information; and presents a platform to build national consistency across the sector.

  13. From Field Notes to Data Portal - A Scalable Data QA/QC Framework for Tower Networks: Progress and Preliminary Results

    NASA Astrophysics Data System (ADS)

    Sturtevant, C.; Hackley, S.; Lee, R.; Holling, G.; Bonarrigo, S.

    2017-12-01

    Quality assurance and control (QA/QC) is one of the most important yet challenging aspects of producing research-quality data. Data quality issues are multi-faceted, including sensor malfunctions, unmet theoretical assumptions, and measurement interference from humans or the natural environment. Tower networks such as Ameriflux, ICOS, and NEON continue to grow in size and sophistication, yet tools for robust, efficient, scalable QA/QC have lagged. Quality control remains a largely manual process heavily relying on visual inspection of data. In addition, notes of measurement interference are often recorded on paper without an explicit pathway to data flagging. As such, an increase in network size requires a near-proportional increase in personnel devoted to QA/QC, quickly stressing the human resources available. We present a scalable QA/QC framework in development for NEON that combines the efficiency and standardization of automated checks with the power and flexibility of human review. This framework includes fast-response monitoring of sensor health, a mobile application for electronically recording maintenance activities, traditional point-based automated quality flagging, and continuous monitoring of quality outcomes and longer-term holistic evaluations. This framework maintains the traceability of quality information along the entirety of the data generation pipeline, and explicitly links field reports of measurement interference to quality flagging. Preliminary results show that data quality can be effectively monitored and managed for a multitude of sites with a small group of QA/QC staff. Several components of this framework are open-source, including an R Shiny application for efficiently monitoring, synthesizing, and investigating data quality issues.
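
    The point-based automated quality flagging mentioned above typically combines range, step, and persistence tests. A minimal sketch follows; the thresholds and data are invented for illustration and are not NEON's published limits.

    ```python
    # Range, step, and persistence flags for a sensor series (toy thresholds).
    import numpy as np

    def flag_points(x, lo=-40.0, hi=60.0, max_step=5.0, max_flat=4):
        flags = np.zeros(len(x), dtype=bool)
        flags |= (x < lo) | (x > hi)                    # range test
        flags[1:] |= np.abs(np.diff(x)) > max_step      # step test
        for i in range(len(x) - max_flat):              # persistence test
            if np.all(x[i:i + max_flat + 1] == x[i]):
                flags[i:i + max_flat + 1] = True
        return flags

    temps = np.array([12.1, 12.3, 25.9, 12.4, 12.4, 12.4, 12.4, 12.4])
    print(flag_points(temps))
    ```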

  14. A systematic approach for evaluating and scoring human data.

    PubMed

    Money, Chris D; Tomenson, John A; Penman, Michael G; Boogaard, Peter J; Jeffrey Lewis, R

    2013-07-01

    An approach is described for how the quality of human data can be systematically assessed and categorised. The approach mirrors the animal data quality considerations set out by Klimisch et al., in order that human data quality can be addressed in a complementary manner and to help facilitate transparent (and repeatable) weight of evidence comparisons. Definitions are proposed for the quality and adequacy of data. Quality is differentiated into four categories. A description of how the scheme can be used for evaluating data reliability, especially for use when contributing entries to the IUCLID database, is shown. A discussion of how the criteria might also be used when determining overall data relevance is included. The approach is intended to help harmonise human data evaluation processes worldwide. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Data quality can make or break a research infrastructure

    NASA Astrophysics Data System (ADS)

    Pastorello, G.; Gunter, D.; Chu, H.; Christianson, D. S.; Trotta, C.; Canfora, E.; Faybishenko, B.; Cheah, Y. W.; Beekwilder, N.; Chan, S.; Dengel, S.; Keenan, T. F.; O'Brien, F.; Elbashandy, A.; Poindexter, C.; Humphrey, M.; Papale, D.; Agarwal, D.

    2017-12-01

    Research infrastructures (RIs) commonly support observational data provided by multiple, independent sources. Uniformity in the data distributed by such RIs is important in most applications, e.g., in comparative studies using data from two or more sources. Achieving uniformity in terms of data quality is challenging, especially considering that many data issues are unpredictable and cannot be detected until a first occurrence of the issue. As a result, many data quality control activities within RIs require a manual, human-in-the-loop element, making quality control expensive. Our motivating example is the FLUXNET2015 dataset - a collection of ecosystem-level carbon, water, and energy fluxes between land and atmosphere from over 200 sites around the world, some sites with over 20 years of data. About 90% of the human effort to create the dataset was spent in data quality related activities. Based on this experience, we have been working on solutions to increase the automation of data quality control procedures. Since it is nearly impossible to fully automate all quality-related checks, we have been drawing from the experience with techniques used in software development, which shares a few common constraints. In both managing scientific data and writing software, human time is a precious resource; code bases, like science datasets, can be large, complex, and full of errors; both scientific and software endeavors can be pursued by individuals, but collaborative teams can accomplish a lot more. The lucrative and fast-paced nature of the software industry fueled the creation of methods and tools to increase automation and productivity within these constraints. Issue tracking systems, methods for translating problems into automated tests, and powerful version control tools are a few examples. Terrestrial and aquatic ecosystems research relies heavily on many types of observational data. As the volume of data collected increases, ensuring data quality is becoming an unwieldy challenge for RIs. Business-as-usual approaches to data quality do not work with larger data volumes. We believe RIs can benefit greatly from adapting and imitating this body of theory and practice from software quality into data quality, enabling systematic and reproducible safeguards against errors and mistakes in datasets as much as in software.
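
    Translating the software analogy into practice might look like unit tests that run against a dataset rather than a code base. The sketch below is a hypothetical example in that spirit; the check names, bounds, and data are mine, not FLUXNET2015's.

    ```python
    # Data-quality checks written in the style of software unit tests.
    import numpy as np

    def test_no_gaps(timestamps, step=1800):
        assert np.all(np.diff(timestamps) == step), "missing half-hours"

    def test_flux_within_physical_bounds(nee):
        assert np.nanmin(nee) > -100 and np.nanmax(nee) < 100, "NEE out of range"

    ts = np.arange(0, 5 * 1800, 1800)                 # half-hourly timestamps
    nee = np.array([-3.2, 1.4, np.nan, 2.8, -7.5])    # umol m-2 s-1, synthetic
    test_no_gaps(ts)
    test_flux_within_physical_bounds(nee)
    print("all data checks passed")
    ```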

  16. Recommendations on the use of satellite remote-sensing data for urban air quality.

    PubMed

    Engel-Cox, Jill A; Hoff, Raymond M; Haymet, A D J

    2004-11-01

    In the last 5 yr, the capabilities of earth-observing satellites and the technological tools to share and use satellite data have advanced sufficiently to consider using satellite imagery in conjunction with ground-based data for urban-scale air quality monitoring. Satellite data can add synoptic and geospatial information to ground-based air quality data and modeling. An assessment of the integrated use of ground-based and satellite data for air quality monitoring, including several short case studies, was conducted. Findings identified current U.S. satellites with potential for air quality applications, with others available internationally and several more to be launched within the next 5 yr; several of these sensors are described in this paper as illustrations. However, use of these data for air quality applications has been hindered by historical lack of collaboration between air quality and satellite scientists, difficulty accessing and understanding new data, limited resources and agency priorities to develop new techniques, ill-defined needs, and poor understanding of the potential and limitations of the data. Specialization in organizations and funding sources has limited the resources for cross-disciplinary projects. To successfully use these new data sets requires increased collaboration between organizations, streamlined access to data, and resources for project implementation.

  17. APPLICATION OF DATA QUALITY OBJECTIVES AND MEASUREMENT QUALITY OBJECTIVES TO RESEARCH PROJECTS

    EPA Science Inventory

    The paper assists systematic planning for research projects. It presents planning concepts in terms that have some utility for researchers. For example, measurement quality objectives are more familiar to researchers than data quality objectives because these quality criteria are...

  18. Satellite Data of Atmospheric Pollution for U.S. Air Quality Applications: Examples of Applications, Summary of Data End-User Resources, Answers to FAQs, and Common Mistakes to Avoid

    NASA Technical Reports Server (NTRS)

    Duncan, Bryan Neal; Prados, Ana; Lamsal, Lok N.; Liu, Yang; Streets, David G.; Gupta, Pawan; Hilsenrath, Ernest; Kahn, Ralph A.; Nielsen, J. Eric; Beyersdorf, Andreas J.; et al.

    2014-01-01

    Satellite data of atmospheric pollutants are becoming more widely used in the decision-making and environmental management activities of public, private sector and non-profit organizations. They are employed for estimating emissions, tracking pollutant plumes, supporting air quality forecasting activities, providing evidence for "exceptional event" declarations, monitoring regional long-term trends, and evaluating air quality model output. However, many air quality managers are not taking full advantage of the data for these applications nor has the full potential of satellite data for air quality applications been realized. A key barrier is the inherent difficulties associated with accessing, processing, and properly interpreting observational data. A degree of technical skill is required on the part of the data end-user, which is often problematic for air quality agencies with limited resources. Therefore, we 1) review the primary uses of satellite data for air quality applications, 2) provide some background information on satellite capabilities for measuring pollutants, 3) discuss the many resources available to the end-user for accessing, processing, and visualizing the data, and 4) provide answers to common questions in plain language.

  19. Accuracy Dimensions in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Barsi, Á.; Kugler, Zs.; László, I.; Szabó, Gy.; Abdulmutalib, H. M.

    2018-04-01

    The technological developments in remote sensing (RS) during the past decade have contributed to a significant increase in the size of the data user community. For this reason, data quality issues in remote sensing have grown significantly in importance, particularly in the era of Big Earth data. Dozens of available sensors, hundreds of sophisticated data processing techniques, and countless software tools assist the processing of RS data and contribute to a major increase in applications and users. In the past decades, the scientific and technological community of the spatial data environment focused on evaluating data quality elements computed for the point, line, and area geometry of vector and raster data. Stakeholders of data production commonly use standardised parameters to characterise the quality of their datasets. Yet their efforts to estimate quality have not reached the general end-user community, which runs heterogeneous applications and assumes that its spatial data are error-free and best fitted to the specification standards. The non-specialist, general user group has very limited knowledge of how spatial data meet their needs. These parameters, forming the external quality dimensions, imply that the same data system can be of different quality to different users. The large collection of observed information is uncertain at a level that can degrade the reliability of the applications. Based on a prior paper of the authors (in cooperation within the Remote Sensing Data Quality working group of ISPRS), which established a taxonomy on the dimensions of data quality in GIS and remote sensing domains, this paper focuses on measures of uncertainty in the remote sensing data lifecycle, with emphasis on land cover mapping. The paper gives a theoretic overview of the issue; selected practice-oriented approaches are evaluated as well, and widely used metrics such as Root Mean Squared Error (RMSE) and the confusion matrix are discussed. The authors present data quality features of well-defined and poorly defined objects. The central part of the study is land cover mapping: its accuracy management model is described, and relevance and uncertainty measures for its influencing quality dimensions are presented. The theory is supported by a case study in which remote sensing technology supports the area-based agricultural subsidies of the European Union in the Hungarian administration.
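
    Two of the metrics discussed, RMSE and confusion-matrix accuracy, can be stated compactly; the numbers below are synthetic and serve only to show the calculations.

    ```python
    # RMSE for continuous error; overall accuracy from a confusion matrix.
    import numpy as np

    pred_h = np.array([10.2, 9.8, 11.1])      # predicted heights, synthetic
    ref_h = np.array([10.0, 10.0, 10.5])      # reference heights
    rmse = np.sqrt(np.mean((pred_h - ref_h) ** 2))

    # Rows: reference class, columns: mapped class (e.g. crop, forest, water).
    cm = np.array([[48, 2, 0],
                   [5, 40, 1],
                   [0, 3, 21]])
    overall_accuracy = np.trace(cm) / cm.sum()
    print(f"RMSE={rmse:.2f} m, overall accuracy={overall_accuracy:.1%}")
    ```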

  1. Quality Analysis of Open Street Map Data

    NASA Astrophysics Data System (ADS)

    Wang, M.; Li, Q.; Hu, Q.; Zhou, M.

    2013-05-01

    Crowd-sourced geographic data are open geographic data contributed by large numbers of non-professionals and provided to the public. Typical crowd-sourced geographic data include GPS track data such as OpenStreetMap, collaborative map data such as Wikimapia, posts from social websites such as Twitter and Facebook, points of interest checked in by Jiepang users, and so on. After processing, these data can provide canonical geographic information to the public. Compared with conventional geographic data collection and update methods, crowd-sourced geographic data from non-professionals have the advantages of large data volume, high currency, abundant information and low cost, and have become a research hotspot of international geographic information science in recent years. Large-volume crowd-sourced geographic data with high currency provide a new solution for geospatial database updating, provided that the quality problems of data obtained from non-professionals can be solved. In this paper, a quality analysis model for OpenStreetMap crowd-sourced geographic data is proposed. Firstly, a quality analysis framework is designed based on an analysis of the characteristics of OSM data. Secondly, a quality assessment model for OSM data using three quality elements: completeness, thematic accuracy and positional accuracy is presented. Finally, taking the OSM data of Wuhan as an instance, the paper analyses and assesses the quality of OSM data against a 2011 navigation map as reference. The results show that the high-level roads and urban traffic network in the OSM data have high positional accuracy and completeness, so these OSM data can be used for updating urban road network databases.
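
    Positional accuracy of a road network against a reference is often assessed with a buffer-overlap test: the share of candidate line length falling within a tolerance of the reference. The sketch below (requiring the shapely package) illustrates the idea with toy geometries; it is not the paper's code.

    ```python
    # Buffer-overlap check of a candidate road against a reference road.
    from shapely.geometry import LineString

    osm_road = LineString([(0, 0), (10, 0.3)])
    ref_road = LineString([(0, 0), (10, 0)])

    eps = 0.5                                     # tolerance in map units
    inside = osm_road.intersection(ref_road.buffer(eps)).length
    share_within_tolerance = inside / osm_road.length
    print(f"{share_within_tolerance:.0%} of OSM length within {eps} of reference")
    ```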

  2. R2 Water Quality Portal Monitoring Stations

    EPA Pesticide Factsheets

    The Water Quality Portal (WQP) provides an easy way to access data stored in various large water quality databases. The WQP provides various input parameters on the form, including location, site, sampling, and date parameters, to filter and customize the returned results. The WQP is a cooperative service sponsored by the United States Geological Survey (USGS), the Environmental Protection Agency (EPA) and the National Water Quality Monitoring Council (NWQMC) that integrates publicly available water quality data from the USGS National Water Information System (NWIS), the EPA STOrage and RETrieval (STORET) Data Warehouse, and the USDA ARS Sustaining The Earth's Watersheds - Agricultural Research Database System (STEWARDS).

  3. Data extraction from electronic health records (EHRs) for quality measurement of the physical therapy process: comparison between EHR data and survey data.

    PubMed

    Scholte, Marijn; van Dulmen, Simone A; Neeleman-Van der Steen, Catherina W M; van der Wees, Philip J; Nijhuis-van der Sanden, Maria W G; Braspenning, Jozé

    2016-11-08

    With the emergence of electronic health records (EHRs) as a pervasive healthcare information technology, new opportunities and challenges for the use of clinical data for quality measurement arise with respect to data quality, data availability and comparability. The objective of this study is to test whether data extracted from EHRs are of comparable quality to survey data for the calculation of quality indicators. Data from surveys describing patient cases and filled out by physiotherapists in 2009-2010 were used to calculate scores on eight quality indicators (QIs) to measure the quality of physiotherapy care. In 2011, data were extracted directly from EHRs. The data collection methods were evaluated for comparability. EHR data were compared to survey data on completeness and correctness. Five of the eight QIs could be extracted from the EHRs. Three were omitted from the indicator set, as they proved too difficult to extract from the EHRs. Another QI proved incomparable due to errors in the extraction software of some of the EHRs. Three out of four comparable QIs performed better (p < 0.001) in EHR data on completeness. EHR data also proved to be correct; the relative change in indicator scores between EHR and survey data was small (<5%) in three out of four QIs. Data quality of EHRs was sufficient to be used for the calculation of QIs, although comparability to survey data was problematic. Standardization is needed, not only to compare different data collection methods properly, but also to compare between practices with different EHRs. EHRs can store narrative data, but natural language processing tools are needed to quantify these text boxes. Such developments can narrow the comparability gap between scoring QIs based on EHR data and based on survey data. EHRs have the potential to provide real-time feedback to professionals and quality measurements for research, but more effort is needed to create unambiguous and uniform information and to unlock written text in a standardized manner.

  4. How do we know? An assessment of integrated community case management data quality in four districts of Malawi.

    PubMed

    Yourkavitch, Jennifer; Zalisk, Kirsten; Prosnitz, Debra; Luhanga, Misheck; Nsona, Humphreys

    2016-11-01

    The World Health Organization contracted annual data quality assessments of Rapid Access Expansion (RAcE) projects to review integrated community case management (iCCM) data quality and the monitoring and evaluation (M&E) system for iCCM, and to suggest ways to improve data quality. The first RAcE data quality assessment was conducted in Malawi in January 2014 and we present findings pertaining to data from the health management information system at the community, facility and other sub-national levels because RAcE grantees rely on that for most of their monitoring data. We randomly selected 10 health facilities (10% of eligible facilities) from the four RAcE project districts, and collected quantitative data with an adapted and comprehensive tool that included an assessment of Malawi's M&E system for iCCM data and a data verification exercise that traced selected indicators through the reporting system. We rated the iCCM M&E system across five function areas based on interviews and observations, and calculated verification ratios for each data reporting level. We also conducted key informant interviews with Health Surveillance Assistants and facility, district and central Ministry of Health staff. Scores show a high-functioning M&E system for iCCM with some deficiencies in data management processes. The system lacks quality controls, including data entry verification, a protocol for addressing errors, and written procedures for data collection, entry, analysis and management. Data availability was generally high except for supervision data. The data verification process identified gaps in completeness and consistency, particularly in Health Surveillance Assistants' record keeping. Staff at all levels would like more training in data management. This data quality assessment illuminates where an otherwise strong M&E system for iCCM fails to ensure some aspects of data quality. Prioritizing data management with documented protocols, additional training and approaches to create efficient supervision practices may improve iCCM data quality. © The Author 2016. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
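
    A data verification exercise of the kind described reduces, at its core, to comparing recounted source values with reported values at each reporting level. A minimal sketch follows, with invented numbers; ratios near 1 indicate consistent reporting.

    ```python
    # Verification ratios per reporting level (illustrative counts only).
    recounted = {"community": 112, "facility": 110, "district": 118}
    reported = {"community": 112, "facility": 115, "district": 120}

    for level, true_count in recounted.items():
        ratio = reported[level] / true_count
        print(f"{level}: verification ratio = {ratio:.2f}")
    ```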

  5. The Effect of State Regulatory Stringency on Nursing Home Quality

    PubMed Central

    Mukamel, Dana B; Weimer, David L; Harrington, Charlene; Spector, William D; Ladd, Heather; Li, Yue

    2012-01-01

    Objective To test the hypothesis that more stringent quality regulations contribute to better quality nursing home care and to assess their cost-effectiveness. Data Sources/Setting Primary and secondary data from all states and U.S. nursing homes between 2005 and 2006. Study Design We estimated seven models, regressing quality measures on the Harrington Regulation Stringency Index and control variables. To account for endogeneity between regulation and quality, we used instrumental variables techniques. Quality was measured by staffing hours by type per case-mix adjusted day, hotel expenditures, and risk-adjusted decline in activities of daily living, high-risk pressure sores, and urinary incontinence. Data Collection All states' licensing and certification offices were surveyed to obtain data about deficiencies. Secondary data included the Minimum Data Set, Medicare Cost Reports, and the Economic Freedom Index. Principal Findings Regulatory stringency was significantly associated with better quality for four of the seven measures studied. The cost-effectiveness for the activities-of-daily-living measure was estimated at about $72,000 (2011 dollars) per Quality-Adjusted Life Year. Conclusions Quality regulations lead to better quality in nursing homes along some dimensions, but not all. Our estimates of cost-effectiveness suggest that increased regulatory stringency is in the ballpark of other acceptable cost-effective practices. PMID:22946859

  6. ASPRS research on quantifying the geometric quality of lidar data

    USGS Publications Warehouse

    Sampath, Aparajithan; Heidemann, Hans K.; Stensaas, Gregory L.; Christopherson, Jon B.

    2014-01-01

    The ASPRS Lidar Cal/Val (calibration/validation) Working Group led by the US Geological Survey (USGS) to establish “Guidelines on Geometric Accuracy and Quality of Lidar Data” has made excellent progress via regular teleconferences and meetings. The group is focused on identifying data quality metrics and establishing a set of guidelines for quantifying the quality of lidar data. The working group has defined and agreed on lidar Data Quality Measures (DQMs) to be used for this purpose. The DQMs are envisaged as the first ever consistent way of checking lidar data. It is expected that these metrics will be used as standard methods for quantifying the geometric quality of lidar data. The goal of this article is to communicate these developments to the readers and the larger geospatial community and invite them to participate in the process.  

  7. Needs Assessment for the Use of NASA Remote Sensing Data for Regulatory Water Quality

    NASA Technical Reports Server (NTRS)

    Spiering, Bruce; Underwood, Lauren

    2010-01-01

    This slide presentation reviews the assessment of the needs that NASA can use for the remote sensing of water quality. The goal of this project is to provide information for decision-making activities (water quality standards) using remotely sensed/satellite based water quality data from MODIS and Landsat data.

  8. Guide for Improving NRS Data Quality: Procedures for Data Collection and Training.

    ERIC Educational Resources Information Center

    Condelli, Larry; Castillo, Laura; Seburn, Mary; Deveaux, Jon

    This guide for improving the quality of National Reporting System for Adult Education (NRS) data through improved data collection and training is intended for local providers and state administrators. Chapter 1 explains the guide's purpose, contents, and use and defines the following components of data quality: objectivity; integrity;…

  9. [Quality assurance using routine data. Is outcome quality now measurable?].

    PubMed

    Kostuj, T; Smektala, R

    2010-12-01

    Health service quality in Germany can be shown by the data from the external quality assurance program (BQS), but as these records are limited to the period of in-hospital stay, no information about outcome after discharge from hospital can be obtained. Secondary routine administrative data contain information about long-term outcomes, such as mortality, subsequent revision and the need for care following surgical treatment of a hip fracture. Our department has experience in the use of secondary data from the BQS dealing with the treatment of hip fractures. In addition, we analyzed routine administrative data from the health insurance companies Knappschaft Bahn-See and AOK in a cooperative study with the WidO (scientific institute of the AOK). These routine data clearly show a bias because of poor quality in coding as well as broad scope for interpretation of some of the ICD-10 codes used. Consequently, quality assurance using routine data is less valid than register-based conclusions. Nevertheless, medical expertise is necessary to avoid misinterpretation of routine administrative data.

  10. Statistical Analysis of Fort Hood Quality-of-Life Questionnaire.

    DTIC Science & Technology

    1978-10-01

    The objective of this work was to provide supplementary data analyses of data abstracted from the Quality-of-Life questionnaire developed earlier at...the Fort Hood Field Unit at the request of Headquarters, TRADOC Combined Arms Test Activity (TCATA). The Quality-of-Life questionnaire data were...to the Quality-of-Life questionnaire. These data were then intensively analyzed using analysis of variance and correlational techniques. The results

  11. Water-quality assessment of the central Nebraska basins; summary of data for recent conditions through 1990

    USGS Publications Warehouse

    Zelt, R.B.; Jordan, P.R.

    1993-01-01

    Among the first activities undertaken in each National Water-Quality Assessment (NAWQA) program study-unit investigation are compilation, screening, and statistical summary of available data concerning recent, general water-quality conditions in the study unit. This report (1) identifies which of the existing water-quality data are suitable for characterizing general conditions in a nationally consistent manner and (2) describes, to the extent possible, recent, general water-quality conditions in the Central Nebraska Basins. The study unit consists of the area drained by the Platte River between the confluence of the North Platte and South Platte Rivers near North Platte downstream to its confluence with the Missouri River south of Omaha. The report includes (1) a description of the sources and characteristics of water-quality data that are available, (2) a description of the approach used for screening data to identify a subset of the data suitable for summary and comparisons, (3) a presentation of the results of statistical and graphical summaries of recent, general water-quality conditions, and (4) comparisons of recent, general water-quality conditions to established national water-quality criteria, where applicable. Stream- and lake-water data are summarized for selected sampling sites, and data are summarized by major subunits of the study unit (the Sandhills, Loess Hills, Glaciated Area, and Platte Valley subunits) for streambed-sediment, fish-tissue, aquatic-ecological, and ground-water data. The summaries focus on the central tendencies and typical variation in the data and use nonparametric statistics such as frequencies and percentile values.

  12. [Data supporting quality circle management of inpatient depression treatment].

    PubMed

    Brand, S; Härter, M; Sitta, P; van Calker, D; Menke, R; Heindl, A; Herold, K; Kudling, R; Luckhaus, C; Rupprecht, U; Sanner, Dirk; Schmitz, D; Schramm, E; Berger, M; Gaebel, W; Schneider, F

    2005-07-01

    Several quality assurance initiatives in health care have been undertaken during the past years. The next step consists of systematically combining single initiatives in order to build up a strategic quality management. In a German multicenter study, the quality of inpatient depression treatment was measured in ten psychiatric hospitals. Half of the hospitals received comparative feedback on their individual results relative to the other hospitals (benchmarking). These benchmarks were used by each hospital as a statistical basis for in-house quality work to improve the quality of depression treatment. According to hospital differences concerning procedure and outcome, different goals were chosen. There were also differences with respect to structural characteristics, strategies, and outcome. The feedback from participants about data-based quality circles in general and the availability of benchmarking data was positive. The necessity of carefully choosing quality circle members and of professional moderation became obvious. Data-based quality circles including benchmarking have proven to be useful for quality management in inpatient depression care.

  13. Quality management in in vivo proton MRS.

    PubMed

    Pedrosa de Barros, Nuno; Slotboom, Johannes

    2017-07-15

    The quality of MR spectroscopy data can easily be affected in in vivo applications. Several factors may produce signal artefacts, and often these are not easily detected, not even by experienced spectroscopists. Reliable and reproducible in vivo MRS data require the definition of quality requirements and goals, implementation of measures to guarantee quality standards, regular control of data quality, and a continuous search for quality improvement. The first part of this review includes a general introduction to different aspects of quality management in MRS. It is followed by the description of a series of tests and phantoms that can be used to assure the quality of the MR system. In the third part, several methods and strategies used for quality control of the spectroscopy data are presented. This review concludes with a reference to a few interesting techniques and aspects that may help to further improve the quality of in vivo MR spectra. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Assessing data quality and the variability of source data verification auditing methods in clinical research settings.

    PubMed

    Houston, Lauren; Probst, Yasmine; Martin, Allison

    2018-05-18

    Data audits within clinical settings are extensively used as a major strategy to identify errors, monitor study operations and ensure high-quality data. However, clinical trial guidelines are non-specific with regard to the recommended frequency, timing and nature of data audits. The absence of a well-defined data quality definition and of a method to measure error undermines the reliability of data quality assessment. This review aimed to assess the variability of source data verification (SDV) auditing methods used to monitor data quality in clinical research settings. The scientific databases MEDLINE, Scopus and Science Direct were searched for English language publications, with no date limits applied. Studies were considered if they included data from a clinical trial or clinical research setting and measured and/or reported data quality using an SDV auditing method. In total 15 publications were included. The nature and extent of SDV audit methods in the articles varied widely, depending upon the complexity of the source document, type of study, variables measured (primary or secondary), data audit proportion (3-100%) and collection frequency (6-24 months). Methods for coding, classifying and calculating error were also inconsistent. Transcription errors and inexperienced personnel were the main sources of reported error. Repeated SDV audits using the same dataset demonstrated ∼40% improvement in data accuracy and completeness over time. No description was given of what determines poor data quality in clinical trials. A wide range of SDV auditing methods are reported in the published literature, though no uniform SDV auditing method could be determined as "best practice" in clinical trials. Published audit methodology articles are warranted for the development of a standardised SDV auditing method to monitor data quality in clinical research settings. Copyright © 2018. Published by Elsevier Inc.

  15. Where is information quality lost at clinical level? A mixed-method study on information systems and data quality in three urban Kenyan ANC clinics

    PubMed Central

    Hahn, Daniel; Wanjala, Pepela; Marx, Michael

    2013-01-01

    Background Well-functioning health information systems are considered vital, with the quality of health data ranked of highest importance for decision making at patient care and policy levels. In particular, health facilities play an important role, since they are not only the entry point for the national health information system but also themselves use health data, primarily for patient care. Design A multiple case study was carried out between March and August 2012 at the antenatal care (ANC) clinics of two private and one public Kenyan hospital to describe clinical information systems and assess the quality of information. The following methods were developed and employed in an iterative process: workplace walkthroughs, structured and in-depth interviews with staff members, and a quantitative assessment of data quality (completeness and accurate transmission of clinical information and reports in ANC). Views of staff and management on the quality of employed information systems, data quality, and influencing factors were captured qualitatively. Results Staff rated the quality of information higher in the private hospitals employing computers than in the public hospital, which relies on paper forms. Several potential threats to data quality were reported. Limitations in data quality were common at all study sites, including wrong test results, missing registers, and inconsistencies in reports. Feedback was seldom given on the content or quality of reports, and usage of data beyond individual patient care was low. Conclusions We argue that the limited data quality has to be seen in the broader perspective of the information systems in which it is produced and used. The combination of different methods has proven to be useful for this. To improve the effectiveness and capabilities of these systems, combined measures are needed which include technical and organizational aspects (e.g. regular feedback to health workers) and individual skills and motivation. PMID:23993022

  16. Where is information quality lost at clinical level? A mixed-method study on information systems and data quality in three urban Kenyan ANC clinics.

    PubMed

    Hahn, Daniel; Wanjala, Pepela; Marx, Michael

    2013-08-29

    Well-functioning health information systems are considered vital, with the quality of health data ranked of highest importance for decision making at patient care and policy levels. In particular, health facilities play an important role, since they are not only the entry point for the national health information system but also themselves use health data, primarily for patient care. A multiple case study was carried out between March and August 2012 at the antenatal care (ANC) clinics of two private and one public Kenyan hospital to describe clinical information systems and assess the quality of information. The following methods were developed and employed in an iterative process: workplace walkthroughs, structured and in-depth interviews with staff members, and a quantitative assessment of data quality (completeness and accurate transmission of clinical information and reports in ANC). Views of staff and management on the quality of employed information systems, data quality, and influencing factors were captured qualitatively. Staff rated the quality of information higher in the private hospitals employing computers than in the public hospital, which relies on paper forms. Several potential threats to data quality were reported. Limitations in data quality were common at all study sites, including wrong test results, missing registers, and inconsistencies in reports. Feedback was seldom given on the content or quality of reports, and usage of data beyond individual patient care was low. We argue that the limited data quality has to be seen in the broader perspective of the information systems in which it is produced and used. The combination of different methods has proven to be useful for this. To improve the effectiveness and capabilities of these systems, combined measures are needed which include technical and organizational aspects (e.g. regular feedback to health workers) and individual skills and motivation.

  17. Examples of, reasons for, and consequences of the poor quality of wind data from ships for the marine boundary layer: Implications for remote sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierson, W.J. Jr.

    1990-08-15

    Wind reports by data buoys are used to demonstrate that these reports have in the past provided useful values for the synoptic-scale winds and at present provide very reliable values. Past studies of wind reports by ships have revealed that the data are of poor quality, but the causes of this poor quality are not identified. Examples of the poor quality of wind data from ships are obtained by comparing ship reports with buoy reports and comparing reports of different kinds of ships with each other. These comparisons identify many different reasons for the poor quality of wind data from ships. Suggestions are made for improving the quality of ship data. The consequences of the poor quality of ship winds are described in terms of the effects on weather and wave forecasts. The implications for remotely sensed winds are discussed.

  18. Interpreting drinking water quality in the distribution system using Dempster-Shafer theory of evidence.

    PubMed

    Sadiq, Rehan; Rodriguez, Manuel J

    2005-04-01

    Interpreting water quality data routinely generated for control and monitoring purposes in water distribution systems is a complicated task for utility managers. In fact, data for diverse water quality indicators (physico-chemical and microbiological) are generated at different times and at different locations in the distribution system. To simplify and improve the understanding and interpretation of water quality, methodologies for the aggregation and fusion of data must be developed. In this paper, the Dempster-Shafer theory, also called the theory of evidence, is introduced as a potential methodology for interpreting water quality data. The conceptual basis of this methodology and the process for its implementation are presented through two applications. The first application deals with the fusion of spatial water quality data, while the second deals with the development of a water quality index based on key monitored indicators. Based on the results obtained, the authors discuss the potential contribution of the theory of evidence as a decision-making tool for water quality management.
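
    Dempster's rule of combination, the core operation of the theory of evidence, merges two mass functions and renormalises away their conflict. The worked example below uses hypothetical masses from two water-quality indicators over the frame {good, poor}; it illustrates the rule, not the paper's specific models.

    ```python
    # Dempster's rule of combination over frozenset focal elements.
    from itertools import product

    def combine(m1, m2):
        """Combine two mass functions; renormalise by (1 - conflict)."""
        combined, conflict = {}, 0.0
        for (a, x), (b, y) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    G, P = frozenset({"good"}), frozenset({"poor"})
    GP = G | P                                    # ignorance: {good, poor}
    turbidity = {G: 0.6, P: 0.1, GP: 0.3}         # evidence from indicator 1
    coliforms = {G: 0.2, P: 0.5, GP: 0.3}         # evidence from indicator 2
    print(combine(turbidity, coliforms))          # good ~0.53, poor ~0.34
    ```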

  19. Evaluation of the visual performance of image processing pipes: information value of subjective image attributes

    NASA Astrophysics Data System (ADS)

    Nyman, G.; Häkkinen, J.; Koivisto, E.-M.; Leisti, T.; Lindroos, P.; Orenius, O.; Virtanen, T.; Vuori, T.

    2010-01-01

    Subjective image quality data for 9 image processing pipes and 8 image contents (taken with mobile phone camera, 72 natural scene test images altogether) from 14 test subjects were collected. A triplet comparison setup and a hybrid qualitative/quantitative methodology were applied. MOS data and spontaneous, subjective image quality attributes to each test image were recorded. The use of positive and negative image quality attributes by the experimental subjects suggested a significant difference between the subjective spaces of low and high image quality. The robustness of the attribute data was shown by correlating DMOS data of the test images against their corresponding, average subjective attribute vector length data. The findings demonstrate the information value of spontaneous, subjective image quality attributes in evaluating image quality at variable quality levels. We discuss the implications of these findings for the development of sensitive performance measures and methods in profiling image processing systems and their components, especially at high image quality levels.

  20. An Investigation of Techniques for Detecting Data Anomalies in Earned Value Management Data

    DTIC Science & Technology

    2011-12-01

    Management Studio Harte Hanks Trillium Software Trillium Software System IBM Info Sphere Foundation Tools Informatica Data Explorer Informatica ...Analyst Informatica Developer Informatica Administrator Pitney Bowes Business Insight Spectrum SAP BusinessObjects Data Quality Management DataFlux...menting quality monitoring efforts and tracking data quality improvements Informatica http://www.informatica.com/products_services/Pages/index.aspx

  1. Defining and measuring traffic data quality traffic data quality workshop : white paper.

    DOT National Transportation Integrated Search

    2002-12-31

    Recent research and analyses have identified several issues regarding the quality of traffic data available from intelligent transportation systems for transportation operations, planning, or other functions. The Federal Highway Administration (FHWA)...

  2. Advances in traffic data collection and management : white paper.

    DOT National Transportation Integrated Search

    2003-01-31

    This white paper identifies innovative approaches for improving data quality through Quality Control. Quality Control emphasizes good data by ensuring selection of the most accurate detector then optimizing detector system performance. This is contra...

  3. A Quality Assessment Method Based on Common Distributed Targets for GF-3 Polarimetric SAR Data.

    PubMed

    Jiang, Sha; Qiu, Xiaolan; Han, Bing; Hu, Wenlong

    2018-03-07

    The GaoFen-3 (GF-3) satellite, launched on 10 August 2016, is the first C-band polarimetric synthetic aperture radar (PolSAR) satellite in China. The PolSAR system of GF-3 can collect a wealth of information for geophysical research and applications. To be used for such applications, GF-3 PolSAR images must be of good quality, so it is necessary to evaluate the quality of the polarimetric data and achieve normalized quality monitoring over the 8-year design life of GF-3. In this study, a new quality assessment method for PolSAR data based on common distributed targets is proposed, and the performance of the method is analyzed through simulations and GF-3 experiments. We evaluate the quality of GF-3 PolSAR data with this method. Results suggest that the GF-3 antenna is highly isolated and that the quality of the calibrated data satisfies the requirements of quantitative applications.

  4. Measuring Data Quality Through a Source Data Verification Audit in a Clinical Research Setting.

    PubMed

    Houston, Lauren; Probst, Yasmine; Humphries, Allison

    2015-01-01

    Health data have long been scrutinised in relation to data quality and integrity problems. Currently, no internationally accepted or "gold standard" method exists for measuring data quality and error rates within datasets. We conducted a source data verification (SDV) audit on a prospective clinical trial dataset. An audit plan was applied to conduct 100% manual verification checks on a 10% random sample of participant files. A quality assurance rule was developed whereby, if >5% of data variables were incorrect, a second 10% random sample would be extracted from the trial dataset. Error was coded: correct, incorrect (valid or invalid), not recorded, or not entered. Audit-1 had a total error of 33% and audit-2 36%. The physiological section was the only audit section to have <5% error. Data not recorded to case report forms had the greatest impact on error calculations. A significant association (p=0.00) was found between audit-1 and audit-2 and whether or not data were deemed correct or incorrect. Our study developed a straightforward method to perform an SDV audit. An audit rule was identified and error coding was implemented. The findings demonstrate that monitoring data quality by an SDV audit can identify data quality and integrity issues within clinical research settings, allowing quality improvements to be made. The authors suggest this approach be implemented for future research.
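
    The audit rule lends itself to a short sketch. The following Python sketch applies the 10% sample / >5% threshold logic described above; the file identifiers and the verify callback are hypothetical stand-ins for the manual verification step.

        import random

        def sdv_audit(files, verify, sample_frac=0.10, threshold=0.05, seed=1):
            """files: participant file IDs; verify(f) returns a list of per-variable
            codes: 'correct', 'incorrect', 'not recorded', or 'not entered'."""
            rng = random.Random(seed)
            sample = rng.sample(files, max(1, int(len(files) * sample_frac)))
            codes = [c for f in sample for c in verify(f)]
            error_rate = sum(c != "correct" for c in codes) / len(codes)
            return error_rate, error_rate > threshold   # True -> draw a second sample

        files = [f"P{i:03d}" for i in range(50)]
        rate, resample = sdv_audit(files, lambda f: ["correct", "incorrect", "correct"])
        print(f"error rate {rate:.0%}; second sample needed: {resample}")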

  5. Development of quality control and instrumentation performance metrics for diffuse optical spectroscopic imaging instruments in the multi-center clinical environment

    NASA Astrophysics Data System (ADS)

    Keene, Samuel T.; Cerussi, Albert E.; Warren, Robert V.; Hill, Brian; Roblyer, Darren; Leproux, Anaïs; Durkin, Amanda F.; O'Sullivan, Thomas D.; Haghany, Hosain; Mantulin, William W.; Tromberg, Bruce J.

    2013-03-01

    Instrument equivalence and quality control are critical elements of multi-center clinical trials. We currently have five identical Diffuse Optical Spectroscopic Imaging (DOSI) instruments enrolled in the American College of Radiology Imaging Network (ACRIN, #6691) trial located at five academic clinical research sites in the US. The goal of the study is to predict the response of breast tumors to neoadjuvant chemotherapy in 60 patients. In order to reliably compare DOSI measurements across different instruments, operators and sites, we must be confident that the data quality is comparable. We require objective and reliable methods for identifying, correcting, and rejecting low quality data. To achieve this goal, we developed and tested an automated quality control algorithm that rejects data points below the instrument noise floor, improves tissue optical property recovery, and outputs a detailed data quality report. Using a new protocol for obtaining dark-noise data, we applied the algorithm to ACRIN patient data and successfully improved the quality of recovered physiological data in some cases.
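
    One plausible reading of the rejection step is thresholding against a dark-noise floor. The sketch below masks amplitudes at or below a per-wavelength floor; the data layout and the 3-sigma margin are assumptions, not the ACRIN protocol.

        import numpy as np

        def reject_below_noise_floor(amplitude, dark, n_sigma=3.0):
            """amplitude: (n_wavelengths, n_points) measured data;
            dark: (n_wavelengths, n_dark) dark-noise acquisitions.
            Returns the data with sub-floor points masked, plus a quality report."""
            floor = dark.mean(axis=1) + n_sigma * dark.std(axis=1)   # per wavelength
            bad = amplitude <= floor[:, None]
            report = {"rejected": int(bad.sum()), "total": amplitude.size}
            return np.where(bad, np.nan, amplitude), report

        amp = np.array([[0.5, 2.0, 3.0], [0.2, 0.1, 4.0]])
        drk = np.array([[0.4, 0.6], [0.1, 0.3]])
        print(reject_below_noise_floor(amp, drk))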

  6. Uncertainties in selected river water quality data

    NASA Astrophysics Data System (ADS)

    Rode, M.; Suhr, U.

    2007-02-01

    Monitoring of surface waters is primarily done to detect the status and trends in water quality and to identify whether observed trends arise from natural or anthropogenic causes. The empirical quality of river water quality data is rarely certain, and knowledge of their uncertainties is essential to assess the reliability of water quality models and their predictions. The objective of this paper is to assess the uncertainties in selected river water quality data, i.e. suspended sediment, nitrogen fractions, phosphorus fractions, heavy metals and biological compounds. The methodology used to structure the uncertainty is based on the empirical quality of data and the sources of uncertainty in data (van Loon et al., 2005). A literature review was carried out, including additional experimental data from the Elbe river. All data for compounds associated with suspended particulate matter have considerably higher sampling uncertainties than soluble concentrations. This is due to high variability within the cross section of a given river, and this variability is positively correlated with total suspended particulate matter concentrations. Sampling location also has a considerable effect on the representativeness of a water sample. These sampling uncertainties are highly site specific. The uncertainty in sampling can only be estimated by taking at least a proportion of samples in duplicate. Compared to sampling uncertainties, measurement and analytical uncertainties are much lower. Instrument quality can be considered well suited to field and laboratory situations for all constituents considered, although analytical errors can contribute considerably to the overall uncertainty of river water quality data. Temporal autocorrelation of river water quality data is present, but literature on the general behaviour of water quality compounds is rare. For meso-scale river catchments (500-3000 km²), reasonable yearly dissolved load calculations can be achieved using biweekly sampling frequencies. For suspended sediments, none of the methods investigated produced very reliable load estimates when weekly concentration data were used. Uncertainties associated with load estimates based on infrequent samples will decrease with increasing river size.
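
    As a worked example of the biweekly load calculation the abstract refers to, the following sketch interpolates biweekly concentrations to daily values and integrates them against daily discharge; all numbers are synthetic.

        import numpy as np

        days = np.arange(365)
        q = 50 + 20 * np.sin(2 * np.pi * days / 365)         # discharge, m3/s (synthetic)
        sample_days = days[::14]                              # biweekly sampling schedule
        c_sampled = 3.0 + 0.5 * np.cos(2 * np.pi * sample_days / 365)  # mg/L

        c_daily = np.interp(days, sample_days, c_sampled)     # daily concentrations, mg/L
        # mg/L * m3/s = g/s; integrate over each day (86400 s), then convert g -> tonnes
        annual_load_t = np.sum(c_daily * q * 86400) / 1e6
        print(f"estimated annual dissolved load: {annual_load_t:.0f} t")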

  7. Uncertainties in selected surface water quality data

    NASA Astrophysics Data System (ADS)

    Rode, M.; Suhr, U.

    2006-09-01

    Monitoring of surface waters is primarily done to detect the status and trends in water quality and to identify whether observed trends arise from natural or anthropogenic causes. The empirical quality of surface water quality data is rarely certain, and knowledge of their uncertainties is essential to assess the reliability of water quality models and their predictions. The objective of this paper is to assess the uncertainties in selected surface water quality data, i.e. suspended sediment, nitrogen fractions, phosphorus fractions, heavy metals and biological compounds. The methodology used to structure the uncertainty is based on the empirical quality of data and the sources of uncertainty in data (van Loon et al., 2006). A literature review was carried out, including additional experimental data from the Elbe river. All data for compounds associated with suspended particulate matter have considerably higher sampling uncertainties than soluble concentrations. This is due to high variability within the cross section of a given river, and this variability is positively correlated with total suspended particulate matter concentrations. Sampling location also has a considerable effect on the representativeness of a water sample. These sampling uncertainties are highly site specific. The uncertainty in sampling can only be estimated by taking at least a proportion of samples in duplicate. Compared to sampling uncertainties, measurement and analytical uncertainties are much lower. Instrument quality can be considered well suited to field and laboratory situations for all constituents considered, although analytical errors can contribute considerably to the overall uncertainty of surface water quality data. Temporal autocorrelation of surface water quality data is present, but literature on the general behaviour of water quality compounds is rare. For meso-scale river catchments, reasonable yearly dissolved load calculations can be achieved using biweekly sampling frequencies. For suspended sediments, none of the methods investigated produced very reliable load estimates when weekly concentration data were used. Uncertainties associated with load estimates based on infrequent samples will decrease with increasing river size.

  8. Ensuring and Improving Information Quality for Earth Science Data and Products Role of the ESIP Information Quality Cluster

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K. (Rama); Peng, Ge; Moroni, David; Shie, Chung-Lin

    2016-01-01

    Quality of products is always of concern to users, regardless of the type of product. The focus of this paper is on the quality of Earth science data products. There are four different aspects of quality: scientific, product, stewardship and service. All these aspects taken together constitute Information Quality. With increasing requirements for ensuring and improving information quality, there has been considerable work related to information quality during the last several years. Given this rich background of prior work, the Information Quality Cluster (IQC), established within the Federation of Earth Science Information Partners (ESIP), has been active with membership from multiple organizations. Its objectives and activities, aimed at ensuring and improving information quality for Earth science data and products, are discussed briefly.

  9. Ensuring and Improving Information Quality for Earth Science Data and Products: Role of the ESIP Information Quality Cluster

    NASA Technical Reports Server (NTRS)

    Ramapriyan, Hampapuram; Peng, Ge; Moroni, David; Shie, Chung-Lin

    2016-01-01

    Quality of products is always of concern to users, regardless of the type of product. The focus of this paper is on the quality of Earth science data products. There are four different aspects of quality - scientific, product, stewardship and service. All these aspects taken together constitute Information Quality. With increasing requirements for ensuring and improving information quality, there has been considerable work related to information quality during the last several years. Given this rich background of prior work, the Information Quality Cluster (IQC), established within the Federation of Earth Science Information Partners (ESIP), has been active with membership from multiple organizations. Its objectives and activities, aimed at ensuring and improving information quality for Earth science data and products, are discussed briefly.

  10. Summary of available state ambient stream-water-quality data, 1990-98, and limitations for national assessment

    USGS Publications Warehouse

    Pope, Larry M.; Rosner, Stacy M.; Hoffman, Darren C.; Ziegler, Andrew C.

    2004-01-01

    The investigation described in this report summarized data from State ambient stream-water-quality monitoring sites for 10 water-quality constituents or measurements (suspended solids, fecal coliform bacteria, ammonia as nitrogen, nitrite plus nitrate as nitrogen, total phosphorus, total arsenic, dissolved solids, chloride, sulfate, and pH). These 10 water-quality constituents or measurements commonly are listed nationally as major contributors to degradation of surface water. Water-quality data were limited to those electronically accessible from the U.S. Environmental Protection Agency Storage and Retrieval System (STORET), the U.S. Geological Survey National Water Information System (NWIS), or individual State databases. Forty-two States had ambient stream-water-quality data electronically accessible for some or all of the constituents or measurements summarized during this investigation. Ambient in this report refers to data collected for the purpose of evaluating stream ecosystems in relation to human health, environmental and ecological conditions, and designated uses. Generally, data were from monitoring sites assessed for State 305(b) reports. Comparisons of monitoring data among States are problematic for several reasons, including differences in the basic spatial design of monitoring networks; water-quality constituents for which samples are analyzed; water-quality criteria to which constituent concentrations are compared; quantity and comprehensiveness of water-quality data; sample collection, processing, and handling; analytical methods; temporal variability in sample collection; and quality-assurance practices. Large differences among the States in the number of monitoring sites precluded a general assumption that statewide water-quality conditions were represented by data from these sites. Furthermore, data from individual monitoring sites may not represent water-quality conditions at the sites because sampling conditions and protocols are unknown. Because of these factors, a high level of uncertainty exists in a national assessment of water quality. The purpose of this report is to present a summary of electronically available State ambient stream-water-quality data for 10 selected constituents and measurements from monitoring sites with nine or more analyses for 1990-98 and to discuss limitations on the use of the data for national assessment. These analyses were statistically summarized by monitoring site and State, and the results are presented in tabular format. Most of the selected constituents or measurements have U.S. Environmental Protection Agency criteria or guidelines for aquatic-life or drinking-water purposes. A significant finding of this investigation is that, for a large percentage of monitoring sites in the Nation, there are insufficient data to meet U.S. Environmental Protection Agency recommendations for determining whether water-quality conditions are degraded and for making informed decisions regarding total maximum daily loads.

  11. Application of process mining to assess the data quality of routinely collected time-based performance data sourced from electronic health records by validating process conformance.

    PubMed

    Perimal-Lewis, Lua; Teubner, David; Hakendorf, Paul; Horwood, Chris

    2016-12-01

    Effective and accurate use of routinely collected health data to produce Key Performance Indicator reporting is dependent on the underlying data quality. In this research, Process Mining methodology and tools were leveraged to assess the data quality of time-based Emergency Department data sourced from electronic health records. This research was done working closely with the domain experts to validate the process models. The hospital patient journey model was used to assess flow abnormalities which resulted from incorrect timestamp data used in time-based performance metrics. The research demonstrated process mining as a feasible methodology to assess data quality of time-based hospital performance metrics. The insight gained from this research enabled appropriate corrective actions to be put in place to address the data quality issues. © The Author(s) 2015.
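
    The conformance idea reduces to checking that timestamps respect the expected patient-journey order. A minimal sketch, with event names and data layout assumed rather than taken from the study:

        from datetime import datetime

        EXPECTED_ORDER = ["arrival", "triage", "seen_by_doctor", "departure"]

        def conformance_check(journey):
            """journey: dict event -> timestamp. Returns (earlier_event,
            later_event) pairs whose timestamps violate the expected order."""
            violations = []
            for a, b in zip(EXPECTED_ORDER, EXPECTED_ORDER[1:]):
                if a in journey and b in journey and journey[a] > journey[b]:
                    violations.append((a, b))
            return violations

        j = {"arrival": datetime(2016, 5, 1, 10, 0),
             "triage": datetime(2016, 5, 1, 9, 45),      # suspicious timestamp
             "departure": datetime(2016, 5, 1, 13, 0)}
        print(conformance_check(j))                      # [('arrival', 'triage')]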

  12. Integrating multiple data sources in species distribution modeling: A framework for data fusion

    USGS Publications Warehouse

    Pacifici, Krishna; Reich, Brian J.; Miller, David A.W.; Gardner, Beth; Stauffer, Glenn E.; Singh, Susheela; McKerrow, Alexa; Collazo, Jaime A.

    2017-01-01

    The last decade has seen a dramatic increase in the use of species distribution models (SDMs) to characterize patterns of species’ occurrence and abundance. Efforts to parameterize SDMs often create a tension between the quality and quantity of data available to fit models. Estimation methods that integrate both standardized and non-standardized data types offer a potential solution to the tradeoff between data quality and quantity. Recently, several authors have developed approaches for jointly modeling two sources of data (one of high quality and one of lesser quality). We extend their work by allowing for explicit spatial autocorrelation in occurrence and detection error using a Multivariate Conditional Autoregressive (MVCAR) model, and we develop three models that share information in a less direct manner, resulting in more robust performance when the auxiliary data are of lesser quality. We describe these three new approaches (“Shared,” “Correlation,” “Covariates”) for combining data sources and show their use in a case study of the Brown-headed Nuthatch in the Southeastern U.S. and through simulations. All three of the approaches that used the second data source improved out-of-sample predictions relative to a single data source (“Single”). When the information in the second data source is of high quality, the Shared model performs best, but the Correlation and Covariates models also perform well. When the second data source is of lesser quality, the Correlation and Covariates models performed better, suggesting they are robust alternatives when little is known about auxiliary data collected opportunistically or through citizen scientists. Methods that allow both data types to be used will maximize the useful information available for estimating species distributions.

  13. Water-quality data-collection activities in Colorado and Ohio; Phase III, evaluation of existing data for use in assessing regional water-quality conditions and trends

    USGS Publications Warehouse

    Norris, J. Michael; Hren, Janet; Myers, Donna N.; Chaney, Thomas H.; Childress, Carolyn J. Oblinger

    1990-01-01

    During the past several years, a growing number of questions have been raised by members of Congress and others about the status of current water-quality conditions in the Nation, trends in water quality, and the major factors that affect water-quality conditions and trends. One area of particular interest and concern has been the suitability of existing water-quality data for addressing these types of questions at regional and national scales. In response to these questions and concerns, the U.S. Geological Survey began a pilot study in Colorado and Ohio to (1) determine the characteristics of current water-quality data-collection activities of Federal, State, regional, and local agencies and universities; and (2) determine how well the data from these activities, collected for various purposes and using different procedures, can be used to improve our ability to address the aforementioned questions. Colorado and Ohio were chosen for the pilot study because they represent regions with different types of water-quality issues and programs. The results of the study are specific to the two States and are not intended to be extrapolated to other States. The study was divided into three phases whose objectives were: Phase I - identify and inventory 1984 water-quality data-collection programs, including costs, in Colorado and Ohio, and identify those programs that meet a set of broad criteria for producing data that potentially are appropriate for water-quality assessments of regional and national scope. Phase II - evaluate the quality assurance of field and laboratory procedures used to produce the data from programs that met the broad criteria of Phase I. Phase III - compile the qualifying data from Phase II and evaluate the extent to which the resulting database can be used to address selected water-quality questions for the two States. This report presents the results of Phase III, focusing on (1) the number of measurements made at each data-collection site for selected constituents, (2) the areal distribution of those sites that have sufficient data for selected types of analyses, and (3) the availability of key ancillary information, such as streamflow, to address broad-scope questions such as: What are existing water-quality conditions? Has the water quality changed? How do existing water-quality conditions and changes in these conditions relate to natural factors and human-induced activities?

  14. Application of flowmeter and depth-dependent water quality data for improved production well construction.

    PubMed

    Gossell, M A; Nishikawa, T; Hanson, R T; Izbicki, J A; Tabidian, M A; Bertine, K

    1999-01-01

    Ground water production wells commonly are designed to maximize well yield and, therefore, may be screened over several water-bearing zones. These water-bearing zones usually are identified, and their hydrogeologic characteristics and water quality are inferred, on the basis of indirect data such as geologic and geophysical logs. Production well designs based on these data may result in wells that are drilled deeper than necessary and are screened through zones having low permeability or poor-quality ground water. In this study, we examined the application of flowmeter logging and depth-dependent water quality samples for the improved design of production wells in a complex hydrogeologic setting. As a demonstration of these techniques, a flowmeter log and depth-dependent water quality data were collected from a long-screened production well within a multilayered coastal aquifer system in the Santa Clara-Calleguas Basin, Ventura County, California. Results showed that the well yields most of its water from four zones that constitute 58% of the screened interval. The importance of these zones to well yield was not readily discernible from indirect geologic or geophysical data. The flowmeter logs and downhole water quality data also show that small quantities of poor-quality water could degrade the overall quality of water from the well. The data obtained from one well can be applied to other proposed wells in the same hydrologic basin. The application of flowmeter and depth-dependent water quality data to well design can reduce installation costs and improve the quantity and quality of water produced from wells in complex multiple-aquifer systems.

  15. Application of flowmeter and depth-dependent water quality data for improved production well construction

    USGS Publications Warehouse

    Gossell, M.A.; Nishikawa, Tracy; Hanson, Randall T.; Izbicki, John A.; Tabidian, M.A.; Bertine, K.

    1999-01-01

    Ground water production wells commonly are designed to maximize well yield and, therefore, may be screened over several water-bearing zones. These water-bearing zones usually are identified, and their hydrogeologic characteristics and water quality are inferred, on the basis of indirect data such as geologic and geophysical logs. Production well designs based on these data may result in wells that are drilled deeper than necessary and are screened through zones having low permeability or poor-quality ground water. In this study, we examined the application of flowmeter logging and depth-dependent water quality samples for the improved design of production wells in a complex hydrogeologic setting. As a demonstration of these techniques, a flowmeter log and depth-dependent water quality data were collected from a long-screened production well within a multilayered coastal aquifer system in the Santa Clara-Calleguas Basin, Ventura County, California. Results showed that the well yields most of its water from four zones that constitute 58% of the screened interval. The importance of these zones to well yield was not readily discernible from indirect geologic or geophysical data. The flowmeter logs and downhole water quality data also show that small quantities of poor-quality water could degrade the overall quality of water from the well. The data obtained from one well can be applied to other proposed wells in the same hydrologic basin. The application of flowmeter and depth-dependent water quality data to well design can reduce installation costs and improve the quantity and quality of water produced from wells in complex multiple-aquifer systems.

  16. Improving the quality of EHR recording in primary care: a data quality feedback tool.

    PubMed

    van der Bij, Sjoukje; Khan, Nasra; Ten Veen, Petra; de Bakker, Dinny H; Verheij, Robert A

    2017-01-01

    Electronic health record (EHR) data are used to exchange information among health care providers. For this purpose, the quality of the data is essential. We developed a data quality feedback tool that evaluates differences in EHR data quality among practices and software packages as part of a larger intervention. The tool was applied in 92 practices in the Netherlands using different software packages. Practices received data quality feedback in 2010 and 2012. We observed large differences in the quality of recording. For example, the percentage of episodes of care that had a meaningful diagnostic code ranged from 30% to 100%. Differences were highly related to the software package. A year after the first measurement, the quality of recording had improved significantly and differences decreased, with 67% of the physicians indicating that they had actively changed their recording habits based on the results of the first measurement. About 80% found the feedback helpful in pinpointing recording problems. One of the software vendors made changes in functionality as a result of the feedback. Our EHR data quality feedback tool is capable of highlighting differences among practices and software packages. As such, it also stimulates improvements. As substantial variability in recording is related to the software package, our study strengthens the evidence that data quality can be improved substantially by standardizing the functionalities of EHR software packages. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
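
    One of the feedback metrics mentioned, the share of episodes of care carrying a meaningful diagnostic code, can be computed per practice in a few lines; the field names below are assumptions, not the tool's actual schema.

        def meaningful_coding_rate(episodes):
            """episodes: list of dicts with 'practice' and 'icpc_code' (None or a
            code string); returns practice -> % of episodes with a meaningful code."""
            totals, coded = {}, {}
            for e in episodes:
                p = e["practice"]
                totals[p] = totals.get(p, 0) + 1
                if e.get("icpc_code"):               # treat None/'' as not meaningful
                    coded[p] = coded.get(p, 0) + 1
            return {p: 100.0 * coded.get(p, 0) / n for p, n in totals.items()}

        eps = [{"practice": "A", "icpc_code": "R74"},
               {"practice": "A", "icpc_code": None},
               {"practice": "B", "icpc_code": "K86"}]
        print(meaningful_coding_rate(eps))           # {'A': 50.0, 'B': 100.0}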

  17. 76 FR 77739 - Approval and Promulgation of Air Quality Implementation Plans; Massachusetts and New Hampshire...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-14

    ... in EPA's Air Quality System (AQS) database. To account for missing data, the procedures found in... determination is based upon complete, certified, quality-assured ambient air quality monitoring data for the... proposing? II. What is the background for this proposed action? III. What is EPA's analysis of data for...

  18. Enhancement of the Automated Quality Control Procedures for the International Soil Moisture Network

    NASA Astrophysics Data System (ADS)

    Heer, Elsa; Xaver, Angelika; Dorigo, Wouter; Messner, Romina

    2017-04-01

    In-situ soil moisture observations are still trusted as the most reliable data for validating remotely sensed soil moisture products; thus, the quality of in-situ soil moisture observations is of high importance. The International Soil Moisture Network (ISMN; http://ismn.geo.tuwien.ac.at/) provides in-situ soil moisture data from all around the world. The data are collected from individual networks and data providers, measured by different sensors at various depths. The data sets, which are delivered in different units, time zones, and data formats, are then transformed into homogeneous data sets. Erroneous behavior in soil moisture data is very difficult to detect, owing to annual and daily changes and, most significantly, the strong influence of precipitation and snow-melting processes. Only a few of the network providers perform a quality assessment on their data sets. Therefore, advanced quality control procedures have been developed for the ISMN (Dorigo et al. 2013). Three categories of quality checks were introduced: exceedance of boundary values, geophysical consistency checks, and a spectrum-based approach. The spectrum-based quality control algorithms aim to detect erroneous measurements that occur within plausible geophysical ranges, e.g. a sudden drop in soil moisture caused by a sensor malfunction. By defining several conditions that must be met by the original soil moisture time series and their first and second derivatives, such error types can be detected. Since the development of these methods, many more data providers have shared their data with the ISMN and new types of erroneous measurements have been identified; thus, an enhancement of the automated quality control procedures became necessary. In the present work, we introduce enhancements of the existing quality control algorithms. Additionally, six completely new quality checks have been developed, e.g. detection of suspicious values before or after NaN values, constant values, and values that lie in a stretch where a high majority of the values before and after are flagged and a sensor malfunction is therefore certain. For the evaluation of the enhanced automated quality control system, many test data sets were chosen and manually validated for comparison with the existing quality control procedures and the new algorithms. We will show improvements that assure an appropriate assessment of the ISMN data sets, which are used to validate satellite-derived soil moisture data and are the foundation of many other scientific publications.
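
    Three of the checks described, range exceedance, constant-value runs, and suspicious neighbours of missing values, can be sketched as follows; the thresholds are illustrative rather than the ISMN's operational values.

        import numpy as np

        def qc_flags(sm, lower=0.0, upper=0.6, const_len=12):
            """sm: 1-D soil moisture series (m3/m3), NaN marks missing values.
            Returns a boolean array, True where a value looks suspicious."""
            sm = np.asarray(sm, dtype=float)
            nan = np.isnan(sm)
            flags = (sm < lower) | (sm > upper)                 # plausible range
            flags |= np.concatenate(([False], nan[:-1]))        # value right after a gap
            flags |= np.concatenate((nan[1:], [False]))         # value right before a gap
            same = np.concatenate(([False], np.diff(sm) == 0))  # repeats previous value
            run = 0
            for i, s in enumerate(same):                        # flag long constant runs
                run = run + 1 if s else 0
                if run >= const_len - 1:
                    flags[i - run:i + 1] = True
            return flags & ~nan                                 # never flag the gaps themselves

        series = [0.25] * 5 + [np.nan] + [0.30, 0.31, 0.9]
        print(qc_flags(series, const_len=4))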

  19. 40 CFR 58.2 - Purpose.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... data upon which to base national assessments and policy decisions. ... ambient air quality and for reporting ambient air quality data and related information. The monitoring criteria pertain to the following areas: (1) Quality assurance procedures for monitor operation and data...

  20. SPATIAL PREDICTION OF AIR QUALITY DATA

    EPA Science Inventory

    Site-specific air quality monitoring data have been used extensively in both scientific and regulatory programs. As such, these data provide essential information to the public, environmental managers, and the atmospheric research community. Currently, air quality management prac...

  1. Data quality white paper.

    DOT National Transportation Integrated Search

    2008-06-01

    This paper looks at the issue of data quality within the context of transportation operations and management. The objective of this paper is to investigate data quality measures and how they are applied in existing systems. This paper explores the re...

  2. Traffic Data Quality Measurement : Final Report

    DOT National Transportation Integrated Search

    2004-09-15

    One of the foremost recommendations from the FHWA sponsored workshops on Traffic Data Quality (TDQ) in 2003 was a call for "guidelines and standards for calculating data quality measures." These guidelines and standards are expected to contain method...

  3. Assessing Public Metabolomics Metadata, Towards Improving Quality.

    PubMed

    Ferreira, João D; Inácio, Bruno; Salek, Reza M; Couto, Francisco M

    2017-12-13

    Public resources need to be appropriately annotated with metadata in order to make them discoverable, reproducible and traceable, further enabling them to be interoperable or integrated with other datasets. While data-sharing policies exist to promote the annotation process by data owners, these guidelines are still largely ignored. In this manuscript, we analyse automatic measures of metadata quality and suggest their application as a means to encourage data owners to increase the metadata quality of their resources and submissions, thereby contributing to higher quality data, improved data sharing, and the overall accountability of scientific publications. We analyse these metadata quality measures in the context of a real-world repository of metabolomics data (i.e. MetaboLights), including a manual validation of the measures and an analysis of their evolution over time. Our findings suggest that the proposed measures can be used to mimic a manual assessment of metadata quality.
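
    In the spirit of the automatic measures analysed, a toy completeness score over a fixed field list might look like this; the field list is an assumption, not MetaboLights' actual schema.

        EXPECTED_FIELDS = ["title", "description", "organism", "instrument",
                           "contact_email", "publication_doi"]

        def metadata_completeness(record):
            """record: dict of field -> value; returns a score in [0, 1]."""
            filled = sum(1 for f in EXPECTED_FIELDS if record.get(f))
            return filled / len(EXPECTED_FIELDS)

        print(metadata_completeness({"title": "Study 1", "organism": "human"}))
        # ~0.33, flagging a submission that would benefit from richer annotation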

  4. Use of ocean color scanner data in water quality mapping

    NASA Technical Reports Server (NTRS)

    Khorram, S.

    1981-01-01

    Remotely sensed data, in combination with in situ data, are used in assessing water quality parameters within the San Francisco Bay-Delta. The parameters include suspended solids, chlorophyll, and turbidity. Regression models are developed between each of the water quality parameter measurements and the Ocean Color Scanner (OCS) data. The models are then extended to the entire study area for mapping water quality parameters. The results include a series of color-coded maps, each pertaining to one of the water quality parameters, and the statistical analysis of the OCS data and regression models. It is found that concurrently collected OCS data and surface truth measurements are highly useful in mapping the selected water quality parameters and locating areas having relatively high biological activity. In addition, it is found to be virtually impossible, at least within this test site, to locate such areas on U-2 color and color-infrared photography.
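
    The regression step described, fitting models between in-situ measurements and scanner radiances and extending them across the scene, can be sketched with ordinary least squares on synthetic data:

        import numpy as np

        rng = np.random.default_rng(0)
        bands = rng.uniform(0, 1, size=(40, 4))              # OCS radiances at 40 stations
        chlorophyll = (2.0 + bands @ np.array([3.0, -1.0, 0.5, 0.0])
                       + rng.normal(0, 0.1, 40))             # in-situ measurements

        X = np.column_stack([np.ones(len(bands)), bands])    # add an intercept column
        coef, *_ = np.linalg.lstsq(X, chlorophyll, rcond=None)

        scene = rng.uniform(0, 1, size=(100_000, 4))         # whole-scene pixels
        mapped = np.column_stack([np.ones(len(scene)), scene]) @ coef
        print(coef.round(2), mapped[:3].round(2))            # model and mapped values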

  5. Spatial Data Quality Control Procedure applied to the Okavango Basin Information System

    NASA Astrophysics Data System (ADS)

    Butchart-Kuhlmann, Daniel

    2014-05-01

    Spatial data is a powerful form of information, capable of providing information of great interest and tremendous use to a variety of users. However, as with other data representing the 'real world', precision and accuracy must be high for the results of data analysis to be deemed reliable and thus applicable to real-world projects and undertakings. The spatial data quality control (QC) procedure presented here was developed as the topic of a Master's thesis, within the scope of, and using data from, the Okavango Basin Information System (OBIS), itself a part of The Future Okavango (TFO) project. The aim of the QC procedure was to form the basis of a method for determining the quality of spatial data relevant to hydrological, solute, and erosion transport modelling using the Jena Adaptable Modelling System (JAMS). As such, the quality of all data present in OBIS classified under the topics of elevation, geoscientific information, or inland waters was evaluated. Now that the initial data quality has been evaluated, efforts are underway to correct the errors found, thus improving the quality of the dataset.

  6. The "Consumer Report" version of Earth Science Data Quality description

    NASA Astrophysics Data System (ADS)

    Vicente, G. A.

    2014-12-01

    The generation, delivery, and access of Earth Observation (EO) data quality information is a difficult problem because data quality is not uniquely defined, is user dependent, is difficult to quantify, and is handled differently by different teams and perceived differently by data providers and data users. Initiatives such as the International Organization for Standardization (ISO) standards 19115 and 19157 are important steps forward, but they are difficult to implement, too complex, and out of reach for the majority of data producers and users. This is because most users only want a quick and intelligible way to compare data sets from different providers and find the ones that best fit their interests. We therefore need to simplify the problem by focusing on a few relevant quality parameters and developing a common framework to deliver them. This work is intended to tap into data producers' and users' knowledge and expertise on data quality for the development and adoption of a "Consumer Report" version of a "Data Quality Matrix". The goal is to find the most efficient and user-friendly approach for displaying a selected number of quality parameters rated for each product and targeted at groups of users.

  7. Measures and Indicators of Vgi Quality: AN Overview

    NASA Astrophysics Data System (ADS)

    Antoniou, V.; Skopeliti, A.

    2015-08-01

    The evaluation of VGI quality has been a very interesting and popular issue amongst academics and researchers. Various metrics and indicators have been proposed for evaluating VGI quality elements, and various efforts have focused on the use of well-established methodologies for evaluating VGI quality elements against authoritative data. In this paper, a number of research papers are reviewed and summarized in a detailed report on measures for each spatial data quality element. Emphasis is given to the methodology followed and the data used to assess and evaluate the quality of the VGI datasets. However, as the use of authoritative data is not always possible, many researchers have turned their focus to the analysis of new quality indicators that can function as proxies for understanding VGI quality. In this paper, the difficulties in using authoritative datasets are briefly presented and newly proposed quality indicators are discussed, as recorded through the literature review. We classify these new indicators into four main categories that relate to: i) data, ii) demographics, iii) socio-economic situation, and iv) contributors. This paper presents a dense yet comprehensive overview of the research in this field and provides the basis for the ongoing academic effort to create a practical quality evaluation method through the use of appropriate quality indicators.

  8. Guidance on Data Quality Assessment for Life Cycle Inventory Data

    EPA Science Inventory

    Data quality within Life Cycle Assessment (LCA) is a significant issue for the future support and development of LCA as a decision support tool and its wider adoption within industry. In response to current data quality standards such as the ISO 14000 series, various entities wit...

  9. Groundwater-quality data from the eastern Snake River Plain Aquifer, Jerome and Gooding Counties, south-central Idaho, 2017

    USGS Publications Warehouse

    Skinner, Kenneth D.

    2018-05-11

    Groundwater-quality samples and water-level data were collected from 36 wells in the Jerome/Gooding County area of the eastern Snake River Plain aquifer during June 2017. The wells included 30 wells sampled for the U.S. Geological Survey’s National Water-Quality Assessment project, plus an additional 6 wells selected to improve spatial distribution. The data provide water managers with an improved understanding of groundwater quality and flow directions in the area. Groundwater-quality samples were analyzed for nutrients, major ions, trace elements, and stable isotopes of water. Quality-assurance and quality-control measures consisted of multiple blank samples and a sequential replicate sample. All data are available online at the USGS National Water Information System.

  10. Harmonisation Initiatives of Copernicus Data Quality Control

    NASA Astrophysics Data System (ADS)

    Vescovi, F. D.; Lankester, T.; Coleman, E.; Ottavianelli, G.

    2015-04-01

    The Copernicus Space Component Data Access system (CSCDA) incorporates data contributions from a wide range of satellite missions. Through EO data handling and distribution, CSCDA serves a set of Copernicus Services related to Land, Marine and Atmosphere Monitoring, Emergency Management and Security and Climate Change. The quality of the delivered EO products is the responsibility of each contributing mission, and the Copernicus data Quality Control (CQC) service supports and complements such data quality control activities. The mission of the CQC is to provide a service of quality assessment on the provided imagery, to support the investigation related to product quality anomalies, and to guarantee harmonisation and traceability of the quality information. In terms of product quality control, the CQC carries out analysis of representative sample products for each contributing mission as well as coordinating data quality investigation related to issues found or raised by Copernicus users. Results from the product analysis are systematically collected and the derived quality reports stored in a searchable database. The CQC service can be seen as a privileged focal point with unique comparison capacities over the data providers. The comparison among products from different missions suggests the need for a strong, common effort of harmonisation. Technical terms, definitions, metadata, file formats, processing levels, algorithms, cal/val procedures etc. are far from being homogeneous, and this may generate inconsistencies and confusion among users of EO data. The CSCDA CQC team plays a significant role in promoting harmonisation initiatives across the numerous contributing missions, so that a common effort can achieve optimal complementarity and compatibility among the EO data from multiple data providers. This effort is done in coordination with important initiatives already working towards these goals (e.g. INSPIRE directive, CEOS initiatives, OGC standards, QA4EO etc.). This paper describes the main actions being undertaken by CQC to encourage harmonisation among space-based EO systems currently in service.

  11. A pilot study of routine immunization data quality in Bunza Local Government area: causes and possible remedies.

    PubMed

    Omoleke, Semeeh Akinwale; Tadesse, Menberu Getachew

    2017-01-01

    As a result of the poor quality of administrative data for routine immunisation (RI) in Nigeria, the real coverage of RI remains unknown, constituting a setback in curtailing vaccine-preventable diseases (VPDs). Consequently, the purpose of this pilot study is to identify the source(s) and evaluate the magnitude of poor data quality, as well as propose recommendations to address the problem. The authors conducted a cross-sectional study in which 5 of the 22 health facilities providing routine immunization services in Bunza Local Government Area (LGA), Kebbi State, Nigeria, were selected for data quality assessment. The reported coverage of RI in August and September 2016 was the primary element of evaluation in the selected Health Facilities (HFs). Administered questionnaires were adapted from WHO Data Quality Assurance and RI monitoring tools to generate data from the HFs, and a standardised community survey tool was used for household surveys. Data inconsistency was detected in 100% of the selected HFs. The maximum difference between the HF monthly summary and the RI registration book for penta-3 was 820% at MCH Bunza and 767% at PHC Balu, while a minimum difference of 3% was observed at Loko Dispensary. The maximum difference between the HF summary and the RI register for measles was 614% at MCH Bunza, with a 43% minimum difference at Loko. In contrast to the administrative coverage, 60-80% of the children sampled from households were either not immunised or only partially immunised. Further, the main sources of poor data quality include a heavy workload on RI providers, over-reliance on administrative coverage reports, and a lack of understanding among RI providers of the significance of high data quality. Substantial data discrepancies were observed in RI reports from all the Health Facilities, which is indicative of poor data quality at the LGA level. Community surveys also revealed over-reporting in administrative coverage data. Consequently, efforts should be geared towards achieving good data quality by immunisation stakeholders, as it has implications for disease prevention and control efforts.
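
    The discrepancy figures quoted above are straightforward percentage differences between the facility summary and the register; a small sketch with illustrative counts:

        def percent_difference(summary_count, register_count):
            """Relative difference of the reported summary against the register."""
            return 100.0 * (summary_count - register_count) / register_count

        # e.g. a facility reporting 92 penta-3 doses against 10 register entries
        print(f"{percent_difference(92, 10):.0f}%")   # 820%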

  12. A pilot study of routine immunization data quality in Bunza Local Government area: causes and possible remedies

    PubMed Central

    Omoleke, Semeeh Akinwale; Tadesse, Menberu Getachew

    2017-01-01

    Introduction: As a result of the poor quality of administrative data for routine immunisation (RI) in Nigeria, the real coverage of RI remains unknown, constituting a setback in curtailing vaccine-preventable diseases (VPDs). Consequently, the purpose of this pilot study is to identify the source(s) and evaluate the magnitude of poor data quality, as well as propose recommendations to address the problem. Methods: The authors conducted a cross-sectional study in which 5 of the 22 health facilities providing routine immunization services in Bunza Local Government Area (LGA), Kebbi State, Nigeria, were selected for data quality assessment. The reported coverage of RI in August and September 2016 was the primary element of evaluation in the selected Health Facilities (HFs). Administered questionnaires were adapted from WHO Data Quality Assurance and RI monitoring tools to generate data from the HFs, and a standardised community survey tool was used for household surveys. Results: Data inconsistency was detected in 100% of the selected HFs. The maximum difference between the HF monthly summary and the RI registration book for penta-3 was 820% at MCH Bunza and 767% at PHC Balu, while a minimum difference of 3% was observed at Loko Dispensary. The maximum difference between the HF summary and the RI register for measles was 614% at MCH Bunza, with a 43% minimum difference at Loko. In contrast to the administrative coverage, 60-80% of the children sampled from households were either not immunised or only partially immunised. Further, the main sources of poor data quality include a heavy workload on RI providers, over-reliance on administrative coverage reports, and a lack of understanding among RI providers of the significance of high data quality. Conclusion: Substantial data discrepancies were observed in RI reports from all the Health Facilities, which is indicative of poor data quality at the LGA level. Community surveys also revealed over-reporting in administrative coverage data. Consequently, efforts should be geared towards achieving good data quality by immunisation stakeholders, as it has implications for disease prevention and control efforts. PMID:28979641

  13. The swiss neonatal quality cycle, a monitor for clinical performance and tool for quality improvement

    PubMed Central

    2013-01-01

    Background: We describe the setup of a neonatal quality improvement tool and list which peer-reviewed requirements it fulfils and which it does not. We report on the effects observed so far, how the units can identify quality improvement potential, and how they can measure the effect of changes made to improve quality. Methods: Application of a prospective longitudinal national cohort data collection that uses algorithms to ensure high data quality (i.e. checks for completeness, plausibility and reliability) and to perform data imaging (Plsek’s p-charts and standardized mortality or morbidity ratio (SMR) charts). The collected data allow monitoring of a study collective of very-low-birth-weight infants born from 2009 to 2011 by applying a quality cycle following the steps ‘guideline – perform – falsify – reform’. Results: 2025 VLBW live-births from 2009 to 2011, representing 96.1% of all VLBW live-births in Switzerland, display a similar mortality rate but better morbidity rates when compared to other networks. Data quality in general is high but subject to improvement in some units. Seven measurements display quality improvement potential in individual units. The methods used fulfil several international recommendations. Conclusions: The Quality Cycle of the Swiss Neonatal Network is a helpful instrument to monitor and gradually help improve the quality of care in a region with high quality standards and low statistical discrimination capacity. PMID:24074151

  14. Comparison of 2008-2009 water years and historical water-quality data, upper Gunnison River Basin, Colorado

    USGS Publications Warehouse

    Solberg, Patricia A.; Moore, Bryan; Blacklock, Ty D.

    2012-01-01

    Population growth and changes in land use have the potential to affect water quality and quantity in the upper Gunnison River Basin. In 1995, the U.S. Geological Survey (USGS), in cooperation with the Bureau of Land Management, City of Gunnison, Colorado River Water Conservation District, Crested Butte South Metropolitan District, Gunnison County, Hinsdale County, Mount Crested Butte Water and Sanitation District, National Park Service, Town of Crested Butte, U.S. Forest Service, Upper Gunnison River Water Conservancy District, and Western State College, established a water-quality monitoring program in the upper Gunnison River Basin to characterize current water-quality conditions and to assess the effects of increased urban development and other land-use changes on water quality. The monitoring network has evolved into two groups of sites: (1) sites that are considered long term and (2) sites that are considered rotational. Data from the long-term sites assist in defining temporal changes in water quality (how conditions may change over time). The rotational sites assist in the spatial definition of water-quality conditions (how conditions differ throughout the basin) and address local and short-term concerns. Biannual summaries of the water-quality data from the monitoring network provide a point of reference for stakeholder discussions regarding the location and purpose of water-quality monitoring sites in the upper Gunnison River Basin. This report compares and summarizes the data collected during water years 2008 and 2009 to the historical data available at these sites. The introduction provides a map of the sampling sites, definitions of terms, and a one-page summary of selected water-quality conditions at the network sites. The remainder of the report is organized around the data collected at individual sites. Data collected during water years 2008 and 2009 are compared to historical data, State water-quality standards, and Federal water-quality guidelines. A seasonal Kendall test for trend analysis is completed when there is sufficient data (typically >5 years) at the station. Data were collected following USGS protocols.
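
    The seasonal Kendall test mentioned can be sketched compactly: compute the Mann-Kendall S statistic within each season and sum across seasons (tie handling and significance testing are omitted for brevity):

        def seasonal_kendall_s(values, seasons):
            """values: measurements in chronological order; seasons: season label
            per measurement. Positive S suggests an upward trend, negative S a
            downward one."""
            s = 0
            for season in set(seasons):
                v = [x for x, m in zip(values, seasons) if m == season]
                for i in range(len(v)):
                    for j in range(i + 1, len(v)):
                        s += (v[j] > v[i]) - (v[j] < v[i])   # sign of the difference
            return s

        # Six years of synthetic January/July samples with a rising trend
        vals   = [1.0, 2.0, 1.2, 2.1, 1.4, 2.3, 1.5, 2.5, 1.7, 2.6, 1.9, 2.8]
        months = ["Jan", "Jul"] * 6
        print(seasonal_kendall_s(vals, months))              # positive -> upward trend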

  15. Review of Available Water-Quality Data for the Southern Colorado Plateau Network and Characterization of Water Quality in Five Selected Park Units in Arizona, Colorado, New Mexico, and Utah, 1925 to 2004

    USGS Publications Warehouse

    Brown, Juliane B.

    2008-01-01

    Historical water-quality data in the National Park Service Southern Colorado Plateau Network have been collected irregularly and with little followup interpretation, restricting the value of the data. To help address these issues, to inform future water-quality monitoring planning efforts, and to address relevant National Park Service Inventory and Monitoring Program objectives, the U.S. Geological Survey, in cooperation with the National Park Service, compiled, reviewed, and summarized available historical water-quality data for 19 park units in the Southern Colorado Plateau Network. The data are described in terms of availability by major water-quality classes, park unit, site type, and selected identified water sources. The report also describes the geology, water resources, water-quality issues, data gaps, and water-quality standard exceedances identified in five of the park units determined to be of high priority. The five park units are Bandelier National Monument in New Mexico, Canyon de Chelly National Monument in Arizona, Chaco Culture National Historical Park in New Mexico, Glen Canyon National Recreation Area in Arizona and Utah, and Mesa Verde National Park in Colorado. Statistical summaries of water-quality characteristics are presented and considerations for future water-quality monitoring are provided for these five park units.

  16. General introduction for the “National Field Manual for the Collection of Water-Quality Data”

    USGS Publications Warehouse

    ,

    2018-02-28

    BackgroundAs part of its mission, the U.S. Geological Survey (USGS) collects data to assess the quality of our Nation’s water resources. A high degree of reliability and standardization of these data are paramount to fulfilling this mission. Documentation of nationally accepted methods used by USGS personnel serves to maintain consistency and technical quality in data-collection activities. “The National Field Manual for the Collection of Water-Quality Data” (NFM) provides documented guidelines and protocols for USGS field personnel who collect water-quality data. The NFM provides detailed, comprehensive, and citable procedures for monitoring the quality of surface water and groundwater. Topics in the NFM include (1) methods and protocols for sampling water resources, (2) methods for processing samples for analysis of water quality, (3) methods for measuring field parameters, and (4) specialized procedures, such as sampling water for low levels of mercury and organic wastewater chemicals, measuring biological indicators, and sampling bottom sediment for chemistry. Personnel who collect water-quality data for national USGS programs and projects, including projects supported by USGS cooperative programs, are mandated to use protocols provided in the NFM per USGS Office of Water Quality Technical Memorandum 2002.13. Formal training, for example, as provided in the USGS class, “Field Water-Quality Methods for Groundwater and Surface Water,” and field apprenticeships supplement the guidance provided in the NFM and ensure that the data collected are high quality, accurate, and scientifically defensible.

  17. Automatic Assessment of Acquisition and Transmission Losses in Indian Remote Sensing Satellite Data

    NASA Astrophysics Data System (ADS)

    Roy, D.; Purna Kumari, B.; Manju Sarma, M.; Aparna, N.; Gopal Krishna, B.

    2016-06-01

    The quality of remote sensing data is an important parameter that defines the extent of its usability in various applications. Data from remote sensing satellites are received as raw data frames at the ground station. These data may be corrupted by losses arising from interference during data transmission, problems during data acquisition, and sensor anomalies. It is therefore important to assess the quality of the raw data before product generation, enabling early anomaly detection, faster corrective action, and minimal product rejection. Manual screening of raw images is time consuming and not very accurate. In this paper, an automated process for the identification and quantification of losses in raw data, such as pixel dropouts, line losses, and data losses due to sensor anomalies, is discussed. Quality assessment of raw scenes based on these losses is also explained. This process is introduced in the data pre-processing stage and gives crucial data quality information to users at the time of browsing data for product ordering. It has also improved the product generation workflow by enabling faster and more accurate quality estimation.
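
    Line-loss detection of the kind described can be sketched as a scan over the line counters recovered from the raw frames; the frame layout is an assumption, not the actual downlink format.

        def assess_line_loss(line_counters, expected_lines):
            """line_counters: line numbers actually received;
            expected_lines: nominal number of lines in the scene."""
            received = set(line_counters)
            missing = [n for n in range(expected_lines) if n not in received]
            quality = 100.0 * (expected_lines - len(missing)) / expected_lines
            return missing, quality

        missing, q = assess_line_loss([0, 1, 2, 5, 6, 7, 8, 9], expected_lines=10)
        print(missing, f"{q:.0f}% of lines intact")   # [3, 4] 80% of lines intact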

  18. Explore the advantage of High-frequency Water Quality Data in Urban Surface Water: A Case Study in Bristol, UK

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Han, D.

    2017-12-01

    The water system is an essential component of a smart city's sustainability and resilience, and the freshness and beauty of a water body please people as well as benefit the local aquatic ecosystems. Water quality monitoring has evolved over recent decades from manual lab-based analysis to manual in-situ measurement and, most recently, to wireless-sensor-network (WSN) based solutions. The development of in-situ water quality sensors makes it possible to collect high-frequency, real-time water quality data. This poster aims to explore the advantages of high-frequency water quality data over low-frequency data collected manually. The data are collected by a remote, real-time, high-frequency water quality monitoring system built on the cutting-edge smart city infrastructure in Bristol, 'Bristol Is Open'. The system monitors the water quality of Bristol Floating Harbour, the focal area of Bristol, which has been redeveloped with new buildings and features over the past decades. The poster first briefly introduces the water quality monitoring system, then analyses the advantages of the sub-hourly water quality data, and finally offers suggestions on monitoring frequency.

  19. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    PubMed Central

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the U.S. National Cancer Institute (NCI) convened the “International Workshop on Proteomic Data Quality Metrics” in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: (1) an evolving list of comprehensive quality metrics and (2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22053864

  20. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)*

    PubMed Central

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the “International Workshop on Proteomic Data Quality Metrics” in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22052993

  1. The data quality analyzer: a quality control program for seismic data

    USGS Publications Warehouse

    Ringler, Adam; Hagerty, M.T.; Holland, James F.; Gonzales, A.; Gee, Lind S.; Edwards, J.D.; Wilson, David; Baker, Adam

    2015-01-01

    The Data Quality Analyzer (DQA) quantifies data quality through the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment, and those metrics are aggregated into a “grade” for each station. The DQA is being actively used for station diagnostics and evaluation, based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes), for stations in the Global Seismographic Network and the Advanced National Seismic System.
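
    A schematic of the metric-aggregation idea; the metric names, weights, and 0-100 scale here are hypothetical, and the DQA's actual scoring scheme is defined in the paper:

```python
def station_grade(metrics: dict, weights: dict) -> float:
    """Aggregate per-metric scores (0-100) into a weighted station grade.

    Only user-selected metrics (those present in `weights`) contribute,
    mirroring the DQA's user-selectable metric set.
    """
    selected = [m for m in weights if m in metrics]
    total_weight = sum(weights[m] for m in selected)
    return sum(metrics[m] * weights[m] for m in selected) / total_weight

metrics = {"availability": 99.2, "gap_count": 85.0, "timing_quality": 92.5}
weights = {"availability": 2.0, "gap_count": 1.0, "timing_quality": 1.0}
print(round(station_grade(metrics, weights), 1))  # 94.0
```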

  2. Groundwater-quality and quality-control data for two monitoring wells near Pavillion, Wyoming, April and May 2012

    USGS Publications Warehouse

    Wright, Peter R.; McMahon, Peter B.; Mueller, David K.; Clark, Melanie L.

    2012-01-01

    In June 2010, the U.S. Environmental Protection Agency installed two deep monitoring wells (MW01 and MW02) near Pavillion, Wyoming, to study groundwater quality. During April and May 2012, the U.S. Geological Survey, in cooperation with the Wyoming Department of Environmental Quality, collected groundwater-quality data and quality-control data from monitoring well MW01 and, following well redevelopment, quality-control data for monitoring well MW02. Two groundwater-quality samples were collected from well MW01: one sample was collected after purging about 1.5 borehole volumes, and a second sample was collected after purging 3 borehole volumes. Both samples were collected and processed using methods designed to minimize atmospheric contamination or changes to water chemistry. Groundwater-quality samples were analyzed for field water-quality properties (water temperature, pH, specific conductance, dissolved oxygen, oxidation potential); inorganic constituents, including naturally occurring radioactive compounds (radon, radium-226, and radium-228); organic constituents; dissolved gases; stable isotopes of methane, water, and dissolved inorganic carbon; and environmental tracers (carbon-14, chlorofluorocarbons, sulfur hexafluoride, tritium, helium, neon, argon, krypton, xenon, and the ratio of helium-3 to helium-4). Quality-control sample results associated with well MW01 were evaluated to determine the extent to which environmental sample analytical results were affected by bias and to evaluate the variability inherent in sample collection and laboratory analyses. Field documentation, environmental data, and quality-control data for activities that occurred at the two monitoring wells during April and May 2012 are presented.

  3. Data governance in predictive toxicology: A review.

    PubMed

    Fu, Xin; Wojak, Anna; Neagu, Daniel; Ridley, Mick; Travis, Kim

    2011-07-13

    Due to recent advances in data storage and sharing for further data processing in predictive toxicology, there is an increasing need for flexible data representations, secure and consistent data curation, and automated data quality checking. Toxicity prediction involves multidisciplinary data. There are hundreds of collections of chemical, biological, and toxicological data that are widely dispersed, mostly in the open literature, professional research bodies, and commercial companies. In order to better manage and make full use of such a large amount of toxicity data, there is a trend toward developing functionalities aimed at data governance in predictive toxicology to formalise a set of processes that guarantee high data quality and better data management. In this paper, data quality refers mainly to quality in a data storage sense (e.g., accuracy, completeness, and integrity) and not in a toxicological sense (e.g., the quality of experimental results). This paper reviews seven widely used predictive toxicology data sources and applications, with a particular focus on their data governance aspects, including data accuracy, data completeness, data integrity, metadata and its management, data availability, and data authorisation. This review reveals the current problems (e.g., the lack of systematic and standard measures of data quality) and desirable needs (e.g., better management and further use of captured metadata, and the development of flexible multi-level user access authorisation schemas) in the development of predictive toxicology data sources. The analytical results will help to address a significant gap in toxicology data quality assessment and lead to the development of novel frameworks for predictive toxicology data and model governance. While the discussed public data sources are well developed, there nevertheless remain some gaps in the development of a data governance framework to support predictive toxicology. In this paper, data governance is identified as the new challenge in predictive toxicology, and good use of it may provide a promising framework for developing high-quality and easily accessible toxicity data repositories. This paper also identifies important research directions that require further investigation in this area.

  4. Data governance in predictive toxicology: A review

    PubMed Central

    2011-01-01

    Background Due to recent advances in data storage and sharing for further data processing in predictive toxicology, there is an increasing need for flexible data representations, secure and consistent data curation, and automated data quality checking. Toxicity prediction involves multidisciplinary data. There are hundreds of collections of chemical, biological, and toxicological data that are widely dispersed, mostly in the open literature, professional research bodies, and commercial companies. In order to better manage and make full use of such a large amount of toxicity data, there is a trend toward developing functionalities aimed at data governance in predictive toxicology to formalise a set of processes that guarantee high data quality and better data management. In this paper, data quality refers mainly to quality in a data storage sense (e.g., accuracy, completeness, and integrity) and not in a toxicological sense (e.g., the quality of experimental results). Results This paper reviews seven widely used predictive toxicology data sources and applications, with a particular focus on their data governance aspects, including data accuracy, data completeness, data integrity, metadata and its management, data availability, and data authorisation. This review reveals the current problems (e.g., the lack of systematic and standard measures of data quality) and desirable needs (e.g., better management and further use of captured metadata, and the development of flexible multi-level user access authorisation schemas) in the development of predictive toxicology data sources. The analytical results will help to address a significant gap in toxicology data quality assessment and lead to the development of novel frameworks for predictive toxicology data and model governance. Conclusions While the discussed public data sources are well developed, there nevertheless remain some gaps in the development of a data governance framework to support predictive toxicology. In this paper, data governance is identified as the new challenge in predictive toxicology, and good use of it may provide a promising framework for developing high-quality and easily accessible toxicity data repositories. This paper also identifies important research directions that require further investigation in this area. PMID:21752279

  5. Predicting Causes of Data Quality Issues in a Clinical Data Research Network.

    PubMed

    Khare, Ritu; Ruth, Byron J; Miller, Matthew; Tucker, Joshua; Utidjian, Levon H; Razzaghi, Hanieh; Patibandla, Nandan; Burrows, Evanette K; Bailey, L Charles

    2018-01-01

    Clinical data research networks (CDRNs) invest substantially in identifying and investigating data quality problems. While identification is largely automated, investigation and resolution are carried out manually at individual institutions. In the PEDSnet CDRN, we found that only approximately 35% of the identified data quality issues are resolvable, as they are caused by errors in the extract-transform-load (ETL) code. Nonetheless, with no prior knowledge of issue causes, partner institutions end up spending significant time investigating issues that represent either inherent data characteristics or false alarms. This work investigates whether the causes (ETL, Characteristic, or False alarm) can be predicted before time is spent investigating issues. We trained a classifier on the metadata from 10,281 real-world data quality issues and achieved a cause-prediction F1-measure of up to 90%. While initially tested on PEDSnet, the proposed methodology is applicable to other CDRNs facing similar bottlenecks in handling data quality results.
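
    A minimal sketch of the cause-prediction setup, assuming issue metadata has been flattened into numeric features; the feature names, toy data, and model choice are illustrative and not the PEDSnet pipeline:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical metadata: one row per detected data quality issue.
issues = pd.DataFrame({
    "check_type":   [0, 1, 1, 2, 0, 2, 1, 0] * 50,    # encoded check category
    "table_id":     [3, 1, 4, 1, 5, 2, 3, 1] * 50,    # encoded source table
    "rows_flagged": [10, 5000, 12, 7, 900, 3, 40, 8] * 50,
    "cause":        ["ETL", "Characteristic", "False alarm", "ETL",
                     "Characteristic", "False alarm", "ETL", "ETL"] * 50,
})
X, y = issues.drop(columns="cause"), issues["cause"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("macro F1:", round(f1_score(y_te, clf.predict(X_te), average="macro"), 2))
```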

  6. ESGF and WDCC: The Double Structure of the Digital Data Storage at DKRZ

    NASA Astrophysics Data System (ADS)

    Toussaint, F.; Höck, H.

    2016-12-01

    For some years now, digital repositories in climate science have faced new challenges: international projects are global collaborations, and data storage has in parallel moved to federated, distributed storage systems like ESGF. For long-term archival (LTA) storage, on the other hand, communities, funders, and data users place stronger demands on data and metadata quality to facilitate data use and reuse. At DKRZ, this situation has led to a twofold data dissemination system, which in turn influences administration, workflows, and the sustainability of the data. The ESGF system focuses on the needs of users as partners in global projects; it includes replication tools, detailed global project standards, and efficient search for the data to download. In contrast, DKRZ's classical CERA LTA storage aims at long-term data holding and data curation as well as data reuse, which requires high metadata quality standards. In addition, a Digital Object Identifier (DOI) publication service for the direct integration of research data into scientific publications has been implemented for LTA data. The editorial process at DKRZ-LTA ensures the quality of metadata and research data; the DOI and a citation code are provided and then registered under DataCite's (datacite.org) regulations. Across the data life cycle, continuous reliability of data and metadata quality is essential to allow data handling at the petabyte level, long-term usability of the data, and adequate publication of results. These considerations lead to the question of what quality means with respect to the data, the repository itself, the publisher, and the user. Global consensus is needed for these assessments because the phases of the end-to-end workflow mesh with one another: checks of data and metadata need to go hand in hand with the processes of production and storage. The results can be judged following a Quality Maturity Matrix (QMM), and repositories can be certified according to their trustworthiness. For the publication of scientific conclusions, the scientific community, funders, media, and policy makers ask about the publisher's impact in terms of readers' credit, circulation, and presentation quality. The paper describes the data life cycle, with emphasis on the different levels of quality assessment that ensure data and metadata quality at DKRZ.

  7. Discovering and Responding to the Challenges of Data Quality Throughout the Data Lifecycle

    NASA Astrophysics Data System (ADS)

    Moroni, D. F.

    2014-12-01

    Data quality is perhaps one of the most valuable yet misunderstood and unresolved elements of the science data life cycle. This is not for lack of significant effort by many within the international science data community to help develop and improve the meaning of data quality and the corresponding standards, tools, and services which, when properly applied, collectively serve the interests of the data provider, data center, and ultimately the end user. It is often thought that data quality concerns should focus primarily on ensuring science data are well characterized and understood by the end user. Although this is a crucial goal, the common result of this singular emphasis is a tendency toward dataset-specific solutions, which are often not planned with long-term preservation in mind. Given the recent flurry and plethora of existing tools and standards with which many data quality concerns may be addressed, it can almost be a lifelong pursuit for a single data user or provider to sift through it all, or at least to become a savvy expert in a particular standard such as ISO 19157. The other concern is that not all standards are openly accessible (e.g., ISO), which adds a financial hurdle on top of the already difficult learning curve. A systems engineering approach offers a solution to the current data quality debacle by establishing and promoting a uniform and ubiquitous application of standards and solutions across heterogeneous datasets of many science disciplines. Here I present real-world examples along with both existing and theoretical solutions to known data quality concerns using a NASA-inspired systems engineering approach. Part of the problem is "knowing" the specific data quality concerns, which is why one of the tools I use is a simple "Use Case" template, custom-tailored for data quality and designed with the heterogeneity of data quality issues in mind. As an aid to this template, a corresponding "Use Case Response" provides the systems engineer with an inventory of existing solutions and the degree to which those solutions may meet the deliverables required by each use case. The coupling of the "Use Case" with the "Use Case Response" is the primary key to mapping the elements of the knowledge base to the solutions.
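
    One way to read the "Use Case" / "Use Case Response" pairing is as a pair of structured records; the sketch below is hypothetical, with invented field names:

```python
from dataclasses import dataclass, field

@dataclass
class DataQualityUseCase:
    """Template capturing a single data quality concern (fields invented)."""
    title: str
    dataset: str
    concern: str
    deliverables: list = field(default_factory=list)

@dataclass
class UseCaseResponse:
    """Inventory of existing solutions mapped to a use case's deliverables."""
    use_case: DataQualityUseCase
    solutions: dict = field(default_factory=dict)  # deliverable -> solution

    def coverage(self) -> float:
        met = sum(1 for d in self.use_case.deliverables if d in self.solutions)
        return met / len(self.use_case.deliverables)

uc = DataQualityUseCase(
    title="SST uncertainty", dataset="sea surface temperature L2",
    concern="per-pixel uncertainty not characterized",
    deliverables=["uncertainty field", "lineage metadata"])
resp = UseCaseResponse(uc, solutions={"lineage metadata": "ISO 19115 lineage"})
print(f"{resp.coverage():.0%} of deliverables covered")  # 50%
```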

  8. Water Resources Data for California, 1969; Part 2: Water Quality Records

    USGS Publications Warehouse

    1970-01-01

    Water-resources investigations of the U.S. Geological Survey include the collection of water-quality data on the chemical and physical characteristics of surface and ground-water supplies of the Nation. These data on the quality of surface water in California for the 1969 water year are presented in this report. Data for a few water-quality stations in bordering States are also included. The data were collected by the Water Resources Division of the Geological Survey under the direction of R. Stanley Lord, district chief, Menlo Park, Calif.

  9. Increasing the Use of Earth Science Data and Models in Air Quality Management.

    PubMed

    Milford, Jana B; Knight, Daniel

    2017-04-01

    In 2010, the U.S. National Aeronautics and Space Administration (NASA) initiated the Air Quality Applied Science Team (AQAST) as a 5-year, $17.5-million award with 19 principal investigators. AQAST aims to increase the use of Earth science products in air quality-related research and to help meet air quality managers' information needs. We conducted a Web-based survey and a limited number of follow-up interviews to investigate federal, state, tribal, and local air quality managers' perspectives on usefulness of Earth science data and models, and on the impact AQAST has had. The air quality managers we surveyed identified meeting the National Ambient Air Quality Standards for ozone and particulate matter, emissions from mobile sources, and interstate air pollution transport as top challenges in need of improved information. Most survey respondents viewed inadequate coverage or frequency of satellite observations, data uncertainty, and lack of staff time or resources as barriers to increased use of satellite data by their organizations. Managers who have been involved with AQAST indicated that the program has helped build awareness of NASA Earth science products, and assisted their organizations with retrieval and interpretation of satellite data and with application of global chemistry and climate models. AQAST has also helped build a network between researchers and air quality managers with potential for further collaborations. NASA's Air Quality Applied Science Team (AQAST) aims to increase the use of satellite data and global chemistry and climate models for air quality management purposes, by supporting research and tool development projects of interest to both groups. Our survey and interviews of air quality managers indicate they found value in many AQAST projects and particularly appreciated the connections to the research community that the program facilitated. Managers expressed interest in receiving continued support for their organizations' use of satellite data, including assistance in retrieving and interpreting data from future geostationary platforms meant to provide more frequent coverage for air quality and other applications.

  10. Comparison of 2002 Water Year and Historical Water-Quality Data, Upper Gunnison River Basin, Colorado

    USGS Publications Warehouse

    Spahr, N.E.

    2003-01-01

    Introduction: Population growth and changes in land-use practices have the potential to affect water quality and quantity in the upper Gunnison River basin. In 1995, the U.S. Geological Survey (USGS), in cooperation with local sponsors (City of Gunnison, Colorado River Water Conservation District, Crested Butte South Metropolitan District, Gunnison County, Mount Crested Butte Water and Sanitation District, National Park Service, Town of Crested Butte, and Upper Gunnison River Water Conservancy District), established a water-quality monitoring program in the upper Gunnison River basin to characterize current water-quality conditions and to assess the effects of increased urban development and other land-use changes on water quality. The monitoring network has evolved into two groups of stations: long-term stations and rotational stations. The long-term stations are monitored to help define temporal changes in water quality (how conditions have changed over time). The rotational stations are monitored to help define the spatial distribution of water-quality conditions (how conditions differ throughout the basin) and to address local and short-term concerns. Another group of stations (rotational group 2) will be chosen and sampled beginning in water year 2004. Annual summaries of the water-quality data from the monitoring network provide a point of reference for discussions regarding water-quality sampling in the upper Gunnison River basin. This summary includes data collected during water year 2002. The introduction provides a map of the sampling locations, definitions of terms, and a one-page summary of selected water-quality conditions at the network stations. The remainder of the summary is organized around the data collected at individual stations. Data collected during water year 2002 are compared to historical data (data collected for this network since 1995), state water-quality standards, and federal water-quality guidelines. Data were collected during water year 2002 following USGS protocols (U.S. Geological Survey, variously dated).

  11. Application of Satellite and Ozonesonde Data to the Study of Nighttime Tropospheric Ozone Impacts and Relationship to Air Quality

    NASA Astrophysics Data System (ADS)

    Osterman, G. B.; Eldering, A.; Neu, J. L.; Tang, Y.; McQueen, J.; Pinder, R. W.

    2011-12-01

    To help protect human health and ecosystems, regional-scale atmospheric chemistry models are used to forecast high ozone events and to design emission control strategies that decrease the frequency and severity of ozone events. Despite the impact that nighttime aloft ozone can have on surface ozone, regional-scale atmospheric chemistry models often do not simulate nighttime ozone concentrations well, nor do they sufficiently capture ozone transport patterns. Fully characterizing the importance of nighttime ozone has been hampered by limited measurements of the vertical distribution of ozone and ozone precursors. The main focus of this work is to begin to utilize remote sensing data sets to characterize the impact of nighttime aloft ozone on air quality events. We will describe our plans to use NASA satellite data sets, transport models, and air quality models to study ozone transport, focusing primarily on nighttime ozone, and provide initial results. We will use satellite and ozonesonde data to help understand how well the air quality models simulate ozone in the lower free troposphere and to characterize the impact of nighttime ozone on air quality events. Our specific objectives are: 1) characterize nighttime aloft ozone using remote sensing data and sondes; 2) evaluate the ability of the Community Multi-scale Air Quality (CMAQ) model and the National Air Quality Forecast Capability (NAQFC) model to capture nighttime aloft ozone and its relationship to air quality events; and 3) analyze a set of air quality events and determine their relationship to nighttime aloft ozone. We will achieve our objectives by utilizing ozone profile data from the NASA Earth Observing System (EOS) Tropospheric Emission Spectrometer (TES) and other sensors, ozonesonde data collected during the Aura mission (IONS), EPA AirNow ground station ozone data, the CMAQ continental-scale air quality model, and the National Air Quality Forecast model.

  12. Using Third Party Data to Update a Reference Dataset in a Quality Evaluation Service

    NASA Astrophysics Data System (ADS)

    Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.

    2016-06-01

    Nowadays it is easy to find many data sources for various regions around the globe. In this 'data overload' scenario there is little, if any, information available about the quality of these data sources. To readily provide such data quality information, we previously presented the architecture of a web service for the automation of quality control of spatial datasets, running on a Web Processing Service (WPS). For quality procedures that require an external reference dataset, such as positional accuracy or completeness assessment, the architecture permits the use of a reference dataset. However, this reference dataset is not ageless, since it suffers the natural degradation over time inherent to geospatial features. To mitigate this problem we propose the Time Degradation & Updating Module, which applies assessed data as a tool to keep the reference database up to date. The main idea is to utilize datasets sent to the quality evaluation service as a source of 'candidate data elements' for updating the reference database. After the evaluation, if some elements of a candidate dataset reach a determined quality level, they can be used as input data to improve the current reference database. In this work we present the first design of the Time Degradation & Updating Module. We believe that the outcomes can contribute to the pursuit of a fully automatic online quality evaluation platform.
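
    A condensed sketch of the proposed updating logic: evaluated candidate elements that reach a quality threshold are merged into the reference database; the threshold and record shapes are illustrative assumptions:

```python
QUALITY_THRESHOLD = 0.95  # illustrative acceptance level

def update_reference(reference: dict, candidates: list) -> int:
    """Merge evaluated candidate features into the reference dataset.

    `candidates` are (feature_id, geometry, quality_score) tuples produced
    by the quality evaluation; only high-scoring elements are accepted.
    """
    accepted = 0
    for feature_id, geometry, score in candidates:
        if score >= QUALITY_THRESHOLD:
            reference[feature_id] = geometry   # replace or insert
            accepted += 1
    return accepted

reference = {"road_17": "LINESTRING(0 0, 1 1)"}
candidates = [("road_17", "LINESTRING(0 0, 1 1.02)", 0.97),
              ("road_18", "LINESTRING(2 2, 3 3)", 0.80)]
print(update_reference(reference, candidates), "feature(s) accepted")  # 1
```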

  13. Using primary care electronic health record data for comparative effectiveness research: experience of data quality assessment and preprocessing in The Netherlands.

    PubMed

    Huang, Yunyu; Voorham, Jaco; Haaijer-Ruskamp, Flora M

    2016-07-01

    Details of data quality, and of how quality issues were solved, have not been reported in published comparative effectiveness studies using electronic health record data. We developed a conceptual framework for data quality assessment and preprocessing and applied it to a study comparing angiotensin-converting enzyme inhibitors with angiotensin receptor blockers with respect to renal function decline in diabetes patients. The framework establishes a line of thought to identify and act on data issues. The core concept is to evaluate whether data are fit for use for the research tasks. Possible quality problems are listed through specific signal detections and verified to determine whether they are true problems. Optimal solutions are then selected for the identified problems. This framework can be used in observational studies to improve the validity of results.
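
    A minimal sketch of the signal-detection step in such a framework, assuming tabular EHR extracts; the rules, column names, and threshold are invented for illustration:

```python
import pandas as pd

def detect_signals(df: pd.DataFrame) -> list:
    """Run simple fitness-for-use checks and return potential quality problems.

    Each signal still needs manual verification before any correction is
    applied, following the detect -> verify -> solve sequence.
    """
    signals = []
    if df["creatinine"].lt(0).any():
        signals.append("negative creatinine values")
    if df.duplicated(subset=["patient_id", "measurement_date"]).any():
        signals.append("duplicate measurements per patient and date")
    missing = df["egfr"].isna().mean()
    if missing > 0.2:                        # illustrative threshold
        signals.append(f"eGFR missing for {missing:.0%} of rows")
    return signals

df = pd.DataFrame({"patient_id": [1, 1, 2],
                   "measurement_date": ["2014-01-01", "2014-01-01", "2014-02-01"],
                   "creatinine": [0.9, -1.0, 1.1],
                   "egfr": [60.0, None, None]})
print(detect_signals(df))
```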

  14. A Representation-Driven Ontology for Spatial Data Quality Elements, with Orthoimagery as Running Example

    NASA Astrophysics Data System (ADS)

    Hangouët, J.-F.

    2015-08-01

    The many facets of what is encompassed by an expression such as "quality of spatial data" can be considered a specific domain of reality worthy of formal description, i.e., of ontological abstraction. Various ontologies for data quality elements have already been proposed in the literature. Today, the system of quality elements is most generally used and discussed according to the configuration set out in the "data dictionary for data quality" of the international standard ISO 19157. Our communication proposes an alternative view, founded on a perspective that focuses on the specificity of spatial data as a product: the representation perspective, where data in the computer are meant to show things of the geographic world and to be interpreted as such. The resulting ontology introduces new elements, the usefulness of which will be illustrated by orthoimagery examples.

  15. 42 CFR 480.143 - QIO involvement in shared health data systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Disclosure of Confidential Information § 480.143 QIO involvement in shared health data...

  16. 42 CFR 480.143 - QIO involvement in shared health data systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Disclosure of Confidential Information § 480.143 QIO involvement in shared health data...

  17. 42 CFR 480.143 - QIO involvement in shared health data systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Disclosure of Confidential Information § 480.143 QIO involvement in shared health data...

  18. An Air Quality Data Analysis System for Interrelating Effects, Standards and Needed Source Reductions

    ERIC Educational Resources Information Center

    Larsen, Ralph I.

    1973-01-01

    Makes recommendations for a single air quality data system (using averaging time) for interrelating air pollution effects, air quality standards, air quality monitoring, diffusion calculations, source-reduction calculations, and emission standards. (JR)

  19. FIA Quality Assurance Program: Evaluation of a Tree Matching Algorithm for Paired Forest Inventory Data

    Treesearch

    James E. Pollard; James A. Westfall; Paul A. Patterson; David L. Gartner

    2005-01-01

    The quality of Forest Inventory and Analysis inventory data can be documented by having quality assurance crews remeasure plots originally measured by field crews within 2 to 3 weeks of the initial measurement, and assessing the difference between the original and remeasured data. Estimates of measurement uncertainty for the data are generated using paired data...

  20. From Board to Bedside: How the Application of Financial Structures to Safety and Quality Can Drive Accountability in a Large Health Care System.

    PubMed

    Austin, J Matthew; Demski, Renee; Callender, Tiffany; Lee, K H Ken; Hoffman, Ann; Allen, Lisa; Radke, Deborah A; Kim, Yungjin; Werthman, Ronald J; Peterson, Ronald R; Pronovost, Peter J

    2017-04-01

    As the health care system in the United States places greater emphasis on the public reporting of quality and safety data and its use to determine payment, provider organizations must implement structures that ensure discipline and rigor regarding these data. An academic health system, as part of a performance management system, applied four key components of a financial reporting structure to support the goal of top-to-bottom accountability for improving quality and safety. The four components implemented by Johns Hopkins Medicine were governance, accountability, reporting of consolidated quality performance statements, and auditing. Governance is provided by the health system's Patient Safety and Quality Board Committee, which reviews goals and strategy for patient safety and quality, reviews quarterly performance for each entity, and holds organizational leaders accountable for performance. An accountability plan includes escalating levels of review corresponding to the number of months an entity misses the defined performance target for a measure. A consolidated quality statement helps inform the Patient Safety and Quality Board Committee and leadership on key quality and safety issues. An audit evaluates the efficiency and effectiveness of processes for data collection, validation, and storage, so as to ensure the accuracy and completeness of quality measure reporting. If hospitals and health systems truly want to prioritize improvements in safety and quality, they will need to create a performance management system that ensures data validity and supports performance accountability. Without valid data, it is difficult to know whether a performance gap is due to data quality or clinical quality.

  1. Communicating Instantaneous Air Quality Data: Pilot Project

    EPA Pesticide Factsheets

    EPA is launching a pilot project to test a new tool for making instantaneous outdoor air quality data useful for the public. The new “sensor scale” is designed to be used with sensors.

  2. 21 CFR 20.114 - Data and information submitted pursuant to cooperative quality assurance agreements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Records § 20.114 Data and information submitted pursuant to cooperative quality assurance agreements. Data and information submitted to the Food and Drug Administration pursuant to a cooperative quality... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Data and information submitted pursuant to...

  3. 21 CFR 20.114 - Data and information submitted pursuant to cooperative quality assurance agreements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Records § 20.114 Data and information submitted pursuant to cooperative quality assurance agreements. Data and information submitted to the Food and Drug Administration pursuant to a cooperative quality... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Data and information submitted pursuant to...

  4. 21 CFR 20.114 - Data and information submitted pursuant to cooperative quality assurance agreements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Records § 20.114 Data and information submitted pursuant to cooperative quality assurance agreements. Data and information submitted to the Food and Drug Administration pursuant to a cooperative quality... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Data and information submitted pursuant to...

  5. 21 CFR 20.114 - Data and information submitted pursuant to cooperative quality assurance agreements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Records § 20.114 Data and information submitted pursuant to cooperative quality assurance agreements. Data and information submitted to the Food and Drug Administration pursuant to a cooperative quality... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Data and information submitted pursuant to...

  6. 75 FR 62026 - Approval and Promulgation of Implementation Plans and Designation of Areas for Air Quality...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-07

    ... techniques, provisions for the establishment and operation of appropriate devices necessary to collect data... information.) Ambient air quality monitoring data for the 3-year period must meet a data completeness requirement. The ambient air quality monitoring data completeness requirement is met when the percent of days...

  7. The influence of data curation on QSAR Modeling – examining issues of quality versus quantity of data (SOT)

    EPA Science Inventory

    The construction of QSAR models is critically dependent on the quality of available data. As part of our efforts to develop public platforms to provide access to predictive models, we have attempted to discriminate the influence of the quality versus quantity of data available ...

  8. Structured data quality reports to improve EHR data quality.

    PubMed

    Taggart, Jane; Liaw, Siaw-Teng; Yu, Hairong

    2015-12-01

    To examine whether a structured data quality report (SDQR) and feedback sessions with practice principals and managers improve the quality of routinely collected data in EHRs. The intervention was conducted in four general practices participating in the Fairfield neighborhood electronic Practice Based Research Network (ePBRN). Data were extracted from their clinical information systems and summarised as an SDQR to guide feedback to practice principals and managers at 0, 4, 8, and 12 months. Data quality (DQ) metrics included completeness, correctness, consistency, and duplication of patient records. Information on data recording practices, data quality improvement, and the utility of SDQRs was collected at the feedback sessions at the practices. The main outcome measure was change in the recording of clinical information and the level of compliance with Royal Australian College of General Practitioners (RACGP) targets. Birth date was 100% complete and gender 99% complete at baseline, and these levels were maintained. The DQ of all variables measured improved significantly (p<0.01) over 12 months but was not sufficient to comply with RACGP standards. Improvement was greatest for allergies. There was no significant change in duplicate records. SDQRs and feedback sessions support general practitioners and practice managers in focusing on improving the recording of patient information. However, the improved practice DQ was not sufficient to meet RACGP targets. Randomised controlled studies are required to evaluate strategies to improve data quality and any associated improvements in the safety and quality of care.
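
    A small sketch of how the four DQ metrics might be computed from a practice extract; the column names and rules are hypothetical, and the RACGP targets are not encoded here:

```python
import pandas as pd

def sdqr_metrics(patients: pd.DataFrame) -> dict:
    """Completeness, correctness, consistency, and duplication measures."""
    completeness = patients[["birth_date", "gender", "allergies"]].notna().mean()
    correctness = patients["gender"].isin(["M", "F", "other"]).mean()
    consistency = (pd.to_datetime(patients["birth_date"], errors="coerce")
                     .le(pd.Timestamp.now())).mean()   # no future birth dates
    duplication = patients.duplicated(
        subset=["first_name", "surname", "birth_date"]).mean()
    return {"completeness_%": (completeness * 100).round(1).to_dict(),
            "correctness_%": round(correctness * 100, 1),
            "consistency_%": round(consistency * 100, 1),
            "duplicate_records_%": round(duplication * 100, 1)}

patients = pd.DataFrame({
    "first_name": ["Ann", "Ann", "Bob"],
    "surname":    ["Lee", "Lee", "Kay"],
    "birth_date": ["1980-05-04", "1980-05-04", None],
    "gender":     ["F", "F", "M"],
    "allergies":  [None, "penicillin", None],
})
print(sdqr_metrics(patients))
```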

  9. Does adding clinical data to administrative data improve agreement among hospital quality measures?

    PubMed

    Hanchate, Amresh D; Stolzmann, Kelly L; Rosen, Amy K; Fink, Aaron S; Shwartz, Michael; Ash, Arlene S; Abdulkerim, Hassen; Pugh, Mary Jo V; Shokeen, Priti; Borzecki, Ann

    2017-09-01

    Hospital performance measures based on patient mortality and readmission have indicated modest rates of agreement. We examined whether combining clinical data on laboratory tests and vital signs with administrative data leads to improved agreement with each other, and with other measures of hospital performance, in the nation's largest integrated health care system. We used patient-level administrative and clinical data, and hospital-level data on quality indicators, for 2007-2010 from the Veterans Health Administration (VA). For patients admitted for acute myocardial infarction (AMI), heart failure (HF), and pneumonia, we examined changes in hospital performance on 30-d mortality and 30-d readmission rates as a result of adding clinical data to administrative data. We evaluated whether this enhancement yielded improved measures of hospital quality, based on concordance with other hospital quality indicators. For 30-d mortality, data enhancement improved model performance and significantly changed hospital performance profiles; for 30-d readmission, the impact was modest. Concordance between the enhanced measures of both outcomes, and with other hospital quality measures (including Joint Commission process measures, VA Surgical Quality Improvement Program (VASQIP) mortality and morbidity, and case volume), remained poor. Adding laboratory tests and vital signs to measure hospital performance on mortality and readmission did not improve the poor rates of agreement across hospital quality indicators in the VA. Efforts to improve risk adjustment models should continue; however, evidence of validation should precede their use as reliable measures of quality.
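
    A toy illustration of the agreement question: rank the same hospitals under two indicators and measure their concordance; the ranks are synthetic, not VA data:

```python
from scipy.stats import kendalltau

# Hypothetical rankings of eight hospitals under two quality indicators.
mortality_rank   = [1, 2, 3, 4, 5, 6, 7, 8]
readmission_rank = [5, 1, 7, 2, 8, 3, 4, 6]
tau, p = kendalltau(mortality_rank, readmission_rank)
print(f"Kendall tau = {tau:.2f} (p = {p:.2f})")  # low tau -> poor agreement
```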

  10. Water resources data, North Carolina, water year 2004. Volume 2: Ground-water records

    USGS Publications Warehouse

    Howe, S.S.; Breton, P.L.; Chapman, M.J.

    2005-01-01

    Water-resources data for the 2004 water year for North Carolina consist of records of stage, discharge, and water quality for streams; stage and contents for lakes and reservoirs; precipitation; and ground-water levels and water quality of ground water. Volume 1 contains discharge records for 217 gaging stations; stage and contents for 58 lakes and reservoirs; stage-only records for 22 gaging stations; elevations for 9 stations; water quality for 39 gaging stations and 5 miscellaneous sites, and continuous water quality for 35 sites; and continuous precipitation at 127 sites. Volume 2 contains ground-water-level data from 161 observation wells, ground-water-quality data from 38 wells, continuous water quality for 7 sites, and continuous precipitation at 7 sites. Additional water data were collected at 51 sites not involved in the systematic data-collection program and are published as miscellaneous measurements in Volume 1. The collection of water-resources data in North Carolina is part of the National Water-Data System operated by the U.S. Geological Survey in cooperation with State, municipal, and Federal agencies.

  11. An Ontology for Telemedicine Systems Resiliency to Technological Context Variations in Pervasive Healthcare

    PubMed Central

    Bults, Richard G. A.; Van Sinderen, Marten J.; Widya, Ing; Hermens, Hermie J.

    2015-01-01

    Clinical data are crucial for any medical case to study and understand a patient’s condition and to give the patient the best possible treatment. Pervasive healthcare systems apply information and communication technology to enable the usage of ubiquitous clinical data by authorized medical persons. However, quality of clinical data in these applications is, to a large extent, determined by the technological context of the patient. A technological context is characterized by potential technological disruptions that affect optimal functioning of technological resources. The clinical data based on input from these technological resources can therefore have quality degradations. If these degradations are not noticed, the use of this clinical data can lead to wrong treatment decisions, which potentially puts the patient’s safety at risk. This paper presents an ontology that specifies the relation among technological context, quality of clinical data, and patient treatment. The presented ontology provides a formal way to represent the knowledge to specify the effect of technological context variations in the clinical data quality and the impact of the clinical data quality on a patient’s treatment. Accordingly, this ontology is the foundation for a quality of data framework that enables the development of telemedicine systems that are capable of adapting the treatment when the quality of the clinical data degrades, and thus guaranteeing patients’ safety even when technological context varies. PMID:27170903

  12. MUSTANG: A Community-Facing Web Service to Improve Seismic Data Quality Awareness Through Metrics

    NASA Astrophysics Data System (ADS)

    Templeton, M. E.; Ahern, T. K.; Casey, R. E.; Sharer, G.; Weertman, B.; Ashmore, S.

    2014-12-01

    IRIS DMC is engaged in a new effort to provide broad and deep visibility into the quality of data and metadata found in its terabyte-scale geophysical data archive. Taking advantage of large and fast disk capacity, modern advances in open database technologies, and nimble provisioning of virtual machine resources, we are creating an openly accessible treasure trove of data measurements for scientists and the general public to utilize in providing new insights into the quality of this data. We have branded this statistical gathering system MUSTANG, and have constructed it as a component of the web services suite that IRIS DMC offers. MUSTANG measures over forty data metrics addressing issues with archive status, data statistics and continuity, signal anomalies, noise analysis, metadata checks, and station state of health. These metrics could potentially be used both by network operators to diagnose station problems and by data users to sort suitable data from unreliable or unusable data. Our poster details what MUSTANG is, how users can access it, what measurements they can find, and how MUSTANG fits into the IRIS DMC's data access ecosystem. Progress in data processing, approaches to data visualization, and case studies of MUSTANG's use for quality assurance will be presented. We want to illustrate what is possible with data quality assurance, the need for data quality assurance, and how the seismic community will benefit from this freely available analytics service.
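
    A sketch of how a user might pull one MUSTANG metric over the web service interface; the endpoint and parameter names follow the service's published pattern but should be verified against the current IRIS documentation before use:

```python
import requests

# Query the MUSTANG measurements service for one metric at one station.
url = "http://service.iris.edu/mustang/measurements/1/query"
params = {
    "metric": "percent_availability",  # one of the ~40 MUSTANG metrics
    "net": "IU", "sta": "ANMO", "cha": "BHZ",
    "format": "text",
}
resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()
print(resp.text[:500])  # header line plus one row per measurement window
```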

  13. Quality Control for Interviews to Obtain Dietary Recalls from Children for Research Studies

    PubMed Central

    SHAFFER, NICOLE M.; THOMPSON, WILLIAM O.; BAGLIO, MICHELLE L.; GUINN, CAROLINE H.; FRYE, FRANCESCA H. A.

    2005-01-01

    Quality control is an important aspect of a study because the quality of data collected provides a foundation for the conclusions drawn from the study. For studies that include interviews, establishing quality control for interviews is critical in ascertaining whether interviews are conducted according to protocol. Despite the importance of quality control for interviews, few studies adequately document the quality control procedures used during data collection. This article reviews quality control for interviews and describes methods and results of quality control for interviews from two of our studies regarding the accuracy of children's dietary recalls; the focus is on quality control regarding interviewer performance during the interview, and examples are provided from studies with children. For our two studies, every interview was audio recorded and transcribed. The audio recording and typed transcript from one interview conducted by each research dietitian either weekly or daily were randomly selected and reviewed by another research dietitian, who completed a standardized quality control for interviews checklist. Major strengths of the methods of quality control for interviews in our two studies include: (a) interviews obtained for data collection were randomly selected for quality control for interviews, and (b) quality control for interviews was assessed on a regular basis throughout data collection. The methods of quality control for interviews described may help researchers design appropriate methods of quality control for interviews for future studies. PMID:15389417

  14. 78 FR 42548 - Comment Request for Information Collection for the Benefits, Timeliness, and Quality Data...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-16

    ... BTQ data measure the timeliness and quality of states' administrative actions and administrative... [A burden-estimate table for the Quality, Core Measure collections (e.g., 9056 Nonmonetary Determination; 9057 Lower Authority Appeals) appears at this point in the original notice but does not survive extraction.]

  15. Detecting, reporting, and analysis of priority diseases for routine public health surveillance in Liberia.

    PubMed

    Frimpong, Joseph Asamoah; Amo-Addae, Maame Pokuah; Adewuyi, Peter Adebayo; Hall, Casey Daniel; Park, Meeyoung Mattie; Nagbe, Thomas Knue

    2017-01-01

    Public health officials depend on timely, complete, and accurate surveillance data for decision making. The quality of data generated from surveillance is highly dependent on external and internal factors that may either impede or enhance surveillance activities. One way of identifying challenges affecting the quality of the data generated is to conduct a data quality audit. This case study, based on an audit conducted by residents of the Liberia Frontline Field Epidemiology Training Program, was designed as a classroom simulation of a data quality audit in a health facility. It is suited to reinforcing theoretical lectures on surveillance data quality and auditing. The target group is public health trainees, who should be able to complete this exercise in approximately 2 hours and 30 minutes.

  16. Identifying and attributing common data quality problems: temperature and precipitation observations in Bolivia and Peru

    NASA Astrophysics Data System (ADS)

    Hunziker, Stefan; Gubler, Stefanie; Calle, Juan; Moreno, Isabel; Andrade, Marcos; Velarde, Fernando; Ticona, Laura; Carrasco, Gualberto; Castellón, Yaruska; Oria Rojas, Clara; Brönnimann, Stefan; Croci-Maspoli, Mischa; Konzelmann, Thomas; Rohrer, Mario

    2016-04-01

    Assessing climatological trends and extreme events requires high-quality data. However, for many regions of the world, observational data of the desired quality are not available. In order to eliminate errors in the data, quality control (QC) should be applied before data analysis. If the data still contain undetected errors and quality problems after QC, the consequence may be misleading and erroneous results. A region which is seriously affected by observational data quality problems is the Central Andes. At the same time, climatological information on ongoing climate change and climate risks is of utmost importance in this area due to its vulnerability to meteorological extreme events and climatic changes. Besides data quality issues, the lack of metadata and the low station network density complicate quality control and assessment, and hence the appropriate application of the data. Errors and data problems may occur at any point in the data generation chain, e.g., due to unsuitable station configuration or siting, poor station maintenance, erroneous instrument reading, or inaccurate data digitalization and post-processing. Different measurement conditions in the predominantly conventional station networks of Bolivia and Peru, compared to the mostly automated networks in, for example, Europe or North America, may cause different types of errors. Hence, applying QC methods designed for state-of-the-art networks to Bolivian and Peruvian climate observations may not be suitable or sufficient. A comprehensive set of Bolivian and Peruvian maximum and minimum temperature and precipitation in-situ measurements was analyzed to detect and describe common data quality problems. Furthermore, station visits and reviews of the original documents were carried out. Some of the errors could be attributed to a specific source. Such information is of great importance for data users, since it allows them to decide for which applications the data can still be used. In ideal cases, it may even allow the error to be corrected. Strategies on how to deal with data from the Central Andes will be suggested; moreover, the approach may be applicable to networks in other countries where the conditions of climate observation are comparable.
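
    Typical automatable checks of the kind such QC relies on, as a rough sketch; the thresholds and column names are illustrative, and real QC for conventional networks needs station-specific tuning:

```python
import pandas as pd

def basic_qc_flags(df: pd.DataFrame) -> pd.DataFrame:
    """Flag common problems in daily Tmax/Tmin/precipitation records."""
    flags = pd.DataFrame(index=df.index)
    flags["tmax_lt_tmin"] = df["tmax"] < df["tmin"]          # physically impossible
    flags["precip_negative"] = df["precip"] < 0
    flags["tmax_stuck"] = df["tmax"].rolling(5).std() == 0   # repeated values
    flags["tmax_outlier"] = (df["tmax"] - df["tmax"].mean()).abs() > 4 * df["tmax"].std()
    return flags

df = pd.DataFrame({"tmax":   [21.0, 22.5, 19.8, 15.2, 20.1],
                   "tmin":   [ 8.0, 24.0,  7.5,  6.9,  7.7],   # day 2 is suspect
                   "precip": [ 0.0,  3.2, -0.1,  0.0, 12.4]})  # day 3 is suspect
print(basic_qc_flags(df))
```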

  17. Are performance indicators used for hospital quality management: a qualitative interview study amongst health professionals and quality managers in The Netherlands.

    PubMed

    Botje, Daan; Ten Asbroek, Guus; Plochg, Thomas; Anema, Helen; Kringos, Dionne S; Fischer, Claudia; Wagner, Cordula; Klazinga, Niek S

    2016-10-13

    Hospitals are under increasing pressure to share indicator-based performance information. These indicators can also serve as a means to promote quality improvement and boost hospital performance. Our aim was to explore hospitals' use of performance indicators for internal quality management activities. We conducted a qualitative interview study among 72 health professionals and quality managers in 14 acute care hospitals in The Netherlands. Concentrating on orthopaedic and oncology departments, our goal was to gain insight into data collection and the use of performance indicators for two conditions: knee and hip replacement surgery and breast cancer surgery. The semi-structured interviews were recorded and summarised. Based on the data, themes were synthesised, and the analyses were executed systematically by two analysts independently. The findings were validated through comparison. The hospitals we investigated collect data for performance indicators in different ways. Similarly, these hospitals have different ways of using such data to support their quality management, while some do not seem to use the data for this purpose at all. Factors like 'linking pin champions', pro-active quality managers, and engaged medical specialists seem to make a difference. In addition, a comprehensive hospital data infrastructure with electronic patient records and robust data collection software appears to be a prerequisite for producing reliable external performance indicators for internal quality improvement. Hospitals often fail to use performance indicators as a means to support internal quality management; such data, then, are not used to their full potential. Hospitals are recommended to focus their human resource policy on 'linking pin champions', the engagement of professionals, and a pro-active quality manager, and to invest in a comprehensive data infrastructure. Furthermore, the differences in data collection processes between Dutch hospitals make it difficult to draw comparisons between the outcomes of performance indicators.

  18. Information Quality as a Foundation for User Trustworthiness of Earth Science Data.

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Moroni, D. F.; Ramapriyan, H.; Peng, G.

    2017-12-01

    Information quality is multidimensional. Four different aspects of information quality can be defined based on the lifecycle stages of Earth science data products: science, product, stewardship, and services. With increasing requirements for ensuring and improving information quality coming from multiple government agencies and throughout industry, there have been considerable efforts toward improving information quality during the last decade, much of which had not been well vetted in a collective sense until recently. Given this rich background of prior work, the Information Quality Cluster (IQC), established within the Federation of Earth Science Information Partners (ESIP) in 2011 and reactivated in the summer of 2014, has been active with membership from multiple organizations. The IQC's objectives and activities, aimed at ensuring and improving information quality for Earth science data and products, are also considered vital to improving the trustworthiness of Earth science data for a vast and interdisciplinary community of data users. During 2016, several members of the IQC led the development and assessment of four use cases. This was followed in 2017 by multiple panel sessions at the 2017 Winter and Summer ESIP Meetings to survey the challenges posed by the various aspects of information quality. What was discovered to be most lacking are the transparency of data lineage (i.e., provenance and maturity), uniform methods for uncertainty characterization, and uniform quality assurance data and metadata. While solutions to these types of issues exist, most data producers have little time to investigate and collaborate to arrive at, and conform to, a consensus approach. The IQC has positioned itself as a community platform to bring together all relevant stakeholders: data producers, repositories, program managers, and end users. A combination of well-vetted and "trailblazing" solutions is presented to address how data trustworthiness can be elevated and maintained through optimized extraction, curation, and dissemination of information quality artifacts.
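
    A hypothetical sketch of a per-dataset quality record spanning the four lifecycle aspects named above; all keys and values are invented for illustration:

```python
# Hypothetical per-dataset quality record; keys and values are illustrative.
quality_record = {
    "science":     {"validation": "matchups vs. in situ buoys", "bias": "+0.1 K"},
    "product":     {"format_checks": "CF-compliant", "completeness": "99.4%"},
    "stewardship": {"maturity": "level 3 of a maturity matrix",
                    "provenance": "full lineage captured"},
    "services":    {"access": "HTTPS, OPeNDAP", "documentation": "user guide"},
}
for aspect, details in quality_record.items():
    print(f"{aspect:<12} {details}")
```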

  19. Assessing the impact of continuous quality improvement/total quality management: concept versus implementation.

    PubMed Central

    Shortell, S M; O'Brien, J L; Carman, J M; Foster, R W; Hughes, E F; Boerstler, H; O'Connor, E J

    1995-01-01

    OBJECTIVE: This study examines the relationships among organizational culture, quality improvement processes and selected outcomes for a sample of up to 61 U.S. hospitals. DATA SOURCES AND STUDY SETTING: Primary data were collected from 61 U.S. hospitals (located primarily in the midwest and the west) on measures related to continuous quality improvement/total quality management (CQI/TQM), organizational culture, implementation approaches, and degree of quality improvement implementation based on the Baldrige Award criteria. These data were combined with independently collected data on perceived impact and objective measures of clinical efficiency (i.e., charges and length of stay) for six clinical conditions. STUDY DESIGN: The study involved cross-sectional examination of the named relationships. DATA COLLECTION/EXTRACTION METHODS: Reliable and valid scales for the organizational culture and quality improvement implementation measures were developed based on responses from over 7,000 individuals across the 61 hospitals, with an overall completion rate of 72 percent. Independent data on perceived impact were collected from a national survey, and independent data on clinical efficiency from a companion study of managed care. PRINCIPAL FINDINGS: A participative, flexible, risk-taking organizational culture was significantly related to quality improvement implementation. Quality improvement implementation, in turn, was positively associated with greater perceived patient outcomes and human resource development. Larger hospitals experienced lower clinical efficiency, reflected in higher charges and longer lengths of stay, due in part to having more bureaucratic and hierarchical cultures that serve as a barrier to quality improvement implementation. CONCLUSIONS: What really matters is whether or not a hospital has a culture that supports quality improvement work and an approach that encourages flexible implementation. Larger hospitals face more difficult challenges in this regard. PMID:7782222

  20. Quality control system preparation for photogrammetric and laser scanning missions of the Spanish national plan of aerial orthophotography (PNOA). (Polish Title: Opracowanie systemu kontroli jakości realizacji nalotów fotogrametrycznych i skaningowych dla hiszpańskiego narodowego planu ortofotomapy lotniczej (PNOA))

    NASA Astrophysics Data System (ADS)

    Rzonca, A.

    2013-12-01

    The paper presents the state of the art of quality control for photogrammetric and laser scanning data captured by airborne sensors. The subject is very important for the execution of photogrammetric and LiDAR projects, because the quality of the captured data largely determines, a priori, the quality of the final product. At the same time, a precise and effective quality control process allows missions to be executed without a wide margin of safety, especially for projects in mountain areas. As an introduction, the author presents the theoretical background of quality control, based on his own experience, instructions, and technical documentation, and describes several organizational variants. Basically, there are two main approaches: quality control of the captured data, and control of discrepancies between the flight plan and its execution. Both can rely on automated tests and on manual analysis of the data: a test is an automatic algorithm that checks the data and generates a control report, whereas analysis is a simpler process based on manual inspection of documentation, data, and metadata. An example of a quality control system for a large-area project is presented. The project, the National Plan of Aerial Orthophotography (Plan Nacional de Ortofotografía Aérea, PNOA), is carried out periodically for the whole territory of Spain. The internal control system delivers its results soon after the flight and informs the company's flight team; this allows errors to be corrected shortly after the flight and can stop the transfer of the data to another team or company for further processing. The described system of data quality control comprises geometric and radiometric control of photogrammetric data and geometric control of LiDAR data; it checks all specified parameters and generates reports, which are very helpful in the case of errors or low-quality data. The paper draws on the author's experience in the field of data quality control and presents conclusions and suggestions on organizational and technical aspects, with a short definition of the necessary control software.
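
    As an illustration of the flight-plan discrepancy control described above, the following Python sketch compares planned and executed exposure positions and flags deviations beyond a tolerance. It is a minimal sketch: the tolerance, the coordinate handling, and all names are hypothetical and not taken from the PNOA system.

      # Hypothetical flight-plan discrepancy check (not the PNOA software).
      import math

      TOLERANCE_M = 30.0  # assumed maximum allowed deviation, in metres

      def deviation(planned, executed):
          """Planar distance between a planned and an executed exposure point."""
          return math.hypot(executed[0] - planned[0], executed[1] - planned[1])

      def control_report(plan, flight):
          """Compare exposure lists (easting, northing) and list out-of-tolerance points."""
          return [(i, round(deviation(p, e), 1))
                  for i, (p, e) in enumerate(zip(plan, flight))
                  if deviation(p, e) > TOLERANCE_M]

      plan = [(500000.0, 4500000.0), (500400.0, 4500000.0)]
      flight = [(500010.0, 4500005.0), (500450.0, 4500020.0)]
      print(control_report(plan, flight))  # -> [(1, 53.9)]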

  1. 42 CFR 480.114 - Limitation on data collection.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Qio Access to Information § 480.114 Limitation on data collection. A QIO or any agent, organization, or institution acting...

  2. 42 CFR 480.114 - Limitation on data collection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION REVIEW INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Qio Access to Information § 480.114 Limitation on data collection. A QIO or any agent, organization, or...

  3. 42 CFR 480.114 - Limitation on data collection.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Qio Access to Information § 480.114 Limitation on data collection. A QIO or any agent, organization, or institution acting...

  4. 42 CFR 480.143 - QIO involvement in shared health data systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION REVIEW INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Disclosure of Confidential Information § 480.143 QIO involvement in shared health data...

  5. 42 CFR 480.114 - Limitation on data collection.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Qio Access to Information § 480.114 Limitation on data collection. A QIO or any agent, organization, or institution acting...

  6. 42 CFR 480.143 - QIO involvement in shared health data systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION REVIEW INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Disclosure of Confidential Information § 480.143 QIO involvement in shared health data...

  7. 42 CFR 480.114 - Limitation on data collection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION REVIEW INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) Qio Access to Information § 480.114 Limitation on data collection. A QIO or any agent, organization, or...

  8. DATA QUALITY OBJECTIVES AND MEASUREMENT QUALITY OBJECTIVES FOR RESEARCH PROJECTS

    EPA Science Inventory

    The paper provides assistance to those working on research projects with systematic planning using measurement quality objectives. These performance criteria are more familiar to researchers than data quality objectives because they are more closely associated with the measuremen...

  9. Data Quality Control of the French Permanent Broadband Network in the RESIF Framework

    NASA Astrophysics Data System (ADS)

    Grunberg, Marc; Lambotte, Sophie; Engels, Fabien; Dretzen, Remi; Hernandez, Alain

    2014-05-01

    In the framework of the RESIF (Réseau Sismologique et géodésique Français) project, a new information system is being set up to improve the management and distribution of high-quality data from the different elements of RESIF and the associated networks. Within this information system, EOST (in Strasbourg) is in charge of collecting real-time permanent broadband seismic waveforms and performing quality control on these data. The real-time and validated data sets are pushed to the French National Distribution Center (Isterre/Grenoble) to make them publicly available. Furthermore, EOST hosts the BCSF-ReNaSS, in charge of the French metropolitan seismic bulletin, which allows the quality control to benefit from high-end checks based on the national and worldwide seismicity. Here we present first the real-time seismic data flow from the stations of the French National Broadband Network to EOST, and then the data quality control procedures that were recently installed, including some new developments. The data quality control consists of a variety of subprocesses that check the consistency of the whole system and processing chain, from the stations to the data center. This allows us to verify that instruments and data transmission are operating correctly; in addition, analysis of the ambient noise helps to characterize the intrinsic seismic quality of the stations and to identify other kinds of disturbances. The deployed quality control is a pipeline that starts with low-level procedures: checking the real-time miniSEED data files (file naming convention, data integrity), checking for inconsistencies between waveforms and metadata (channel name, sample rate, etc.), and computing waveform statistics (data availability, gaps/overlaps, mean, RMS, timing quality, spikes). It is followed by high-level procedures such as power spectral density (PSD) computation, STA/LTA computation to be correlated with the seismicity, phase picking, and station magnitude discrepancies. The results of quality control are visualized through a web interface, which gathers data from different information systems to provide a global view of recent events that could impact the data (such as interventions on site or seismic events). This work is still an ongoing project; we intend to add more sophisticated procedures to enhance our data quality control, among them a seismic moment tensor inversion tool for amplitude, time, and polarity control, and a noise correlation procedure for time-drift detection.
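
    A minimal sketch of the low-level waveform checks described above (gap detection, basic statistics, and a crude spike count), written in Python with ObsPy. The file name and thresholds are illustrative; the actual EOST pipeline is more elaborate.

      # Illustrative low-level QC checks, loosely following the pipeline above;
      # this is not the EOST implementation.
      import numpy as np
      from obspy import read

      st = read("station_day.mseed")  # hypothetical daily miniSEED file

      # Gap/overlap check: each entry describes one gap or overlap in the stream.
      for gap in st.get_gaps():
          print("gap/overlap:", gap)

      for tr in st:
          data = tr.data.astype(np.float64)
          rms = np.sqrt(np.mean(data ** 2))
          spikes = int(np.sum(np.abs(data) > 10 * rms))  # crude spike flag
          expected = (tr.stats.endtime - tr.stats.starttime) * tr.stats.sampling_rate + 1
          availability = 100.0 * tr.stats.npts / expected
          print(tr.id, f"mean={data.mean():.2f}", f"rms={rms:.2f}",
                f"spikes={spikes}", f"availability={availability:.1f}%")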

  10. The data quality analyzer: A quality control program for seismic data

    NASA Astrophysics Data System (ADS)

    Ringler, A. T.; Hagerty, M. T.; Holland, J.; Gonzales, A.; Gee, L. S.; Edwards, J. D.; Wilson, D.; Baker, A. M.

    2015-03-01

    The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several initiatives underway to enhance and track the quality of data produced from ASL seismic stations and to improve communication about data problems to the user community. The Data Quality Analyzer (DQA) is one such development and is designed to characterize seismic station data quality in a quantitative and automated manner. The DQA consists of a metric calculator, a PostgreSQL database, and a Web interface: The metric calculator, SEEDscan, is a Java application that reads and processes miniSEED data and generates metrics based on a configuration file. SEEDscan compares hashes of metadata and data to detect changes in either and performs subsequent recalculations as needed. This ensures that the metric values are up to date and accurate. SEEDscan can be run as a scheduled task or on demand. The PostgreSQL database acts as a central hub where metric values and limited station descriptions are stored at the channel level with one-day granularity. The Web interface dynamically loads station data from the database and allows the user to make requests for time periods of interest, review specific networks and stations, plot metrics as a function of time, and adjust the contribution of various metrics to the overall quality grade of the station. The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a "grade" for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
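
    The adjustable metric weighting described above can be pictured as a weighted average over per-metric scores. The sketch below is schematic: the metric names, scores, and weights are invented for illustration and are not the DQA's actual scheme.

      # Hypothetical weighted aggregation of QC metrics into a station grade.
      def station_grade(metrics, weights):
          """Weighted average of metric scores (each 0-100), skipping unweighted metrics."""
          used = [m for m in metrics if m in weights]
          if not used:
              raise ValueError("no weighted metrics available")
          total = sum(weights[m] for m in used)
          return sum(metrics[m] * weights[m] for m in used) / total

      metrics = {"availability": 99.2, "gap_count": 95.0,
                 "timing_quality": 88.5, "noise_model_deviation": 72.0}
      weights = {"availability": 2.0, "gap_count": 1.0,
                 "timing_quality": 1.5, "noise_model_deviation": 1.0}
      print(round(station_grade(metrics, weights), 1))  # -> 90.6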

  11. The Effect of Structural Quality on Fatigue Life in 319 Aluminum Alloy Castings

    NASA Astrophysics Data System (ADS)

    Özdeş, Hüseyin; Tiryakioğlu, Murat

    2017-02-01

    Tensile and fatigue life data for 319 aluminum alloy from seventeen datasets reported in four independent studies in the literature have been reanalyzed. Analysis of the fatigue life data involved mean-stress correction for the different R ratios used in fatigue testing, inclusion of survival (runout) data along with failure data, as well as volumetric correction of the Weibull distributions for the different specimen sizes used in these studies. Tensile data have been transformed into the structural quality index, Q_T, which is used as a measure of the structural quality of castings. A distinct relationship has been observed between the expected fatigue life and the mean quality index. Moreover, fatigue strengths at 10^4 and 10^6 cycles have been found to increase with the quality index, providing further evidence of the relationship between structural quality and fatigue performance. Empirical equations between the Basquin parameters and the structural quality index have been developed. The use of the comprehensive methodology to estimate fatigue life is demonstrated with an example.
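
    For context, the Basquin parameters mentioned above come from the standard stress-life relationship, given here in its common textbook form (the paper's empirical link between these parameters and Q_T is not reproduced):

      \sigma_a = \sigma_f' \, (2 N_f)^b

    where \sigma_a is the stress amplitude, N_f the number of cycles to failure, \sigma_f' the fatigue strength coefficient, and b the Basquin exponent.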

  12. Quality assurance of weather data for agricultural system model input

    USDA-ARS?s Scientific Manuscript database

    It is well known that crop production and hydrologic variation on watersheds is weather related. Rarely, however, is meteorological data quality checks reported for agricultural systems model research. We present quality assurance procedures for agricultural system model weather data input. Problems...

  13. HOW GOOD ARE MY DATA? INFORMATION QUALITY ASSESSMENT METHODOLOGY

    EPA Science Inventory


    Quality assurance techniques used in software development and hardware maintenance/reliability help to ensure that data in a computerized information management system are maintained well. However, information workers may not know the quality of the data resident in their inf...

  14. 43 CFR 3430.4-4 - Environmental costs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... and analyzing baseline data on surface water quality and quantity (collecting and analyzing samples...). (2) Groundwater—costs of collecting and analyzing baseline data on groundwater quality and quantity... analyzing baseline air quality data (purchasing rain, air direction, and wind gauges and air samplers and...

  15. 43 CFR 3430.4-4 - Environmental costs.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... and analyzing baseline data on surface water quality and quantity (collecting and analyzing samples...). (2) Groundwater—costs of collecting and analyzing baseline data on groundwater quality and quantity... analyzing baseline air quality data (purchasing rain, air direction, and wind gauges and air samplers and...

  16. 43 CFR 3430.4-4 - Environmental costs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... and analyzing baseline data on surface water quality and quantity (collecting and analyzing samples...). (2) Groundwater—costs of collecting and analyzing baseline data on groundwater quality and quantity... analyzing baseline air quality data (purchasing rain, air direction, and wind gauges and air samplers and...

  17. 43 CFR 3430.4-4 - Environmental costs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... and analyzing baseline data on surface water quality and quantity (collecting and analyzing samples...). (2) Groundwater—costs of collecting and analyzing baseline data on groundwater quality and quantity... analyzing baseline air quality data (purchasing rain, air direction, and wind gauges and air samplers and...

  18. Streamflow and Water-Quality Data for Three Major Tributaries to Reelfoot Lake, West Tennessee, October 1987-March 1988

    DTIC Science & Technology

    1988-01-01

    Streamflow and Water-Quality Data for Three Major Tributaries to Reelfoot Lake, West Tennessee, October 1987-March 1988. (Only the report title and dates are recoverable from the OCR-damaged documentation page.)

  19. Improving the Quality of Positive Datasets for the Establishment of Machine Learning Models for pre-microRNA Detection.

    PubMed

    Demirci, Müşerref Duygu Saçar; Allmer, Jens

    2017-07-28

    MicroRNAs (miRNAs) are involved in the post-transcriptional regulation of protein abundance and thus have a great impact on the resulting phenotype. It is, therefore, no wonder that they have been implicated in many diseases ranging from virus infections to cancer. This impact on the phenotype leads to a great interest in establishing the miRNAs of an organism. Experimental methods are complicated, which has led to the development of computational methods for pre-miRNA detection. Such methods generally employ machine learning to establish models for the discrimination between miRNAs and other sequences. Positive training data for model establishment stem, for the most part, from miRBase, the miRNA registry. The quality of the entries in miRBase has been questioned, though. This unknown quality led to the development of filtering strategies in attempts to produce high-quality positive datasets, which can lead to a scarcity of positive data. To analyze the quality of filtered data, we developed a machine learning model and found that it is well able to assess data quality based on intrinsic measures. Additionally, we analyzed which features describing pre-miRNAs could discriminate between low- and high-quality data. Both models are applicable to data from miRBase and can be used for establishing high-quality positive data. This will facilitate the development of better miRNA detection tools, which will make the prediction of miRNAs in disease states more accurate. Finally, we applied both models to all miRBase data and provide the list of high-quality hairpins.
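
    A minimal sketch of the general approach, training a classifier on sequence-derived features to separate high-quality from low-quality hairpin entries, using scikit-learn. The features and labels below are synthetic placeholders, not the study's actual descriptors or data.

      # Illustrative hairpin-quality classifier in the spirit of the study's approach.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      # Hypothetical per-hairpin features, e.g. [MFE, GC fraction, loop length, stem pairs]
      X = rng.normal(size=(200, 4))
      y = rng.integers(0, 2, size=200)  # 1 = high-quality entry, 0 = low-quality

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      scores = cross_val_score(clf, X, y, cv=5)
      print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))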

  20. Quality Measures for Hospice and Palliative Care: Piloting the PEACE Measures

    PubMed Central

    Rokoske, Franziska S.; Durham, Danielle; Cagle, John G.; Hanson, Laura C.

    2014-01-01

    Background: The Carolinas Center for Medical Excellence launched the PEACE project in 2006, under contract with the Centers for Medicare & Medicaid Services (CMS), to identify, develop, and pilot test quality measures for hospice and palliative care programs. Objectives: The project collected pilot data to test the usability and feasibility of potential quality measures and data collection processes for hospice and palliative care programs. Settings/subjects: Twenty-two hospices participating in a national Quality Improvement Collaborative (QIC) submitted data from 367 chart reviews for pain care and 45 chart reviews for nausea care. Fourteen additional hospices completed a one-time data submission of 126 chart reviews on 60 potential patient-level quality measures across eight domains of care and an organizational assessment evaluating structure and processes of care. Design: Usability was assessed by examining the range, variability and size of the populations targeted by each quality measure. Feasibility was assessed during the second pilot study by surveying data abstractors about the abstraction process and examining the rates of missing data. The impact of data collection processes was assessed by comparing results obtained using different processes. Results: Measures shown to be both usable and feasible included: screening for physical symptoms on admission and documentation of treatment preferences. Methods of data collection and measure construction appear to influence observed rates of quality of care. Conclusions: We successfully identified quality measures with potential for use in hospices and palliative care programs. Future research is needed to understand whether these measures are sensitive to quality improvement interventions. PMID:24921162

  1. Quality of Big Data in Healthcare

    DOE PAGES

    Sukumar, Sreenivas R.; Ramachandran, Natarajan; Ferrell, Regina Kay

    2015-01-01

    The current trend in Big Data Analytics and in particular Health information technology is towards building sophisticated models, methods and tools for business, operational and clinical intelligence, but the critical issue of data quality required for these models is not getting the attention it deserves. The objective of the paper is to highlight the issues of data quality in the context of Big Data Healthcare Analytics.

  2. Quality of Big Data in Healthcare

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R.; Ramachandran, Natarajan; Ferrell, Regina Kay

    The current trend in Big Data Analytics and in particular Health information technology is towards building sophisticated models, methods and tools for business, operational and clinical intelligence, but the critical issue of data quality required for these models is not getting the attention it deserves. The objective of the paper is to highlight the issues of data quality in the context of Big Data Healthcare Analytics.

  3. Hydrologic and water-quality data, Honey Creek State Natural Area, Comal County, Texas, August 2001-September 2003

    USGS Publications Warehouse

    Slattery, Richard N.; Furlow, Allen L.; Ockerman, Darwin J.

    2006-01-01

    The U.S. Geological Survey collected rainfall, streamflow, evapotranspiration, and rainfall and stormflow water-quality data from seven sites in two adjacent watersheds in the Honey Creek State Natural Area, Comal County, Texas, during August 2001–September 2003, in cooperation with the U.S. Department of Agriculture, Natural Resources Conservation Service, and the San Antonio Water System. Data collected during this period represent baseline hydrologic and water-quality conditions before proposed removal of ashe juniper (Juniperus ashei) from one of the two watersheds. Juniper removal is intended as a best-management practice to increase water quantity (aquifer recharge and streamflow) and to protect water quality. Continuous (5-minute interval) rainfall data are collected at four sites; continuous (5-minute interval) streamflow data are collected at three sites. Fifteen-minute averages of meteorological and solar-energy-related data recorded at two sites are used to compute moving 30-minute evapotranspiration values on the basis of the energy-balance Bowen ratio method. Periodic rainfall water-quality data are collected at one site and stormflow water-quality data at three sites. Daily rainfall, streamflow, and evapotranspiration totals are presented in tables; detailed data are listed in an appendix. Results of analyses of the periodic rainfall and stormflow water-quality samples collected during runoff events are summarized in the appendix; not all data types were collected at all sites nor were all data types collected during the entire 26-month period.
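
    For reference, the energy-balance Bowen ratio method cited above partitions the available energy using the Bowen ratio; in its standard textbook form (not quoted from the report):

      \beta = \gamma \, \frac{\Delta T}{\Delta e}, \qquad LE = \frac{R_n - G}{1 + \beta}

    where R_n is net radiation, G the soil heat flux, \Delta T and \Delta e the temperature and vapor-pressure differences between two measurement heights, \gamma the psychrometric constant, and LE the latent-heat flux, which is converted to an evapotranspiration depth by dividing by the latent heat of vaporization.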

  4. Quality Assurance of Real-Time Oceanographic Data from the Cabled Array of the Ocean Observatories Initiative

    NASA Astrophysics Data System (ADS)

    Kawka, O. E.; Nelson, J. S.; Manalang, D.; Kelley, D. S.

    2016-02-01

    The Cabled Array component of the NSF-funded Ocean Observatories Initiative (OOI) provides access to real-time physical, chemical, geological, and biological data from water column and seafloor platforms/instruments at sites spanning the southern half of the Juan de Fuca Plate. The Quality Assurance (QA) program for OOI data is designed to ensure that data products meet OOI science requirements. This overall data QA plan establishes the guidelines for assuring OOI data quality and summarizes Quality Control (QC) protocols and procedures, based on best practices, which can be utilized to ensure the highest quality data across the OOI program. This presentation will highlight, specifically, the QA/QC approach being utilized for the OOI Cabled Array infrastructure and data and will include a summary of both shipboard and shore-based protocols currently in use. Aspects addressed will be pre-deployment instrument testing and calibration checks, post-deployment and pre-recovery field verification of data, and post-recovery "as-found" testing of instruments. Examples of QA/QC data will be presented and specific cases of cabled data will be discussed in the context of quality assessments and adjustment/correction of OOI datasets overall for inherent sensor drift and/or instrument fouling.
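
    As a simple illustration of the drift adjustment mentioned above, the sketch below removes a linear drift estimated from pre-deployment and post-recovery calibration checks. It is entirely schematic and is not the OOI's actual adjustment procedure.

      # Schematic linear drift correction between two calibration checks.
      import numpy as np

      def correct_linear_drift(t, values, t0, offset0, t1, offset1):
          """Subtract a linearly interpolated sensor offset.

          t       : sample times (seconds since deployment)
          offset0 : calibration offset measured at time t0 (pre-deployment)
          offset1 : calibration offset measured at time t1 (post-recovery)
          """
          drift = offset0 + (offset1 - offset0) * (t - t0) / (t1 - t0)
          return values - drift

      t = np.linspace(0, 86400 * 30, 5)  # one month of samples
      raw = np.array([10.00, 10.05, 10.11, 10.14, 10.21])
      print(correct_linear_drift(t, raw, 0.0, 0.0, 86400 * 30, 0.20))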

  5. Development and Validation of a High-Quality Composite Real-World Mortality Endpoint.

    PubMed

    Curtis, Melissa D; Griffith, Sandra D; Tucker, Melisa; Taylor, Michael D; Capra, William B; Carrigan, Gillis; Holzman, Ben; Torres, Aracelis Z; You, Paul; Arnieri, Brandon; Abernethy, Amy P

    2018-05-14

    To create a high-quality electronic health record (EHR)-derived mortality dataset for retrospective and prospective real-world evidence generation. Oncology EHR data, supplemented with external commercial and US Social Security Death Index data, benchmarked to the National Death Index (NDI). We developed a recent, linkable, high-quality mortality variable amalgamated from multiple data sources to supplement EHR data, benchmarked against the NDI, the most complete source of U.S. mortality data. Data quality of the mortality variable version 2.0 is reported here. For advanced non-small-cell lung cancer, sensitivity of mortality information improved from 66 percent in EHR structured data to 91 percent in the composite dataset, with high date agreement compared to the NDI. For advanced melanoma, metastatic colorectal cancer, and metastatic breast cancer, sensitivity of the final variable was 85 to 88 percent. Kaplan-Meier survival analyses showed that improving mortality data completeness minimized overestimation of survival relative to NDI-based estimates. For EHR-derived data to yield reliable real-world evidence, it needs to be of known and sufficiently high quality. Considering the impact of mortality data completeness on survival endpoints, we highlight the importance of data quality assessment and advocate benchmarking to the NDI. © 2018 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
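
    Sensitivity here is the share of benchmark (NDI) deaths captured by the composite variable. A toy version of that computation, together with a date-agreement check, might look like the following; the record structure and the 15-day agreement window are assumptions, not the study's definitions.

      # Toy sensitivity and date-agreement computation against a benchmark death list.
      from datetime import date

      ndi = {"p1": date(2016, 3, 1), "p2": date(2016, 7, 9), "p3": date(2017, 1, 20)}
      composite = {"p1": date(2016, 3, 3), "p3": date(2017, 1, 20)}

      captured = [pid for pid in ndi if pid in composite]
      sensitivity = len(captured) / len(ndi)
      agree = [pid for pid in captured
               if abs((composite[pid] - ndi[pid]).days) <= 15]  # assumed window
      print(f"sensitivity={sensitivity:.0%}, date agreement={len(agree)}/{len(captured)}")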

  6. Water Resources Data, Georgia, 2002--Volume 1: Continuous water-level, streamflow, water-quality data, and periodic water-quality data, Water Year 2002

    USGS Publications Warehouse

    Hickey, Andrew C.; Kerestes, John F.; McCallum, Brian E.

    2002-01-01

    Water resources data for the 2002 water year for Georgia consists of records of stage, discharge, and water quality of streams; and the stage and contents of lakes and reservoirs published in two volumes in a digital format on a CD-ROM. Volume one of this report contains water resources data for Georgia collected during water year 2002, including: discharge records of 154 gaging stations; stage for 165 gaging stations; precipitation for 105 gaging stations; information for 20 lakes and reservoirs; continuous water-quality records for 27 stations; the annual peak stage and annual peak discharge for 72 crest-stage partial-record stations; and miscellaneous streamflow measurements at 50 stations, and miscellaneous water-quality data recorded by the NAWQA program in Georgia. Volume two of this report contains water resources data for Georgia collected during calendar year 2002, including continuous water-level records of 155 ground-water wells and periodic records at 132 water-quality stations. These data represent that part of the National Water Data System collected by the U.S. Geological Survey and cooperating State and Federal agencies in Georgia.

  7. Water Resources Data, Georgia, 2003, Volume 1: Continuous water-level, streamflow, water-quality data, and periodic water-quality data, Water Year 2003

    USGS Publications Warehouse

    Hickey, Andrew C.; Kerestes, John F.; McCallum, Brian E.

    2004-01-01

    Water resources data for the 2003 water year for Georgia consists of records of stage, discharge, and water quality of streams; and the stage and contents of lakes and reservoirs published in two volumes in a digital format on a CD-ROM. Volume one of this report contains water resources data for Georgia collected during water year 2003, including: discharge records of 163 gaging stations; stage for 187 gaging stations; precipitation for 140 gaging stations; information for 19 lakes and reservoirs; continuous water-quality records for 40 stations; the annual peak stage and annual peak discharge for 65 crest-stage partial-record stations; and miscellaneous streamflow measurements at 36 stations, and miscellaneous water-quality data at 162 stations in Georgia. Volume two of this report contains water resources data for Georgia collected during calendar year 2003, including continuous water-level records of 156 ground-water wells and periodic records at 130 water-quality stations. These data represent that part of the National Water Data System collected by the U.S. Geological Survey and cooperating State and Federal agencies in Georgia.

  8. 40 CFR 58.2 - Purpose.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... QUALITY SURVEILLANCE General Provisions § 58.2 Purpose. (a) This part contains requirements for measuring ambient air quality and for reporting ambient air quality data and related information. The monitoring criteria pertain to the following areas: (1) Quality assurance procedures for monitor operation and data...

  9. 40 CFR 58.2 - Purpose.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... QUALITY SURVEILLANCE General Provisions § 58.2 Purpose. (a) This part contains requirements for measuring ambient air quality and for reporting ambient air quality data and related information. The monitoring criteria pertain to the following areas: (1) Quality assurance procedures for monitor operation and data...

  10. 40 CFR 58.2 - Purpose.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... QUALITY SURVEILLANCE General Provisions § 58.2 Purpose. (a) This part contains requirements for measuring ambient air quality and for reporting ambient air quality data and related information. The monitoring criteria pertain to the following areas: (1) Quality assurance procedures for monitor operation and data...

  11. 40 CFR 58.2 - Purpose.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... QUALITY SURVEILLANCE General Provisions § 58.2 Purpose. (a) This part contains requirements for measuring ambient air quality and for reporting ambient air quality data and related information. The monitoring criteria pertain to the following areas: (1) Quality assurance procedures for monitor operation and data...

  12. Operational CryoSat Product Quality Assessment

    NASA Astrophysics Data System (ADS)

    Mannan, Rubinder; Webb, Erica; Hall, Amanda; Bouzinac, Catherine

    2013-12-01

    The performance and quality of the CryoSat data products are routinely assessed by the Instrument Data quality Evaluation and Analysis Service (IDEAS). This information is then conveyed to the scientific and user community in order to allow them to utilise CryoSat data with confidence. This paper presents details of the Quality Control (QC) activities performed for CryoSat products under the IDEAS contract. Details of the different QC procedures and tools deployed by IDEAS to assess the quality of operational data are presented. The latest updates to the Instrument Processing Facility (IPF) for the Fast Delivery Marine (FDM) products and the future update to Baseline-C are discussed.

  13. Water-quality data-collection activities in Colorado and Ohio; Phase II, Evaluation of 1984 field and laboratory quality-assurance practices

    USGS Publications Warehouse

    Childress, Carolyn J. Oblinger; Chaney, Thomas H.; Myers, Donna; Norris, J. Michael; Hren, Janet

    1987-01-01

    Serious questions have been raised by Congress about the usefulness of water-quality data for addressing issues of regional and national scope and, especially, for characterizing the current quality of the Nation's streams and ground water. In response, the U.S. Geological Survey has undertaken a pilot study in Colorado and Ohio to (1) determine the characteristics of current (1984) water-quality data-collection activities of Federal, regional, State, and local agencies, and academic institutions; and (2) determine how well the data from these activities, collected for various purposes and using different procedures, can be used to improve our ability to answer major broad-scope questions, such as:
    A. What are (or were) natural or near-natural water-quality conditions?
    B. What are existing water-quality conditions?
    C. How has water quality changed, and how do the changes relate to human activities?
    Colorado and Ohio were chosen for the pilot study largely because they represent regions with different types of water-quality concerns and programs. The study has been divided into three phases, the objectives of which are: Phase I--Inventory water-quality data-collection programs, including costs, and identify those programs that met a set of broad criteria for producing data that are potentially appropriate for water-quality assessments of regional and national scope. Phase II--Evaluate the quality assurance of field and laboratory procedures used in producing the data from programs that met the broad criteria of Phase I. Phase III--Compile the qualifying data and evaluate the adequacy of this data base for addressing selected water-quality questions of regional and national scope. Water-quality data are collected by a large number of organizations for diverse purposes ranging from meeting statutory requirements to research on water chemistry. Combining these individual data bases is an appealing and potentially cost-effective way to attempt to develop a data base adequate for regional or national water-quality assessments. However, to combine data from diverse sources, the field and laboratory procedures used to produce the data need to be equivalent and need to meet specific quality-assurance standards. It is these factors that are the focus of Phase II, which is described in this report. In the first phase of this study, an inventory was made of all public organizations and academic institutions that undertook water-quality data-collection activities in Colorado and Ohio in 1984. Water-quality programs identified in Phase I were tested against a set of broad screening criteria. A total of 44 water-quality programs in Colorado and 29 programs in Ohio passed the Phase-I screen and were examined in Phase II. These programs accounted for an estimated 165,000 analyses in Colorado and 76,300 analyses in Ohio for 20 selected constituents and properties. Although qualifying programs included both surface- and ground-water sampling, they emphasized surface waters and produced few ground-water analyses (3,660 for Colorado and 470 for Ohio). For Phase II, information about field and laboratory quality-assurance practices was provided by each organization and its supporting laboratories through questionnaires. This information was evaluated against a set of specific criteria for field and laboratory practices.
The criteria were developed from guidelines published by public agencies and professional organizations such as the American Public Health Association, the U.S. Environmental Protection Agency, and the U.S. Geological Survey. Each of the eight criteria that comprise the Phase-II screen falls into one of two major categories: field practices or laboratory practices.

  14. 2008 Niday Perinatal Database quality audit: report of a quality assurance project.

    PubMed

    Dunn, S; Bottomley, J; Ali, A; Walker, M

    2011-12-01

    This quality assurance project was designed to determine the reliability, completeness and comprehensiveness of the data entered into the Niday Perinatal Database. Quality of the data was measured by comparing data re-abstracted from the patient record to the original data entered into the Niday Perinatal Database. A representative sample of hospitals in Ontario was selected, and a random sample of 100 linked mother and newborn charts was audited for each site. A subset of 33 variables (representing 96 data fields) from the Niday dataset was chosen for re-abstraction. Of the data fields for which Cohen's kappa statistic or intraclass correlation coefficient (ICC) was calculated, 44% showed substantial or almost perfect agreement (beyond chance). However, about 17% showed less than 95% agreement and a kappa or ICC value of less than 60%, indicating only slight, fair or moderate agreement (beyond chance). Recommendations to improve the quality of these data fields are presented.
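
    Agreement beyond chance of the kind reported above is conventionally computed with Cohen's kappa. A minimal sketch with scikit-learn follows; the ratings are synthetic, not Niday audit data.

      # Cohen's kappa for two abstractions of the same categorical field.
      from sklearn.metrics import cohen_kappa_score

      original   = ["vaginal", "caesarean", "vaginal", "vaginal", "caesarean", "vaginal"]
      reabstract = ["vaginal", "caesarean", "caesarean", "vaginal", "caesarean", "vaginal"]

      kappa = cohen_kappa_score(original, reabstract)
      print(f"kappa = {kappa:.2f}")  # ~0.61-0.80 is conventionally "substantial" agreement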

  15. Water resources data Virginia water year 2005 Volume 1. Surface-water discharge and surface-water quality records

    USGS Publications Warehouse

    Wicklein, Shaun M.; Powell, Eugene D.; Guyer, Joel R.; Owens, Joseph A.

    2006-01-01

    Water-resources data for the 2005 water year for Virginia includes records of stage, discharge, and water quality of streams and stage, contents, and water quality of lakes and reservoirs. This volume contains records for water discharge at 172 gaging stations; stage only at 2 gaging stations; elevation at 2 reservoirs and 2 tide gages; contents at 1 reservoir, and water quality at 25 gaging stations. Also included are data for 50 crest-stage partial-record stations. Locations of these sites are shown on figures 4A-B and 5A-B. Miscellaneous hydrologic data were collected at 128 measuring sites and 19 water-quality sampling sites not involved in the systematic data-collection program. The data in this report represent that part of the National Water Data System collected by the U.S. Geological Survey and cooperating State and Federal agencies in Virginia.

  16. Water-quality, bed-sediment, and biological data (October 2012 through September 2013) and statistical summaries of data for streams in the Clark Fork Basin, Montana

    USGS Publications Warehouse

    Dodge, Kent A.; Hornberger, Michelle I.; Dyke, Jessica

    2014-01-01

    This report presents the analytical results and quality-assurance data for water-quality, bed-sediment, and biota samples collected at sites from October 2012 through September 2013. Water-quality data include concentrations of selected major ions, trace elements, and suspended sediment. Turbidity and dissolved organic carbon were analyzed for water samples collected at the four sites where seasonal daily values of turbidity were being determined. Daily values of mean suspended-sediment concentration and suspended-sediment discharge were determined for four sites. Bed-sediment data include trace-element concentrations in the fine-grained fraction. Biological data include trace-element concentrations in whole-body tissue of aquatic benthic insects. Statistical summaries of water-quality, bed-sediment, and biological data for sites in the upper Clark Fork Basin are provided for the period of record.

  17. Analysis of water quality in the Blue River watershed, Colorado, 1984 through 2007

    USGS Publications Warehouse

    Bauch, Nancy J.; Miller, Lisa D.; Yacob, Sharon

    2014-01-01

    Water quality of streams, reservoirs, and groundwater in the Blue River watershed in the central Rocky Mountains of Colorado has been affected by local geologic conditions, historical hard-rock metal mining, and recent urban development. With these considerations, the U.S. Geological Survey, in cooperation with the Summit Water Quality Committee, conducted a study to compile historical water-quality data and assess water-quality conditions in the watershed. To assess water-quality conditions, stream data were primarily analyzed from October 1995 through December 2006, groundwater data from May 1996 through September 2004, and reservoir data from May 1984 through November 2007. Stream data for the Snake River, upper Blue River, and Tenmile Creek subwatersheds upstream from Dillon Reservoir and the lower Blue River watershed downstream from Dillon Reservoir were analyzed separately. (The complete abstract is provided in the report)

  18. DATA QUALITY OBJECTIVES IN RESEARCH PLANNING AT MED: THREE CASE STUDIES

    EPA Science Inventory

    This course will give a quality assurance perspective to research planning by describing the Data Quality Objective Process....Written plans are mandatory for all EPA environmental data collection activities according to EPA Order 5360.1 CHG 1 and Federal Acquisition Regulations,...

  19. State of the practice for traffic data quality : traffic data quality workshop : white paper.

    DOT National Transportation Integrated Search

    2002-12-31

    This White Paper documents the current state of the practice in the quality of traffic data generated by Intelligent Transportation Systems (ITS). The current state of the practice is viewed from the perspectives of both Operations and Planning perso...

  20. Quality of life from the perspective of the palliative care patient in a resource-poor community in South Africa.

    PubMed

    Jansen van Rensburg, Jacoba J M; Maree, Johanna E; van Belkum, Corrien

    2013-02-01

    Quality of life is an ill-defined term, as it means different things to different people. Quality of life has been well researched, especially with respect to people with cancer, but not necessarily from the perspective of the patient, and also, not in Third World, resource-poor countries. The objective of this study was to explore quality of life from the perspective of palliative care patients managed at a palliative care clinic serving a resource-poor community in Tshwane, South Africa. An exploratory, qualitative phenomenological study was conducted. The target population for this study was all patients managed at a palliative care clinic serving a resource-poor community in Tshwane. Self-report data were gathered by means of in-depth interviews. The data were analyzed using a template analysis style as well as content analysis using open coding. Data analysis was done concurrently with data gathering. Data saturation was reached after 10 interviews (n = 10). Three themes arose from the data: factors that had a positive influence on quality of life, factors that had a negative influence on quality of life, and experience of quality of life. Work played the most important role in quality of life whereas only one participant linked symptom control with quality of life. Experiencing symptoms, rejection, and stigmatization had a negative influence on quality of life. Friends and religion played a significant role and added to quality of life. Life was a daily struggle for survival. Poverty was so overwhelming that quality of life was primarily measured in terms of the ability to buy food and other basic commodities.

  1. Implementation and results of an integrated data quality assurance protocol in a randomized controlled trial in Uttar Pradesh, India.

    PubMed

    Gass, Jonathon D; Misra, Anamika; Yadav, Mahendra Nath Singh; Sana, Fatima; Singh, Chetna; Mankar, Anup; Neal, Brandon J; Fisher-Bowman, Jennifer; Maisonneuve, Jenny; Delaney, Megan Marx; Kumar, Krishan; Singh, Vinay Pratap; Sharma, Narender; Gawande, Atul; Semrau, Katherine; Hirschhorn, Lisa R

    2017-09-07

    There are few published standards or methodological guidelines for integrating Data Quality Assurance (DQA) protocols into large-scale health systems research trials, especially in resource-limited settings. The BetterBirth Trial is a matched-pair, cluster-randomized controlled trial (RCT) of the BetterBirth Program, which seeks to improve quality of facility-based deliveries and reduce 7-day maternal and neonatal mortality and maternal morbidity in Uttar Pradesh, India. In the trial, over 6300 deliveries were observed and over 153,000 mother-baby pairs across 120 study sites were followed to assess health outcomes. We designed and implemented a robust and integrated DQA system to sustain high-quality data throughout the trial. We designed the Data Quality Monitoring and Improvement System (DQMIS) to reinforce six dimensions of data quality: accuracy, reliability, timeliness, completeness, precision, and integrity. The DQMIS was comprised of five functional components: 1) a monitoring and evaluation team to support the system; 2) a DQA protocol, including data collection audits and targets, rapid data feedback, and supportive supervision; 3) training; 4) standard operating procedures for data collection; and 5) an electronic data collection and reporting system. Routine audits by supervisors included double data entry, simultaneous delivery observations, and review of recorded calls to patients. Data feedback reports identified errors automatically, facilitating supportive supervision through a continuous quality improvement model. The five functional components of the DQMIS successfully reinforced data reliability, timeliness, completeness, precision, and integrity. The DQMIS also resulted in 98.33% accuracy across all data collection activities in the trial. All data collection activities demonstrated improvement in accuracy throughout implementation. Data collectors demonstrated a statistically significant (p = 0.0004) increase in accuracy throughout consecutive audits. The DQMIS was successful, despite an increase from 20 to 130 data collectors. In the absence of widely disseminated data quality methods and standards for large RCT interventions in limited-resource settings, we developed an integrated DQA system, combining auditing, rapid data feedback, and supportive supervision, which ensured high-quality data and could serve as a model for future health systems research trials. Future efforts should focus on standardization of DQA processes for health systems research. ClinicalTrials.gov identifier, NCT02148952 . Registered on 13 February 2014.
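
    The double-data-entry audit described above reduces, at its core, to a field-by-field comparison of two independent entries of the same record. A schematic version follows; the field names are invented and this is not the BetterBirth DQMIS implementation.

      # Schematic double-data-entry comparison.
      def entry_accuracy(entry_a, entry_b):
          """Fraction of fields on which two independent entries agree."""
          fields = set(entry_a) | set(entry_b)
          matches = sum(1 for f in fields if entry_a.get(f) == entry_b.get(f))
          return matches / len(fields)

      a = {"birth_weight_g": 3100, "apgar_5min": 9, "delivery_mode": "vaginal"}
      b = {"birth_weight_g": 3100, "apgar_5min": 8, "delivery_mode": "vaginal"}
      print(f"{entry_accuracy(a, b):.1%}")  # 66.7% -> flag the record for review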

  2. When Are Mobile Phones Useful for Water Quality Data Collection? An Analysis of Data Flows and ICT Applications among Regulated Monitoring Institutions in Sub-Saharan Africa

    PubMed Central

    Kumpel, Emily; Peletz, Rachel; Bonham, Mateyo; Fay, Annette; Cock-Esteb, Alicea; Khush, Ranjiv

    2015-01-01

    Water quality monitoring is important for identifying public health risks and ensuring water safety. However, even when water sources are tested, many institutions struggle to access data for immediate action or long-term decision-making. We analyzed water testing structures among 26 regulated water suppliers and public health surveillance agencies across six African countries and identified four water quality data management typologies. Within each typology, we then analyzed the potential for information and communication technology (ICT) tools to facilitate water quality information flows. A consistent feature of all four typologies was that testing activities occurred in laboratories or offices, not at water sources; therefore, mobile phone-based data management may be most beneficial for institutions that collect data from multiple remote laboratories. We implemented a mobile phone application to facilitate water quality data collection within the national public health agency in Senegal, Service National de l’Hygiène. Our results indicate that using the phones to transmit more than just water quality data will likely improve the effectiveness and sustainability of this type of intervention. We conclude that an assessment of program structure, particularly its data flows, provides a sound starting point for understanding the extent to which ICTs might strengthen water quality monitoring efforts. PMID:26404343

  3. When Are Mobile Phones Useful for Water Quality Data Collection? An Analysis of Data Flows and ICT Applications among Regulated Monitoring Institutions in Sub-Saharan Africa.

    PubMed

    Kumpel, Emily; Peletz, Rachel; Bonham, Mateyo; Fay, Annette; Cock-Esteb, Alicea; Khush, Ranjiv

    2015-09-02

    Water quality monitoring is important for identifying public health risks and ensuring water safety. However, even when water sources are tested, many institutions struggle to access data for immediate action or long-term decision-making. We analyzed water testing structures among 26 regulated water suppliers and public health surveillance agencies across six African countries and identified four water quality data management typologies. Within each typology, we then analyzed the potential for information and communication technology (ICT) tools to facilitate water quality information flows. A consistent feature of all four typologies was that testing activities occurred in laboratories or offices, not at water sources; therefore, mobile phone-based data management may be most beneficial for institutions that collect data from multiple remote laboratories. We implemented a mobile phone application to facilitate water quality data collection within the national public health agency in Senegal, Service National de l'Hygiène. Our results indicate that using the phones to transmit more than just water quality data will likely improve the effectiveness and sustainability of this type of intervention. We conclude that an assessment of program structure, particularly its data flows, provides a sound starting point for understanding the extent to which ICTs might strengthen water quality monitoring efforts.

  4. Water quality data for national-scale aquatic research: The Water Quality Portal

    USGS Publications Warehouse

    Read, Emily K.; Carr, Lindsay; DeCicco, Laura; Dugan, Hilary; Hanson, Paul C.; Hart, Julia A.; Kreft, James; Read, Jordan S.; Winslow, Luke

    2017-01-01

    Aquatic systems are critical to food, security, and society. But water data are collected by hundreds of research groups and organizations, many of which use nonstandard or inconsistent data descriptions and dissemination, and disparities across different types of water observation systems represent a major challenge for freshwater research. To address this issue, the Water Quality Portal (WQP) was developed by the U.S. Environmental Protection Agency, the U.S. Geological Survey, and the National Water Quality Monitoring Council to be a single point of access for water quality data dating back more than a century. The WQP is the largest standardized water quality data set available at the time of this writing, with more than 290 million records from more than 2.7 million sites in groundwater, inland, and coastal waters. The number of data contributors, data consumers, and third-party application developers making use of the WQP is growing rapidly. Here we introduce the WQP, including an overview of data, the standardized data model, and data access and services; and we describe challenges and opportunities associated with using WQP data. We also demonstrate through an example the value of the WQP data by characterizing seasonal variation in lake water clarity for regions of the continental U.S. The code used to access, download, analyze, and display these WQP data as shown in the figures is included as supporting information.
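
    A sketch of programmatic access to the WQP web service in Python. The endpoint and parameter names follow the service's public documentation as best recalled here and should be verified at https://www.waterqualitydata.us before use.

      # Sketch of a WQP web-service query; verify current parameter names
      # against the WQP documentation before relying on this.
      import io
      import pandas as pd
      import requests

      params = {
          "statecode": "US:55",                    # Wisconsin
          "characteristicName": "Temperature, water",
          "startDateLo": "01-01-2015",
          "startDateHi": "12-31-2015",
          "mimeType": "csv",
      }
      resp = requests.get("https://www.waterqualitydata.us/data/Result/search",
                          params=params, timeout=120)
      resp.raise_for_status()
      df = pd.read_csv(io.StringIO(resp.text))
      print(len(df), "result records")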

  5. Filtered Push: Annotating Distributed Data for Quality Control and Fitness for Use Analysis

    NASA Astrophysics Data System (ADS)

    Morris, P. J.; Kelly, M. A.; Lowery, D. B.; Macklin, J. A.; Morris, R. A.; Tremonte, D.; Wang, Z.

    2009-12-01

    The single greatest problem with the federation of scientific data is the assessment of the quality and validity of the aggregated data in the context of particular research problems, that is, its fitness for use. There are three critical data quality issues in networks of distributed natural science collections data, as in all scientific data: identifying and correcting errors, maintaining currency, and assessing fitness for use. To this end, we have designed and implemented a prototype network in the domain of natural science collections. This prototype is built over the open source Map-Reduce platform Hadoop with a network client in the open source collections management system Specify 6. We call this network “Filtered Push” as, at its core, annotations are pushed from the network edges to relevant authoritative repositories, where humans and software filter the annotations before accepting them as changes to the authoritative data. The Filtered Push software is a domain-neutral framework for originating, distributing, and analyzing record-level annotations. Network participants can subscribe to notifications arising from ontology-based analyses of new annotations or of purpose-built queries against the network's global history of annotations. Quality and fitness for use of distributed natural science collections data can be addressed with Filtered Push software by implementing a network that allows data providers and consumers to define potential errors in data, develop metrics for those errors, specify workflows to analyze distributed data to detect potential errors, and to close the quality management cycle by providing a network architecture to pushing assertions about data quality such as corrections back to the curators of the participating data sets. Quality issues in distributed scientific data have several things in common: (1) Statements about data quality should be regarded as hypotheses about inconsistencies between perhaps several records, data sets, or practices of science. (2) Data quality problems often cannot be detected only from internal statistical correlations or logical analysis, but may need the application of defined workflows that signal illogical output. (3) Changes in scientific theory or practice over time can result in changes of what QC tests should be applied to legacy data. (4) The frequency of some classes of error in a data set may be identifiable without the ability to assert that a particular record is in error. To address these issues requires, as does science itself, framing QC hypotheses against data that may be anywhere and may arise at any time in the future. In short, QC for science data is a never ending process. It must provide for notice to an agent (human or software) that a given dataset supports a hypothesis of inconsistency with a current scientific resource or model, or with potential generalizations of the concepts in a metadata ontology. Like quality control in general, quality control of distributed data is a repeated cyclical process. In implementing a Filtered Push network for quality control, we have a model in which the cost of QC forever is not substantially greater than QC once.
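
    A record-level annotation of the sort the framework distributes can be pictured as a small structured assertion. The fields below are a guess at the general shape for illustration only, not the actual Filtered Push schema.

      # Illustrative record-level annotation; field names are hypothetical.
      from dataclasses import dataclass, field
      from datetime import datetime, timezone

      @dataclass
      class Annotation:
          subject_record: str        # identifier of the annotated record
          assertion: str             # the data quality hypothesis being asserted
          proposed_correction: dict  # field -> suggested value
          evidence: str              # why the annotator believes this
          annotator: str
          created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

      ann = Annotation(
          subject_record="urn:catalog:EXAMPLE:Herp:12345",
          assertion="georeference falls outside the stated country",
          proposed_correction={"decimalLatitude": -23.55},
          evidence="country=Brazil but the coordinates plot in the North Atlantic",
          annotator="qc-bot@example.org",
      )
      print(ann.assertion)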

  6. The 3D Elevation Program: summary for Missouri

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  7. The 3D Elevation Program: summary for Montana

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The new 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  8. The 3D Elevation Program: summary for Louisiana

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  9. The 3D Elevation Program: summary for Tennessee

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  10. The 3D Elevation Program: summary for New York

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  11. 3D Elevation Program: summary for Vermont

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  12. The 3D Elevation Program: summary for Maryland

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  13. The 3D Elevation Program: summary for Ohio

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation's natural and constructed features.

  14. The 3D Elevation Program: summary for Indiana

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation's natural and constructed features.

  15. The 3D Elevation Program: summary for Maine

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  16. The 3D Elevation Program: summary for Kentucky

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  17. The 3D Elevation Program: summary for Oregon

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  18. The 3D Elevation Program: summary for North Dakota

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  19. The 3D Elevation Program: summary for Florida

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The new 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the OMB Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  20. 3D Elevation Program: summary for Nebraska

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  1. The 3D Elevation Program: summary for Alabama

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The new 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A-16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  2. 14 CFR 21.143 - Quality control data requirements; prime manufacturer.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Quality control data requirements; prime... describing assigned responsibilities and delegated authority of the quality control organization, together with a chart indicating the functional relationship of the quality control organization to management...

  3. 42 CFR 416.43 - Conditions for coverage-Quality assessment and performance improvement.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... outcomes, patient safety, and quality of care. (2) Performance improvement activities must track adverse... improves patient safety by using quality indicators or performance measures associated with improved health... incorporate quality indicator data, including patient care and other relevant data regarding services...

  4. 42 CFR 416.43 - Conditions for coverage-Quality assessment and performance improvement.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... outcomes, patient safety, and quality of care. (2) Performance improvement activities must track adverse... improves patient safety by using quality indicators or performance measures associated with improved health... incorporate quality indicator data, including patient care and other relevant data regarding services...

  5. 42 CFR 416.43 - Conditions for coverage-Quality assessment and performance improvement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... outcomes, patient safety, and quality of care. (2) Performance improvement activities must track adverse... improves patient safety by using quality indicators or performance measures associated with improved health... incorporate quality indicator data, including patient care and other relevant data regarding services...

  6. 42 CFR 480.144 - Access to QIO data and information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs... 42 Public Health 4 2012-10-01 2012-10-01 false Access to QIO data and information. 480.144 Section...

  7. 42 CFR 480.144 - Access to QIO data and information.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION REVIEW INFORMATION Utilization and Quality Control Quality Improvement Organizations... 42 Public Health 4 2011-10-01 2011-10-01 false Access to QIO data and information. 480.144 Section...

  8. 42 CFR 480.144 - Access to QIO data and information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs... 42 Public Health 4 2013-10-01 2013-10-01 false Access to QIO data and information. 480.144 Section...

  9. 42 CFR 480.144 - Access to QIO data and information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS ACQUISITION, PROTECTION, AND DISCLOSURE OF QUALITY IMPROVEMENT ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs... 42 Public Health 4 2014-10-01 2014-10-01 false Access to QIO data and information. 480.144 Section...

  10. Quantifying the quality of precipitation data from different sources

    NASA Astrophysics Data System (ADS)

    Leijnse, Hidde; Wauben, Wiel; Overeem, Aart; de Haij, Marijn

    2015-04-01

    There is an increasing demand for high-resolution rainfall data. The current manual and automatic networks of climate and meteorological stations provide high-quality rainfall data, but they cannot provide the high spatial and temporal resolution required for many applications. This can only partly be solved by using remotely sensed data. It is therefore necessary to consider third-party data, such as rain gauges operated by amateurs and rainfall intensities from commercial cellular communication links. The quality of such third-party data is highly variable and generally lower than that of dedicated networks. In order to be able to use data from various sources, it is vital that quantitative knowledge of the data quality is available. This holds for all data sources, including the rain gauges in the reference networks of climate and meteorological stations. Data quality information is generally either not available or very limited for third-party data sources. For most dedicated climate and meteorological networks, this information is only available for the sensor in laboratory conditions. In many cases, however, a significant part of the measurement errors and uncertainties is determined by the siting and maintenance of the sensor, for which generally only qualitative information is available. Furthermore, sensors may have limitations under specific conditions. We aim to quantify data quality for different data sources by performing analyses on collocated data sets. Here we present an intercomparison of two years of precipitation data from six different sources (manual rain gauge, automatic rain gauge, present weather sensor, weather radar, commercial cellular communication links, and Meteosat) at three different locations in the Netherlands. We use auxiliary meteorological data to determine if the quality is influenced by other variables (e.g., the temperature influencing the evaporation from the rain gauge). We use three techniques to compare the data sets: 1) direct comparison; 2) triple collocation (see Stoffelen, 1998); and 3) comparison of statistics. Stoffelen, A. (1998). Toward the true near-surface wind speed: Error modeling and calibration using triple collocation. Journal of Geophysical Research: Oceans (1978-2012), 103(C4), 7755-7766.
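    Of the three comparison techniques, triple collocation is the least familiar: given three collocated systems whose errors are mutually independent, each system's error variance follows from pairwise covariances, e.g. Var(e_x) = Cov(x,x) - Cov(x,y)Cov(x,z)/Cov(y,z). The sketch below checks this identity on synthetic data; the variable names and error levels are illustrative, not values from the study.

      # Covariance-based triple collocation (after Stoffelen, 1998), assuming
      # three collocated systems observe the same truth with independent,
      # zero-mean additive errors. Synthetic data stand in for real observations.
      import numpy as np

      rng = np.random.default_rng(0)
      truth = rng.gamma(2.0, 2.0, size=5000)           # "true" rainfall
      gauge = truth + rng.normal(0, 0.3, truth.size)   # three independent error realizations
      radar = truth + rng.normal(0, 0.8, truth.size)
      link  = truth + rng.normal(0, 1.2, truth.size)

      def tc_error_variances(x, y, z):
          c = np.cov(np.vstack([x, y, z]))
          # error variance of each system from the covariance identities
          var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
          var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
          var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
          return var_x, var_y, var_z

      print([round(v, 2) for v in tc_error_variances(gauge, radar, link)])
      # ≈ [0.09, 0.64, 1.44], i.e. the squared error std devs injected above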

  11. Quality aspects of the Wegener Center multi-satellite GPS radio occultation record OPSv5.6

    NASA Astrophysics Data System (ADS)

    Angerer, Barbara; Ladstädter, Florian; Scherllin-Pirscher, Barbara; Schwärz, Marc; Steiner, Andrea K.; Foelsche, Ulrich; Kirchengast, Gottfried

    2017-12-01

    The demand for high-quality atmospheric data records, which are applicable in climate studies, is undisputed. Using such records requires knowledge of the quality and the specific characteristics of all contained data sources. The latest version of the Wegener Center (WEGC) multi-satellite Global Positioning System (GPS) radio occultation (RO) record, OPSv5.6, provides globally distributed upper-air satellite data of high quality, usable for climate and other high-accuracy applications. The GPS RO technique has been deployed in several satellite missions since 2001. Consistency among data from these missions is essential to create a homogeneous long-term multi-satellite climate record. To enable qualified use of the WEGC OPSv5.6 data set, we performed a detailed analysis of satellite-dependent quality aspects from 2001 to 2017. We present the impact of the OPSv5.6 quality control on the processed data and reveal time-dependent and satellite-specific quality characteristics. The highest quality data are found for MetOp (Meteorological Operational satellite) and GRACE (Gravity Recovery and Climate Experiment). Data from FORMOSAT-3/COSMIC (Formosa Satellite mission-3/Constellation Observing System for Meteorology, Ionosphere, and Climate) are also of high quality. However, comparatively large day-to-day variations and satellite-dependent irregularities need to be taken into account when using these data. We validate the consistency among the various satellite missions by calculating monthly mean temperature deviations from the multi-satellite mean, including a correction for the different sampling characteristics. The results are highly consistent in the altitude range from 8 to 25 km, with mean temperature deviations less than 0.1 K. At higher altitudes the OPSv5.6 RO temperature record is increasingly influenced by the characteristics of the bending angle initialization, with the amount of impact depending on the receiver quality.
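    The consistency check described here reduces to a small computation once monthly means are in hand. The sketch below shows only the deviation-from-multi-satellite-mean step; the numbers and column names are invented, and WEGC's sampling-error correction is omitted.

      # Monthly mean temperature deviation of each RO mission from the
      # multi-satellite mean. Data and column names are hypothetical.
      import pandas as pd

      df = pd.DataFrame({
          "month":     ["2008-01", "2008-01", "2008-01", "2008-02", "2008-02", "2008-02"],
          "satellite": ["MetOp", "GRACE", "COSMIC", "MetOp", "GRACE", "COSMIC"],
          "T_mean":    [215.2, 215.3, 215.1, 214.8, 214.9, 215.0],  # K, e.g. at 20 km
      })

      per_sat = df.groupby(["month", "satellite"])["T_mean"].mean().unstack()
      deviation = per_sat.sub(per_sat.mean(axis=1), axis=0)  # vs. multi-satellite mean
      print(deviation.round(2))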

  12. Plan–Provider Integration, Premiums, and Quality in the Medicare Advantage Market

    PubMed Central

    Frakt, Austin B; Pizer, Steven D; Feldman, Roger

    2013-01-01

    Objective. To investigate how integration between Medicare Advantage plans and health care providers is related to plan premiums and quality ratings. Data Source. We used public data from the Centers for Medicare and Medicaid Services (CMS) and the Area Resource File and private data from one large insurer. Premiums and quality ratings are from 2009 CMS administrative files and some control variables are historical. Study Design. We estimated ordinary least-squares models for premiums and plan quality ratings, with state fixed effects and firm random effects. The key independent variable was an indicator of plan–provider integration. Data Collection. With the exception of Medigap premium data, all data were publicly available. We ascertained plan–provider integration through examination of plans’ websites and governance documents. Principal Findings. We found that integrated plan–providers charge higher premiums, controlling for quality. Such plans also have higher quality ratings. We found no evidence that integration is associated with more generous benefits. Conclusions. Current policy encourages plan–provider integration, although potential effects on health insurance products and markets are uncertain. Policy makers and regulators may want to closely monitor changes in premiums and quality after integration and consider whether quality improvement (if any) justifies premium increases (if they occur). PMID:23800017
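    The abstract describes premium models with state fixed effects and firm random effects; a mixed-effects formulation is one natural way to sketch that structure. The data below are synthetic and the variable names are hypothetical, so this illustrates only the model design, not the study's estimates.

      # Sketch of the regression design: premium on an integration indicator,
      # with state fixed effects (dummies) and firm random intercepts.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 300
      plans = pd.DataFrame({
          "integrated": rng.integers(0, 2, n),
          "state": rng.choice(["NY", "MA", "FL"], n),
          "firm": rng.choice([f"firm{i}" for i in range(10)], n),
      })
      firm_effect = plans["firm"].map({f"firm{i}": e for i, e in
                                       enumerate(rng.normal(0, 5, 10))})
      plans["premium"] = 40 + 8 * plans["integrated"] + firm_effect + rng.normal(0, 3, n)

      result = smf.mixedlm("premium ~ integrated + C(state)", plans,
                           groups=plans["firm"]).fit()
      print(round(result.params["integrated"], 1))  # recovers roughly the +8 premium gap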

  13. Comparison of Water Years 2004-05 and Historical Water-Quality Data, Upper Gunnison River Basin, Colorado

    USGS Publications Warehouse

    Spahr, Norman E.; Hartle, David M.; Diaz, Paul

    2008-01-01

    Population growth and changes in land use have the potential to affect water quality and quantity in the upper Gunnison River Basin. In 1995, the U.S. Geological Survey (USGS), in cooperation with the Bureau of Land Management, City of Gunnison, Colorado River Water Conservation District, Crested Butte South Metropolitan District, Gunnison County, Hinsdale County, Mount Crested Butte Water and Sanitation District, National Park Service, Town of Crested Butte, Upper Gunnison River Water Conservancy District, and Western State College, established a water-quality monitoring program in the upper Gunnison River Basin to characterize current water-quality conditions and to assess the effects of increased urban development and other land-use changes on water quality. The monitoring network has evolved into two groups of stations - stations that are considered long term and stations that are considered rotational. The long-term stations are monitored to assist in defining temporal changes in water quality (how conditions may change over time). The rotational stations are monitored to assist in the spatial definition of water-quality conditions (how conditions differ throughout the basin) and to address local and short-term concerns. Some stations in the rotational group were changed beginning in water year 2007. Annual summaries of the water-quality data from the monitoring network provide a point of reference for discussions regarding water-quality monitoring in the upper Gunnison River Basin. This summary includes data collected during water years 2004 and 2005. The introduction provides a map of the sampling sites, definitions of terms, and a one-page summary of selected water-quality conditions at the network stations. The remainder of the summary is organized around the data collected at individual stations. Data collected during water years 2004 and 2005 are compared to historical data, State water-quality standards, and Federal water-quality guidelines. Data were collected following USGS protocols.

  14. Comparison of 2006-2007 Water Years and Historical Water-Quality Data, Upper Gunnison River Basin, Colorado

    USGS Publications Warehouse

    Solberg, P.A.; Moore, Bryan; Smits, Dennis

    2009-01-01

    Population growth and changes in land use have the potential to affect water quality and quantity in the upper Gunnison River basin. In 1995, the U.S. Geological Survey (USGS), in cooperation with the Bureau of Land Management, City of Gunnison, Colorado River Water Conservation District, Crested Butte South Metropolitan District, Gunnison County, Hinsdale County, Mount Crested Butte Water and Sanitation District, National Park Service, Town of Crested Butte, Upper Gunnison River Water Conservancy District, and Western State College established a water-quality monitoring program in the upper Gunnison River basin to characterize current water-quality conditions and to assess the effects of increased urban development and other land-use changes on water quality. The monitoring network has evolved into two groups of stations - stations that are considered long term and stations that are considered rotational. The long-term stations are monitored to assist in defining temporal changes in water quality (how conditions may change over time). The rotational stations are monitored to assist in the spatial definition of water-quality conditions (how conditions differ throughout the basin) and to address local and short-term concerns. Some stations in the rotational group were changed beginning in water year 2007. Annual summaries of the water-quality data from the monitoring network provide a point of reference for discussions regarding water-quality monitoring in the upper Gunnison River basin. This summary includes data collected during water years 2006 and 2007. The introduction provides a map of the sampling sites, definitions of terms, and a one-page summary of selected water-quality conditions at the network stations. The remainder of the summary is organized around the data collected at individual stations. Data collected during water years 2006 and 2007 are compared to historical data, State water-quality standards, and Federal water-quality guidelines. Data were collected following USGS protocols (U.S. Geological Survey, variously dated).

  15. Assessment of Water-Quality Monitoring and a Proposed Water-Quality Monitoring Network for the Mosquito Lagoon Basin, East-Central Florida

    USGS Publications Warehouse

    Kroening, Sharon E.

    2008-01-01

    Surface- and ground-water quality data from the Mosquito Lagoon Basin were compiled and analyzed to: (1) describe historical and current monitoring in the basin, (2) summarize surface- and ground-water quality conditions with an emphasis on identifying areas that require additional monitoring, and (3) develop a water-quality monitoring network to meet the goals of Canaveral National Seashore (a National Park) and to fill gaps in current monitoring. Water-quality data were compiled from the U.S. Environmental Protection Agency's STORET system, the U.S. Geological Survey's National Water Information System, or from the agency which collected the data. Most water-quality monitoring focused on assessing conditions in Mosquito Lagoon. Significant spatial and/or seasonal variations in water-quality constituents in the lagoon were quantified for pH values, fecal coliform bacteria counts, and concentrations of dissolved oxygen, total nitrogen, total phosphorus, chlorophyll-a, and total suspended solids. Trace element, pesticide, and ground-water-quality data were more limited. Organochlorine insecticides were the major class of pesticides analyzed. A surface- and ground-water-quality monitoring network was designed for the Mosquito Lagoon Basin which emphasizes: (1) analysis of compounds indicative of human activities, including pesticides and other trace organic compounds present in domestic and industrial waste; (2) greater data collection in the southern part of Mosquito Lagoon where spatial variations in water-quality constituents were quantified; and (3) additional ground-water-quality data collection in the surficial aquifer system and Upper Floridan aquifer. Surface-water-quality data collected as part of this network would include a fixed-station monitoring network of eight sites in the southern part of the basin, including a canal draining Oak Hill. Ground-water quality monitoring should be done routinely at about 20 wells in the surficial aquifer system and Upper Floridan aquifer, distributed between developed and undeveloped parts of the basin. Water samples collected should be analyzed for a wide range of constituents, including physical properties, nutrients, suspended sediment, and constituents associated with increased urban development such as pesticides, other trace organic compounds associated with domestic and industrial waste, and trace elements.

  16. Impact of Requirements Quality on Project Success or Failure

    NASA Astrophysics Data System (ADS)

    Tamai, Tetsuo; Kamata, Mayumi Itakura

    We are interested in the relationship between the quality of the requirements specifications for software projects and the subsequent outcome of the projects. To examine this relationship, we investigated 32 projects started and completed between 2003 and 2005 by the software development division of a large company in Tokyo. The company has collected reliable data on requirements specification quality, as evaluated by software quality assurance teams, and overall project performance data relating to cost and time overruns. The data for requirements specification quality were first converted into a multidimensional space, with each dimension corresponding to an item of the recommended structure for software requirements specifications (SRS) defined in IEEE Std. 830-1998. We then applied various statistical analysis methods to the SRS quality data and project outcomes.
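    The conversion the authors describe, one dimension per IEEE Std. 830-1998 SRS item, can be pictured as a score matrix. The sketch below uses invented scores and a simple per-dimension correlation against cost overrun, which is only one stand-in for the "various statistical analysis methods" the study applied.

      # Each project's SRS is scored per IEEE 830 section, giving a vector per
      # project; dimensions are then related to outcomes. Values are invented.
      import numpy as np

      srs_sections = ["purpose", "scope", "functional_reqs",
                      "external_interfaces", "performance", "constraints"]

      # rows: projects; columns: quality score (0-5) per SRS section
      quality = np.array([
          [4, 5, 3, 2, 4, 3],
          [2, 3, 1, 1, 2, 2],
          [5, 4, 4, 3, 4, 4],
          [3, 2, 2, 2, 1, 3],
      ])
      cost_overrun_pct = np.array([5.0, 42.0, 0.0, 25.0])

      # correlate each SRS dimension with the outcome
      for name, scores in zip(srs_sections, quality.T):
          r = np.corrcoef(scores, cost_overrun_pct)[0, 1]
          print(f"{name:20s} r = {r:+.2f}")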

  17. 40 CFR 63.7535 - Is there a minimum amount of monitoring data I must obtain?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-control periods, or required monitoring system quality assurance or control activities in data averages... required monitoring system quality assurance or quality control activities (including, as applicable... control activities. You must calculate monitoring results using all other monitoring data collected while...

  18. 40 CFR 63.7535 - Is there a minimum amount of monitoring data I must obtain?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-control periods, or required monitoring system quality assurance or control activities in data averages... required monitoring system quality assurance or quality control activities (including, as applicable... control activities. You must calculate monitoring results using all other monitoring data collected while...

  19. Water resources data for Texas, water year 1993. Volume 4. Ground-water data. Water-data report (Annual), 1 October 1992-30 September 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gandara, S.C.; Jones, R.E.

    1993-11-01

    Water-resources data for the 1993 water year for Texas consist of records of stage, discharge, and water quality of streams; stage and contents in lakes and reservoirs; and water levels and water quality in wells. Volume 4 contains water levels for 771 observation wells and water-quality data for 226 monitoring wells.

  20. Water resources data for Texas, water year 1996. Volume 4. Ground-water data. Water-data report (Annual), 1 October 1995-30 September 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gandara, S.C.; Jones, R.E.; Barbie, D.L.

    1996-11-22

    Water-resources data for the 1996 water year for Texas consist of records of stage, discharge, and water quality of streams; stage and contents in lakes and reservoirs; and water levels and water quality in wells. Volume 4 contains water levels for 845 observation wells and water-quality data for 187 monitoring wells.

  1. Water resources data for Texas, water year 1994. Volume 4. Ground-water data. Water-data report (Annual), 1 October 1993-30 September 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gandara, S.C.; Jones, R.E.

    1994-12-12

    Water-resources data for the 1994 water year for Texas consist of records of stage, discharge, and water quality of streams; stage and contents in lakes and reservoirs; and water levels and water quality in wells. Volume 4 contains water levels for 698 observation wells and water-quality data for 97 monitoring wells.

  2. Water resources data for Texas, water year 1997. Volume 4. Ground-water data. Water-data report (Annual), 1 October 1996-30 September 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gandara, S.C.; Jones, R.E.; Barbie, D.L.

    1997-12-03

    Water-resources data for the 1997 water year for Texas consist of records of stage, discharge, and water quality of streams; stage and contents in lakes and reservoirs; and water levels and water quality in wells. Volume 4 contains water levels for 790 observation wells and water-quality data for 245 monitoring wells.

  3. Water resources data for Texas, water year 1995. Volume 4. Ground-water data. Water-data report (Annual), 1 October 1994-30 September 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gandara, S.C.; Jones, R.E.

    1995-12-18

    Water-resources data for the 1995 water year for Texas consist of records of stage, discharge, and water quality of streams; stage and contents in lakes and reservoirs; and water levels and water quality in wells. Volume 4 contains water levels for 919 observation wells and water-quality data for 226 monitoring wells.

  4. An Interoperable Architecture for Air Pollution Early Warning System Based on Sensor Web

    NASA Astrophysics Data System (ADS)

    Samadzadegan, F.; Zahmatkesh, H.; Saber, M.; Ghazi khanlou, H. J.

    2013-09-01

    Environmental monitoring systems deal with time-sensitive issues that require quick responses in emergency situations. Handling sensor observations in near real time and extracting valuable information from them are challenging issues in these systems, from both a technical and a scientific point of view. Ever-increasing population growth in urban areas has caused problems in developing countries that directly or indirectly affect human life. One applicable solution for controlling and managing air quality in megacities is to use real-time, up-to-date air quality information gathered by spatially distributed sensors, with sensor web technology as the basis for monitoring and early warning systems. Urban air quality monitoring systems use the functionality of geospatial information systems as a platform for analyzing, processing, and visualizing data, in combination with the Sensor Web, to support decision support systems in disaster management and emergency situations. The presented system uses the Sensor Web Enablement (SWE) framework of the Open Geospatial Consortium (OGC), which offers a standard framework that allows the integration of sensors and sensor data into spatial data infrastructures. The SWE framework introduces standards for services to access sensor data and discover events from sensor data streams, as well as a set of standards for the description of sensors and the encoding of measurements. The system provides capabilities to collect, transfer, share, and process air quality sensor data and to disseminate air quality status in real time; using this standard framework makes it possible to overcome interoperability challenges. In a routine scenario, air quality data measured by in-situ sensors are communicated to a central station, where the data are analyzed and processed. The extracted air quality status is examined for emergency situations, and if necessary air quality reports are sent to the authorities. This research proposes an architecture showing how to integrate air quality sensor data streams into a geospatial data infrastructure to produce an interoperable air quality monitoring system that supports disaster management with real-time information. The developed system was tested on Tehran's air pollution sensors, calculating the Air Quality Index (AQI) for the CO pollutant and notifying registered users in emergency cases by sending warning e-mails. An air quality monitoring portal is used to retrieve and visualize sensor observations through the interoperable framework. The system can also retrieve SOS observations through a WPS in a cascaded service-chaining pattern to monitor trends in sensor observations over time.
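    The AQI step the system performs for CO is a piecewise linear interpolation between breakpoint concentrations: I = (I_hi - I_lo) / (C_hi - C_lo) * (C - C_lo) + I_lo. The sketch below uses the U.S. EPA breakpoint table for 8-hour CO in ppm; the paper does not state which national AQI definition Tehran's system uses, so treat the table and the warning threshold as illustrative assumptions.

      # AQI for CO via EPA-style breakpoint interpolation (assumed definition).
      CO_BREAKPOINTS = [  # (C_lo, C_hi, I_lo, I_hi), 8-hour CO in ppm
          (0.0, 4.4, 0, 50), (4.5, 9.4, 51, 100), (9.5, 12.4, 101, 150),
          (12.5, 15.4, 151, 200), (15.5, 30.4, 201, 300),
          (30.5, 40.4, 301, 400), (40.5, 50.4, 401, 500),
      ]

      def co_aqi(conc_ppm: float) -> int:
          c = int(conc_ppm * 10) / 10  # CO is truncated to 0.1 ppm before lookup
          for c_lo, c_hi, i_lo, i_hi in CO_BREAKPOINTS:
              if c_lo <= c <= c_hi:
                  return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
          raise ValueError("concentration outside AQI scale")

      def should_warn(conc_ppm: float, threshold: int = 101) -> bool:
          """Trigger the warning e-mail at 'Unhealthy for Sensitive Groups'
          or worse (assumed threshold)."""
          return co_aqi(conc_ppm) >= threshold

      print(co_aqi(7.2), should_warn(7.2))  # -> 78 False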

  5. Evaluation of well-purging effects on water-quality results for samples collected from the eastern Snake River Plain aquifer underlying the Idaho National Laboratory, Idaho

    USGS Publications Warehouse

    Knobel, LeRoy L.

    2006-01-01

    This report presents qualitative and quantitative comparisons of water-quality data from the Idaho National Laboratory, Idaho, to determine if the change from purging three wellbore volumes to one wellbore volume has a discernible effect on the comparability of the data. Historical water-quality data for 30 wells were visually compared to water-quality data collected after purging only 1 wellbore volume from the same wells. Of the 322 qualitatively examined constituent plots, 97.5 percent met 1 or more of the criteria established for determining data comparability. A simple statistical test of whether water-quality data collected from 28 wells at the INL with long purge times (after pumping 1 and 3 wellbore volumes of water) were statistically the same at the 95-percent confidence level indicated that 97.9 percent of 379 constituent pairs were equivalent. Comparability of water-quality data determined from both the qualitative (97.5 percent comparable) and quantitative (97.9 percent comparable) evaluations after purging 1 and 3 wellbore volumes of water indicates that the change from purging 3 to 1 wellbore volumes had no discernible effect on comparability of water-quality data at the INL. However, the qualitative evaluation was limited because only October-November 2003 data were available for comparison to historical data. This report was prepared by the U.S. Geological Survey in cooperation with the U.S. Department of Energy.
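    The report's "simple statistical equation" is not reproduced in the abstract; a paired t-test at the 95-percent confidence level is one standard stand-in for checking whether paired constituent values are equivalent, as sketched below with illustrative numbers.

      # Paired comparison of the same constituent measured after purging
      # 1 vs. 3 wellbore volumes. Data are illustrative, not the report's.
      import numpy as np
      from scipy import stats

      one_volume   = np.array([0.52, 1.10, 0.33, 2.05, 0.88, 1.40])
      three_volume = np.array([0.50, 1.12, 0.35, 2.01, 0.90, 1.38])

      t, p = stats.ttest_rel(one_volume, three_volume)
      equivalent = p > 0.05  # fail to reject "no difference" at 95% confidence
      print(f"t = {t:.2f}, p = {p:.3f}, comparable: {equivalent}")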

  6. Quality-assurance data for routine water quality analyses by the U. S. Geological Survey laboratory in Troy, New York; July 1993 through June 1995

    USGS Publications Warehouse

    Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.

    2001-01-01

    A laboratory for analysis of low-ionic strength water has been developed at the U.S. Geological Survey (USGS) office in Troy, N.Y., to analyze samples collected by USGS projects in the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures developed to ensure proper sample collection, processing, and analysis. The quality-assurance/quality-control data are stored in the laboratory's SAS data-management system, which provides efficient review, compilation, and plotting of quality-assurance/quality-control data. This report presents and discusses samples analyzed from July 1993 through June 1995. Quality-control results for 18 analytical procedures were evaluated for bias and precision. Control charts show that data from seven of the analytical procedures were biased throughout the analysis period for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, dissolved inorganic carbon, dissolved organic carbon (soil expulsions), chloride, magnesium, nitrate (colorimetric method), and pH. Three of the analytical procedures were occasionally biased but were within control limits; they were: calcium (high for high-concentration samples for May 1995), dissolved organic carbon (high for high-concentration samples from January through September 1994), and fluoride (high in samples for April and June 1994). No quality-control sample has been developed for the organic monomeric aluminum procedure. Results from the filter-blank and analytical-blank analyses indicate that all analytical procedures in which blanks were run were within control limits, although values for a few blanks were outside the control limits. Blanks were not analyzed for acid-neutralizing capacity, dissolved inorganic carbon, fluoride, nitrate (colorimetric method), or pH. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in 14 of the 18 procedures. Data-quality objectives were met by more than 90 percent of the samples analyzed in all procedures except total monomeric aluminum (85 percent of samples met objectives), total aluminum (70 percent of samples met objectives), and dissolved organic carbon (85 percent of samples met objectives). Triplicate samples were not analyzed for ammonium, fluoride, dissolved inorganic carbon, or nitrate (colorimetric method). Results of the USGS interlaboratory Standard Reference Sample Program indicated high data quality with a median result of 3.6 of a possible 4.0. Environment Canada's LRTAP interlaboratory study results indicated that more than 85 percent of the samples met data-quality objectives in 6 of the 12 analyses; exceptions were calcium, dissolved organic carbon, chloride, pH, potassium, and sodium. Data-quality objectives were not met for calcium samples in one LRTAP study, but 94 percent of samples analyzed were within control limits for the remaining studies. Data-quality objectives were not met by 35 percent of samples analyzed for dissolved organic carbon, but 94 percent of sample values were within 20 percent of the most probable value. Data-quality objectives were not met for 30 percent of samples analyzed for chloride, but 90 percent of sample values were within 20 percent of the most probable value. Measurements of samples with a pH above 6.0 were biased high in 54 percent of the samples, although 85 percent of the samples met data-quality objectives for pH measurements below 6.0. Data-quality objectives for potassium and sodium were not met in one study (only 33 percent of the samples analyzed met the objectives), although 85 percent of the sample values were within control limits for the other studies. Measured sodium values were above the upper control limit in all studies. Results from blind reference-sample analyses indicated that data
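    The triplicate-precision evaluation above is a coefficient-of-variation check (CV = std / mean * 100) against a data-quality objective, as in this sketch; the values and the 10 percent objective are illustrative, not the laboratory's actual objectives.

      # CV of each analytical triplicate vs. an assumed data-quality objective.
      import statistics

      def triplicate_cv(values):
          return statistics.stdev(values) / statistics.fmean(values) * 100.0

      triplicates = {
          "chloride": [1.02, 1.05, 1.01],     # mg/L, illustrative values
          "total_Al": [0.110, 0.135, 0.096],
      }
      CV_OBJECTIVE = 10.0  # percent, assumed

      for analyte, vals in triplicates.items():
          cv = triplicate_cv(vals)
          status = "meets" if cv <= CV_OBJECTIVE else "fails"
          print(f"{analyte}: CV = {cv:.1f}% ({status} objective)")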

  7. Evaluating the Quality and Usability of Open Data for Public Health Research: A Systematic Review of Data Offerings on 3 Open Data Platforms.

    PubMed

    Martin, Erika G; Law, Jennie; Ran, Weijia; Helbig, Natalie; Birkhead, Guthrie S

    Government datasets are newly available on open data platforms that are publicly accessible, available in nonproprietary formats, free of charge, and with unlimited use and distribution rights. They provide opportunities for health research, but their quality and usability are unknown. Our objectives were to describe available open health data, identify whether data are presented in a way that is aligned with best practices and usable for researchers, and examine differences across platforms. Two reviewers used a standard coding guide to systematically review a random sample of data offerings on three open health data platforms at the federal, New York State, and New York City levels: NYC OpenData (New York City, all offerings, n = 37), Health Data NY (New York State, 25% sample, n = 71), and HealthData.gov (US Department of Health and Human Services, 5% sample, n = 75). Data characteristics from the coding guide were aggregated into summary indices for intrinsic data quality, contextual data quality, adherence to the Dublin Core metadata standards, and the 5-star open data deployment scheme. One quarter of the offerings were structured datasets; other presentation styles included charts (14.7%), documents describing data (12.0%), maps (10.9%), and query tools (7.7%). Health Data NY had higher intrinsic data quality (P < .001), contextual data quality (P < .001), and Dublin Core metadata standards adherence (P < .001). All met basic "web availability" open data standards; fewer met the higher standard of being "hyperlinked to other data." Although all platforms need improvement, they already provide readily available data for health research. Sustained effort on improving open data websites and metadata is necessary to ensure that researchers use these data, thereby increasing their research value.

  8. Earth Observation Data Quality Monitoring and Control: A Case Study of STAR Central Data Repository

    NASA Astrophysics Data System (ADS)

    Han, W.; Jochum, M.

    2017-12-01

    Earth observation data quality is very important for researchers and decision makers involved in weather forecasting, severe weather warning, disaster and emergency response, environmental monitoring, etc. Monitoring and controlling earth observation data quality, especially its accuracy, completeness, and timeliness, is very useful in data management and governance to optimize data flow, discover potential transmission issues, and better connect data providers and users. Taking a centralized near real-time satellite data repository, the STAR (Center for Satellite Applications and Research of NOAA) Central Data Repository (SCDR), as an example, this paper describes how to develop new mechanisms to verify data integrity, check data completeness, and monitor data latency in an operational data management system. Such quality monitoring and control of large-volume satellite data help data providers and managers improve the transmission of near real-time satellite data, enhance its acquisition and management, and overcome performance and management issues to better serve research and development activities.
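    Completeness and latency checks of the kind described can be sketched in a few lines; the granule cadence, threshold, and check interface below are assumptions for illustration, not SCDR's implementation.

      # Completeness (received vs. expected granules) and latency
      # (arrival minus observation time) checks for one day of data.
      from datetime import datetime, timedelta

      EXPECTED_GRANULES_PER_DAY = 96          # e.g. one granule per 15 minutes
      LATENCY_THRESHOLD = timedelta(hours=3)  # assumed near-real-time requirement

      def check_day(received):
          """received: list of (observation_time, arrival_time) pairs."""
          completeness = len(received) / EXPECTED_GRANULES_PER_DAY * 100
          late = [obs for obs, arr in received if arr - obs > LATENCY_THRESHOLD]
          return completeness, late

      obs = datetime(2017, 6, 1, 0, 0)
      received = [(obs + timedelta(minutes=15 * i),
                   obs + timedelta(minutes=15 * i, hours=1)) for i in range(90)]
      completeness, late = check_day(received)
      print(f"completeness: {completeness:.1f}%, late granules: {len(late)}")
      # -> completeness: 93.8%, late granules: 0 (6 granules missing)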

  9. Compression of next-generation sequencing quality scores using memetic algorithm

    PubMed Central

    2014-01-01

    Background The exponential growth of next-generation sequencing (NGS)-derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results In this paper we present a memetic algorithm (MA)-based NGS quality score data compressor, MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files and designs a compression codebook using MA-based multimodal optimization. The input data are then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains a higher compression ratio than other state-of-the-art methods. Notably, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747
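    The substitutional encoding can be illustrated apart from the memetic optimizer. In the sketch below, a frequency-ranked k-mer codebook stands in for the MA-designed codebook, so only the encode/decode mechanics are shown, not the paper's optimization.

      # Codebook-based substitutional compression of quality-score strings.
      # Assumes string lengths are multiples of K; the codebook design here
      # (frequency ranking) is a stand-in for MMQSC's memetic optimization.
      from collections import Counter

      K = 4  # codeword length (assumed)

      def build_codebook(quality_strings):
          kmers = Counter()
          for q in quality_strings:
              kmers.update(q[i:i + K] for i in range(0, len(q) - K + 1, K))
          return {kmer: idx for idx, (kmer, _) in enumerate(kmers.most_common())}

      def encode(q, codebook):
          return [codebook[q[i:i + K]] for i in range(0, len(q) - K + 1, K)]

      def decode(indices, codebook):
          inverse = {idx: kmer for kmer, idx in codebook.items()}
          return "".join(inverse[i] for i in indices)

      reads = ["IIIIHHHHGGGG!!!!", "IIIIGGGGHHHHIIII"]
      cb = build_codebook(reads)
      enc = encode(reads[0], cb)
      assert decode(enc, cb) == reads[0]  # lossless round trip
      print(enc)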

  10. Water resources data, North Carolina, water year 2002. Volume 1B: Surface-water records

    USGS Publications Warehouse

    Ragland, B.C.; Barker, R.G.; Robinson, J.B.

    2003-01-01

    Water-resources data for the 2002 water year for North Carolina consist of records of stage, discharge, and water quality for streams; stage and contents for lakes and reservoirs; precipitation; and ground-water levels and water quality of ground water. Volume 1 contains discharge records for 211 gaging stations; stage and contents for 62 lakes and reservoirs; stage for 20 gaging stations; water quality for 52 gaging stations and 7 miscellaneous sites, and continuous water quality for 30 sites; and continuous precipitation at 109 sites. Volume 2 contains ground-water-level data from 143 observation wells and ground-water-quality data from 72 wells. Additional water data were collected at 85 sites not involved in the systematic data-collection program, and are published as miscellaneous measurements in Volume 1. The collection of water-resources data in North Carolina is a part of the National Water-Data System operated by the U.S. Geological Survey in cooperation with State, municipal, and Federal agencies.

  11. Method and apparatus for in-process sensing of manufacturing quality

    DOEpatents

    Hartman, Daniel A [Santa Fe, NM; Dave, Vivek R [Los Alamos, NM; Cola, Mark J [Santa Fe, NM; Carpenter, Robert W [Los Alamos, NM

    2005-02-22

    A method for determining the quality of an examined weld joint comprising the steps of providing acoustical data from the examined weld joint, and performing a neural network operation on the acoustical data to determine the quality of the examined weld joint produced by a friction weld process. The neural network may be trained by the steps of providing acoustical data and observable data from at least one test weld joint, and training the neural network based on the acoustical data and observable data to form a trained neural network so that the trained neural network is capable of determining the quality of an examined weld joint based on acoustical data from the examined weld joint. In addition, an apparatus having a housing, acoustical sensors mounted therein, and means for mounting the housing on a friction weld device so that the acoustical sensors do not contact the weld joint. The apparatus may sample the acoustical data necessary for the neural network to determine the quality of a weld joint.
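    The claimed training procedure (acoustic data plus observed weld outcomes) can be sketched with a small neural network; the features, synthetic signals, and network size below are illustrative assumptions, not the patented method's specifics.

      # Train a small classifier on features of acoustic signals, labeled by
      # observed weld quality. Signals are synthetic; features are toy choices.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)

      def acoustic_features(signal):
          """Toy feature vector: RMS energy, peak amplitude, zero-crossing rate."""
          zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2
          return [np.sqrt(np.mean(signal ** 2)), np.max(np.abs(signal)), zcr]

      # synthetic training data: acceptable welds (label 1) are quieter here
      signals = [rng.normal(0, 0.5 if label else 1.5, 2048) for label in (0, 1) * 50]
      labels = [0, 1] * 50  # observed outcome: 1 = acceptable weld
      X = np.array([acoustic_features(s) for s in signals])

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(X, labels)
      print(clf.predict([acoustic_features(rng.normal(0, 0.5, 2048))]))  # -> [1]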

  12. How hospitals view unit-level nurse turnover data collection: analysis of a hospital survey.

    PubMed

    Park, Shin Hye; Boyle, Diane K

    2015-02-01

    The objectives of this study were to examine the quality of unit-level nurse turnover data collection among the National Database of Nursing Quality Indicators hospitals and to identify the burdens of collecting such data. Tracking and managing nurse turnover at the unit level are critical for administrators who determine managerial strategies. Little is known about the quality of and burdens of unit-level turnover data collection. Surveys from 178 hospitals were analyzed descriptively. Most hospitals strongly agreed or agreed with the quality of unit-level turnover data collection. Hospitals identified the burdens of additional time and resources needed for unit-level turnover data collection and the difficulty of obtaining specific reasons for turnover. Collecting unit-level nurse turnover data can be important and useful for administrators to improve nurse retention, workforce stability, and quality of care. We suggest that the advantages of unit-level nurse turnover data and reports can overcome the identified burdens.

  13. Method and Apparatus for In-Process Sensing of Manufacturing Quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartman, D.A.; Dave, V.R.; Cola, M.J.

    2005-02-22

    A method for determining the quality of an examined weld joint comprising the steps of providing acoustical data from the examined weld joint, and performing a neural network operation on the acoustical data to determine the quality of the examined weld joint produced by a friction weld process. The neural network may be trained by the steps of providing acoustical data and observable data from at least one test weld joint, and training the neural network based on the acoustical data and observable data to form a trained neural network so that the trained neural network is capable of determining the quality of an examined weld joint based on acoustical data from the examined weld joint. In addition, an apparatus having a housing, acoustical sensors mounted therein, and means for mounting the housing on a friction weld device so that the acoustical sensors do not contact the weld joint. The apparatus may sample the acoustical data necessary for the neural network to determine the quality of a weld joint.

  14. Hydrologic data for Block Island, Rhode Island

    USGS Publications Warehouse

    Burns, Emily

    1993-01-01

    This report was compiled as part of a study to assess the hydrogeology and the quality and quantity of fresh ground water on Block Island, Rhode Island. Hydrologic data were collected on Block Island during 1988-91. The data are presented in illustrations and tables. Data collected include precipitation, surface-water, ground-water, lithologic, and well-construction and discharge information. Precipitation data include total monthly precipitation values from 11 rain gages and water-quality analyses of 14 precipitation samples from one station. Surface-water data include water-level measurements at 12 ponds, water-quality data for five ponds, and field specific-conductance measurements at 56 surface-water sites (streams, ponds, and springs). Ground-water data include water-level measurements at 159 wells, water-quality data at 150 wells, and field specific-conductance data at 52 wells. Lithologic logs for 375 wells and test borings, and construction and location data for 570 wells, springs, and test borings are included. In addition, the data set contains data on water quality of water samples, collected by the Rhode Island Department of Health during 1976-91, from Fresh and Sands Ponds and from wells at the Block Island Water Company well field north of Sands Pond.

  15. Developing Cyberinfrastructure Tools and Services for Metadata Quality Evaluation

    NASA Astrophysics Data System (ADS)

    Mecum, B.; Gordon, S.; Habermann, T.; Jones, M. B.; Leinfelder, B.; Powers, L. A.; Slaughter, P.

    2016-12-01

    Metadata and data quality are at the core of reusable and reproducible science. While great progress has been made over the years, much of the metadata collected only addresses data discovery, covering concepts such as titles and keywords. Improving metadata beyond the discoverability plateau means documenting detailed concepts within the data such as sampling protocols, instrumentation used, and variables measured. Given that metadata commonly do not describe their data at this level, how might we improve the state of things? Giving scientists and data managers easy-to-use tools that evaluate metadata quality against community-driven recommendations is the key to producing high-quality metadata. To achieve this goal, we created a set of cyberinfrastructure tools and services that integrate with existing metadata and data curation workflows and that can be used to improve metadata and data quality across the sciences. These tools work across metadata dialects (e.g., ISO19115, FGDC, EML, etc.) and can be used to assess aspects of quality beyond what is internal to the metadata, such as the congruence between the metadata and the data they describe. The system makes use of a user-friendly mechanism for expressing a suite of checks as code in popular data science programming languages such as Python and R. This reduces the burden on scientists and data managers to learn yet another language. We demonstrated these services and tools in three ways. First, we evaluated a large corpus of datasets in the DataONE federation of data repositories against a metadata recommendation modeled after existing recommendations such as the LTER best practices and the Attribute Convention for Dataset Discovery (ACDD). Second, we showed how this service can be used to display metadata and data quality information to data producers during the data submission and metadata creation process, and to data consumers through data catalog search and access tools. Third, we showed how the centrally deployed DataONE quality service can achieve major efficiency gains by allowing member repositories to customize and use recommendations that fit their specific needs without having to create de novo infrastructure at their site.
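
    The abstract mentions expressing checks as code in Python or R. A minimal sketch of what one such "check as code" could look like is given below; the check name, return structure, and metadata fields are assumptions for illustration, not the actual DataONE quality service API:

```python
# A minimal, hypothetical "check as code" in the style the abstract describes.
def check_title_length(metadata, minimum=8):
    """FAILURE if the title is missing or too short to aid discovery."""
    title = metadata.get("title", "")
    words = len(title.split())
    if words >= minimum:
        return {"status": "SUCCESS", "output": f"Title has {words} words"}
    return {"status": "FAILURE", "output": "Title missing or shorter than recommended"}

# Illustrative metadata record, not a real DataONE document.
record = {"title": "Soil moisture and temperature, Alaska, 2012-2014",
          "keywords": ["soil moisture", "permafrost"]}
print(check_title_length(record))
```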

  16. CIDR

    Science.gov Websites

    CIDR is dedicated to producing the highest quality data for our investigators. These cumulative quality control statistics are based on data from 419 released CIDR Program

  17. Generation and use of observational data patterns in the evaluation of data quality for AmeriFlux and FLUXNET

    NASA Astrophysics Data System (ADS)

    Pastorello, G.; Agarwal, D.; Poindexter, C.; Papale, D.; Trotta, C.; Ribeca, A.; Canfora, E.; Faybishenko, B.; Gunter, D.; Chu, H.

    2015-12-01

    The flux-measuring sites that are part of AmeriFlux are operated and maintained in a fairly independent fashion, both in terms of scientific goals and operational practices. This is also the case for most sites from other networks in FLUXNET. This independence leads to a degree of heterogeneity in the data sets collected at the sites, which is also reflected in data quality levels. The generation of derived data products and data synthesis efforts, two of the main goals of these networks, are directly affected by this heterogeneity in data quality. In a collaborative effort between AmeriFlux and ICOS, a series of quality checks is being conducted on the data sets before any network-level data processing and product generation take place. From these checks, a set of common data issues was identified, and these issues are being cataloged and classified into data quality patterns. These patterns are now being used as a basis for automating certain data quality checks, speeding up the process of applying the checks and evaluating the data. Currently, most data checks are performed individually on each data set, requiring visual inspection and input from a data curator. This manual process makes it difficult to scale the quality checks, creating a bottleneck for data processing. One goal of the automated checks is to free up data curators' time so they can focus on new or less common issues. As new issues are identified, they can also be cataloged and classified, extending the coverage of existing patterns or generating new ones, which helps both improve existing automated checks and create new ones. This approach is helping make data quality evaluation faster, more systematic, and reproducible. Furthermore, these patterns are also helping document common causes of, and solutions for, data problems. This can help tower teams diagnose problems in data collection and processing, and also correct historical data sets. In this presentation, using AmeriFlux fluxes and micrometeorological data, we discuss our approach to creating observational data patterns and how we are using them to implement new automated checks. We also detail examples of these observational data patterns, illustrating how they are being used.
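
    To make the idea of automating cataloged quality patterns concrete, the following hypothetical Python sketch flags two common patterns, out-of-range values and constant runs from a stuck sensor; the thresholds and the variable are invented for illustration and are not the AmeriFlux/ICOS pipeline:

```python
import numpy as np

# Illustrative automation of two common "data quality patterns": physically
# implausible values and suspiciously constant runs (e.g., a stuck sensor).
def flag_out_of_range(values, lo, hi):
    """Boolean mask: True where a value is outside the plausible range."""
    return (values < lo) | (values > hi)

def flag_constant_runs(values, run_length=6):
    """Boolean mask: True where a value sits in a run of identical readings."""
    flags = np.zeros(len(values), dtype=bool)
    for i in range(len(values) - run_length + 1):
        window = values[i:i + run_length]
        if np.all(window == window[0]):
            flags[i:i + run_length] = True
    return flags

# Hypothetical half-hourly air temperature record (deg C).
air_temp = np.array([21.3, 21.5, 21.5, 21.5, 21.5, 21.5, 21.5, 60.0, 22.1])
print(flag_out_of_range(air_temp, -50.0, 55.0))   # flags the 60.0 spike
print(flag_constant_runs(air_temp))               # flags the stuck-sensor run
```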

  18. GeoViQua: quality-aware geospatial data discovery and evaluation

    NASA Astrophysics Data System (ADS)

    Bigagli, L.; Papeschi, F.; Mazzetti, P.; Nativi, S.

    2012-04-01

    GeoViQua (QUAlity aware VIsualization for the Global Earth Observation System of Systems) is a recently started FP7 project that aims to complement the Global Earth Observation System of Systems (GEOSS) with rigorous data quality specifications and quality-aware capabilities, in order to improve reliability in scientific studies and policy decision-making. GeoViQua's main scientific and technical objective is to enhance the GEOSS Common Infrastructure (GCI) by providing the user community with innovative quality-aware search and evaluation tools, which will be integrated in the GEO-Portal as well as made available to other end-user interfaces. To this end, GeoViQua will promote the extension of the current standard metadata for geographic information with accurate and expressive quality indicators, also contributing to the definition of a quality label (GEOLabel). GeoViQua's proposed solutions will be assessed in several pilot case studies covering the whole Earth Observation chain, from remote sensing acquisition to data processing, to applications in the main GEOSS Societal Benefit Areas. This work presents the preliminary results of GeoViQua Work Package 4 "Enhanced geo-search tools" (WP4), started in January 2012. Its major anticipated technical innovations are search and evaluation tools that communicate and exploit data quality information from the GCI. In particular, GeoViQua will investigate a graphical search interface featuring a coherent and meaningful aggregation of statistics and metadata summaries (e.g., in the form of tables and charts), thus enabling end users to leverage quality constraints for data discovery and evaluation. Preparatory work on WP4 requirements indicated that users need the "best" data for their purpose, which implies a high degree of subjectivity in judgment. This suggests that the GeoViQua system should exploit a combination of provider-generated metadata (objective indicators such as summary statistics), system-generated metadata (contextual/tracking information such as provenance of data and metadata), and user-generated metadata (informal user comments, usage information, ratings, etc.). Moreover, metadata should include sufficiently complete access information to allow rich data visualization and propagation. The following main enabling components are currently identified within WP4:
    - Quality-aware access services, e.g., a quality-aware extension of the OGC Sensor Observation Service (SOS-Q), to support quality constraints for sensor data publishing and access;
    - Quality-aware discovery services, namely a quality-aware extension of the OGC Catalog Service for the Web (CSW-Q), to cope with quality-constrained search;
    - Quality-augmentation broker (GeoViQua Broker), to support the linking and combination of the existing GCI metadata with the GeoViQua- and user-generated metadata required to support users in selecting the "best" data for their intended use.
    We are currently developing prototypes of the above quality-enabled geo-search components, which will be assessed in a sensor-based pilot case study in the coming months. In particular, the GeoViQua Broker will be integrated with the EuroGEOSS Broker to implement CSW-Q and federate (either via distribution or harvesting schemes) quality-aware data sources. GeoViQua will constitute a valuable test-bed for advancing the current best practices and standards in geospatial quality representation and exploitation.
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 265178.

  19. A service for the application of data quality information to NASA earth science satellite records

    NASA Astrophysics Data System (ADS)

    Armstrong, E. M.; Xing, Z.; Fry, C.; Khalsa, S. J. S.; Huang, T.; Chen, G.; Chin, T. M.; Alarcon, C.

    2016-12-01

    A recurring demand in working with satellite-based earth science data records is the need to apply data quality information. Such quality information is often contained within the data files as an array of "flags", but can also be represented by more complex quality descriptions such as combinations of bit flags, or even other ancillary variables that can be applied as thresholds to the geophysical variable of interest. For example, with Level 2 granules from the Group for High Resolution Sea Surface Temperature (GHRSST) project, up to 6 independent variables could be used to screen the sea surface temperature measurements on a pixel-by-pixel basis. Quality screening of Level 3 data from the Soil Moisture Active Passive (SMAP) instrument can become even more complex, involving 161 unique bit states or conditions a user can screen for. The application of quality information is often a laborious process for the user until they understand the implications of all the flags and bit conditions, and requires iterative approaches using custom software. The Virtual Quality Screening Service, a NASA ACCESS project, is addressing these issues and concerns. The project has developed an infrastructure to expose, apply, and extract quality screening information, building off known and proven NASA components for data extraction and subset-by-value, data discovery, and exposure to the user of granule-based quality information. Further sharing of results through well-defined URLs and web service specifications has also been implemented. The presentation will focus on an overall description of the technologies and informatics principles employed by the project. Examples of implementations of the end-to-end web service for quality screening with GHRSST and SMAP granules will be demonstrated.
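
    As a concrete illustration of bit-flag screening of the kind described (the bit positions and their meanings below are invented for the sketch; real GHRSST and SMAP products define their own flag tables), pixels can be masked with bitwise operations:

```python
import numpy as np

# Hypothetical bit-flag definitions; real products publish their own tables.
RETRIEVAL_SUSPECT = 1 << 0   # bit 0: retrieval quality suspect
COASTAL_PROXIMITY = 1 << 2   # bit 2: pixel near a coastline

# Per-pixel quality flags and the geophysical variable to screen.
quality_flags = np.array([0b000, 0b001, 0b100, 0b101], dtype=np.uint16)
soil_moisture = np.array([0.21, 0.35, 0.18, 0.40])

# Keep only pixels where neither screened bit is set.
mask = (quality_flags & (RETRIEVAL_SUSPECT | COASTAL_PROXIMITY)) == 0
print(soil_moisture[mask])   # -> [0.21]
```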

  20. Harnessing the power of enhanced data for healthcare quality improvement: lessons from a Minnesota Hospital Association Pilot Project.

    PubMed

    Pine, Michael; Sonneborn, Mark; Schindler, Joe; Stanek, Michael; Maeda, Jared Lane; Hanlon, Carrie

    2012-01-01

    The imperative to achieve quality improvement and cost-containment goals is driving healthcare organizations to make better use of existing health information. One strategy, the construction of hybrid data sets combining clinical and administrative data, has strong potential to improve the cost-effectiveness of hospital quality reporting processes, improve the accuracy of quality measures and rankings, and strengthen data systems. Through a two-year contract with the Agency for Healthcare Research and Quality, the Minnesota Hospital Association launched a pilot project in 2007 to link hospital clinical information to administrative data. Despite some initial challenges, this project was successful. Results showed that the use of hybrid data allowed for more accurate comparisons of risk-adjusted mortality and risk-adjusted complications across Minnesota hospitals. These increases in accuracy represent an important step toward targeting quality improvement efforts in Minnesota and provide important lessons that are being leveraged through ongoing projects to construct additional enhanced data sets. We explore the implementation challenges experienced during the Minnesota Pilot Project and their implications for hospitals pursuing similar data-enhancement projects. We also highlight the key lessons learned from the pilot project's success.

  1. Evaluating the Reliability and Impact of a Quality Assurance System for E-Learning Courseware

    ERIC Educational Resources Information Center

    Sung, Yao-Ting; Chang, Kuo-En; Yu, Wen-Cheng

    2011-01-01

    Assuring e-learning quality is of interest worldwide. This paper introduces the methods of e-learning courseware quality assurance (a quality certification system) adopted by the eLQSC (e-Learning Quality Service Centre) in Taiwan. A sequential/explanatory design with a mixed methodology was used to gather research data and conduct data analyses.…

  2. The "I" in QRIS Survey: Collecting Data on Quality Improvement Activities for Early Childhood Education Programs. REL 2017-221

    ERIC Educational Resources Information Center

    Faria, Ann-Marie; Hawkinson, Laura; Metzger, Ivan; Bouacha, Nora; Cantave, Michelle

    2017-01-01

    A quality rating and improvement system (QRIS) is a voluntary state assessment system that uses multidimensional data on early childhood education programs to rate program quality, support quality improvement efforts, and provide information to families about the quality of available early childhood education programs. QRISs have two components:…

  3. Managing data quality in an existing medical data warehouse using business intelligence technologies.

    PubMed

    Eaton, Scott; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti

    2008-11-06

    The Ohio State University Medical Center (OSUMC) Information Warehouse (IW) is a comprehensive data warehousing facility that provides data integration, management, mining, training, and development services to a diversity of customers across the clinical, education, and research sectors of the OSUMC. Providing accurate and complete data is a must for these purposes. In order to monitor the data quality of targeted data sets, an online scorecard has been developed to allow visualization of the critical measures of data quality in the Information Warehouse.

  4. Seismic Data Archive Quality Assurance -- Analytics Adding Value at Scale

    NASA Astrophysics Data System (ADS)

    Casey, R. E.; Ahern, T. K.; Sharer, G.; Templeton, M. E.; Weertman, B.; Keyson, L.

    2015-12-01

    Since the emergence of real-time delivery of seismic data over the last two decades, solutions for near-real-time quality analysis and station monitoring have been developed by data producers and data stewards. This has allowed for a nearly constant awareness of the quality of the incoming data and the general health of the instrumentation around the time of data capture. Modern quality assurance systems are evolving to provide ready access to a large variety of metrics, a rich and self-correcting history of measurements, and more importantly the ability to access these quality measurements en masse through a programmatic interface. The MUSTANG project at the IRIS Data Management Center is working to achieve 'total archival data quality', where a large number of standardized metrics, some computationally expensive, are generated and stored for all data from decades past to the near present. To perform this on a 300 TB archive of compressed time series requires considerable resources in network I/O, disk storage, and CPU capacity to achieve scalability, not to mention the technical expertise to develop and maintain it. In addition, staff scientists are necessary to develop the system metrics and employ them to produce comprehensive and timely data quality reports to assist seismic network operators in maintaining their instrumentation. All of these metrics must be available to the scientist 24/7. We will present an overview of the MUSTANG architecture, including the development of its standardized metrics code in R. We will show examples of the metrics values that we make publicly available to scientists and educators and show how we are sharing the algorithms used. We will also discuss the development of a capability that will enable scientific researchers to specify data quality constraints on their requests for data, providing only the data that is best suited to their area of study.
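
    MUSTANG implements its metrics in R; purely to illustrate the shape of one simple archive-wide metric, the hypothetical Python sketch below computes percent data availability for a channel-day where gaps are recorded as NaN (names and data are assumptions, not MUSTANG code):

```python
import numpy as np

# Illustrative availability metric: fraction of expected samples present.
def percent_availability(samples):
    """Percentage of non-missing samples, where gaps are stored as NaN."""
    present = np.count_nonzero(~np.isnan(samples))
    return 100.0 * present / len(samples)

day = np.full(86400, 0.0)        # hypothetical 1 Hz channel, one day
day[40000:45000] = np.nan        # a simulated telemetry gap
print(f"{percent_availability(day):.2f}% available")   # -> 94.21% available
```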

  5. EMPIRICAL RISK RELATIONSHIPS BETWEEN INVERTEBRATES, HABITAT AND WATER QUALITY IN MAIA DATA SETS

    EPA Science Inventory

    A technique for developing a non-weighted risk index, originally developed for use with Ohio fish assemblage data, was applied to invertebrate, habitat and water quality data collected from Mid-Atlantic streams of the U.S. during 1997-98. Multiple habitat and water quality varia...

  6. Communicating Instantaneous Air Quality Data: Pilot Project Feed Back

    EPA Pesticide Factsheets

    EPA is launching a pilot project to test a new tool for making instantaneous outdoor air quality data useful for the public. The new “sensor scale” is designed to be used with air quality sensors that provide data in short time increments – often as little

  7. Solar Resource & Meteorological Assessment Project (SOLRMAP): Rotating Shadowband Radiometer (RSR); Kalaeloa Oahu, Hawaii (Data)

    DOE Data Explorer

    Wilcox, S.; Andreas, A.

    2010-03-16

    The U.S. Department of Energy's National Renewable Energy Laboratory collaborates with the solar industry to establish high quality solar and meteorological measurements. This Solar Resource and Meteorological Assessment Project (SOLRMAP) provides high quality measurements to support deployment of power projects in the United States. The no-funds-exchanged collaboration brings NREL solar resource assessment expertise together with industry needs for measurements. The end result is high quality data sets to support the financing, design, and monitoring of large scale solar power projects for industry in addition to research-quality data for NREL model development. NREL provides consultation for instrumentation and station deployment, along with instrument calibrations, data acquisition, quality assessment, data distribution, and summary reports. Industry participants provide equipment, infrastructure, and station maintenance.

  8. Solar Resource & Meteorological Assessment Project (SOLRMAP): Rotating Shadowband Radiometer (RSR); Los Angeles, California (Data)

    DOE Data Explorer

    Stoffel, T.; Andreas, A.

    2010-04-26

    The U.S. Department of Energy's National Renewable Energy Laboratory collaborates with the solar industry to establish high quality solar and meteorological measurements. This Solar Resource and Meteorological Assessment Project (SOLRMAP) provides high quality measurements to support deployment of power projects in the United States. The no-funds-exchanged collaboration brings NREL solar resource assessment expertise together with industry needs for measurements. The end result is high quality data sets to support the financing, design, and monitoring of large scale solar power projects for industry in addition to research-quality data for NREL model development. NREL provides consultation for instrumentation and station deployment, along with instrument calibrations, data acquisition, quality assessment, data distribution, and summary reports. Industry participants provide equipment, infrastructure, and station maintenance.

  9. Solar Resource & Meteorological Assessment Project (SOLRMAP): Rotating Shadowband Radiometer (RSR); Cedar City, Utah (Data)

    DOE Data Explorer

    Wilcox, S.; Andreas, A.

    2010-07-13

    The U.S. Department of Energy's National Renewable Energy Laboratory collaborates with the solar industry to establish high quality solar and meteorological measurements. This Solar Resource and Meteorological Assessment Project (SOLRMAP) provides high quality measurements to support deployment of power projects in the United States. The no-funds-exchanged collaboration brings NREL solar resource assessment expertise together with industry needs for measurements. The end result is high quality data sets to support the financing, design, and monitoring of large scale solar power projects for industry in addition to research-quality data for NREL model development. NREL provides consultation for instrumentation and station deployment, along with instrument calibrations, data acquisition, quality assessment, data distribution, and summary reports. Industry participants provide equipment, infrastructure, and station maintenance.

  10. Solar Resource and Meteorological Assessment Project (SOLRMAP): Rotating Shadowband Radiometer (RSR); Escalante Tri-State - Prewitt, New Mexico (Data)

    DOE Data Explorer

    Wilcox, S.; Andreas, A.

    2012-11-03

    The U.S. Department of Energy's National Renewable Energy Laboratory collaborates with the solar industry to establish high quality solar and meteorological measurements. This Solar Resource and Meteorological Assessment Project (SOLRMAP) provides high quality measurements to support deployment of power projects in the United States. The no-funds-exchanged collaboration brings NREL solar resource assessment expertise together with industry needs for measurements. The end result is high quality data sets to support the financing, design, and monitoring of large scale solar power projects for industry in addition to research-quality data for NREL model development. NREL provides consultation for instrumentation and station deployment, along with instrument calibrations, data acquisition, quality assessment, data distribution, and summary reports. Industry participants provide equipment, infrastructure, and station maintenance.

  11. Solar Resource & Meteorological Assessment Project (SOLRMAP): Sun Spot Two; Swink, Colorado (Data)

    DOE Data Explorer

    Wilcox, S.; Andreas, A.

    2010-11-10

    The U.S. Department of Energy's National Renewable Energy Laboratory collaborates with the solar industry to establish high quality solar and meteorological measurements. This Solar Resource and Meteorological Assessment Project (SOLRMAP) provides high quality measurements to support deployment of power projects in the United States. The no-funds-exchanged collaboration brings NREL solar resource assessment expertise together with industry needs for measurements. The end result is high quality data sets to support the financing, design, and monitoring of large scale solar power projects for industry in addition to research-quality data for NREL model development. NREL provides consultation for instrumentation and station deployment, along with instrument calibrations, data acquisition, quality assessment, data distribution, and summary reports. Industry participants provide equipment, infrastructure, and station maintenance.

  12. Solar Resource & Meteorological Assessment Project (SOLRMAP): Rotating Shadowband Radiometer (RSR); Milford, Utah (Data)

    DOE Data Explorer

    Wilcox, S.; Andreas, A.

    2010-07-14

    The U.S. Department of Energy's National Renewable Energy Laboratory collaborates with the solar industry to establish high quality solar and meteorological measurements. This Solar Resource and Meteorological Assessment Project (SOLRMAP) provides high quality measurements to support deployment of power projects in the United States. The no-funds-exchanged collaboration brings NREL solar resource assessment expertise together with industry needs for measurements. The end result is high quality data sets to support the financing, design, and monitoring of large scale solar power projects for industry in addition to research-quality data for NREL model development. NREL provides consultation for instrumentation and station deployment, along with instrument calibrations, data acquisition, quality assessment, data distribution, and summary reports. Industry participants provide equipment, infrastructure, and station maintenance.

  13. Solar Resource & Meteorological Assessment Project (SOLRMAP): Rotating Shadowband Radiometer (RSR); La Ola Lanai, Hawaii (Data)

    DOE Data Explorer

    Wilcox, S.; Andreas, A.

    2009-07-22

    The U.S. Department of Energy's National Renewable Energy Laboratory collaborates with the solar industry to establish high quality solar and meteorological measurements. This Solar Resource and Meteorological Assessment Project (SOLRMAP) provides high quality measurements to support deployment of power projects in the United States. The no-funds-exchanged collaboration brings NREL solar resource assessment expertise together with industry needs for measurements. The end result is high quality data sets to support the financing, design, and monitoring of large scale solar power projects for industry in addition to research-quality data for NREL model development. NREL provides consultation for instrumentation and station deployment, along with instrument calibrations, data acquisition, quality assessment, data distribution, and summary reports. Industry participants provide equipment, infrastructure, and station maintenance.

  14. Solar Resource & Meteorological Assessment Project (SOLRMAP): Observed Atmospheric and Solar Information System (OASIS); Tucson, Arizona (Data)

    DOE Data Explorer

    Wilcox, S.; Andreas, A.

    2010-11-03

    The U.S. Department of Energy's National Renewable Energy Laboratory collaborates with the solar industry to establish high quality solar and meteorological measurements. This Solar Resource and Meteorological Assessment Project (SOLRMAP) provides high quality measurements to support deployment of power projects in the United States. The no-funds-exchanged collaboration brings NREL solar resource assessment expertise together with industry needs for measurements. The end result is high quality data sets to support the financing, design, and monitoring of large scale solar power projects for industry in addition to research-quality data for NREL model development. NREL provides consultation for instrumentation and station deployment, along with instrument calibrations, data acquisition, quality assessment, data distribution, and summary reports. Industry participants provide equipment, infrastructure, and station maintenance.

  15. Quality control and quality assurance plan for bridge channel-stability assessments in Massachusetts

    USGS Publications Warehouse

    Parker, Gene W.; Pinson, Harlow

    1993-01-01

    A quality control and quality assurance plan has been implemented as part of the Massachusetts bridge scour and channel-stability assessment program. This program is being conducted by the U.S. Geological Survey, Massachusetts-Rhode Island District, in cooperation with the Massachusetts Highway Department. Project personnel training, data-integrity verification, and new data-management technologies are being utilized in the channel-stability assessment process to improve current data-collection and management techniques. An automated data-collection procedure has been implemented to standardize channel-stability assessments on a regular basis within the State. An object-oriented data structure and new image management tools are used to produce a data base enabling management of multiple data object classes. Data will be reviewed by assessors and data base managers before being merged into a master bridge-scour data base, which includes automated data-verification routines.

  16. Improving Data Quality in Mass-Gatherings Health Research.

    PubMed

    Guy, Andrew; Prager, Ross; Turris, Sheila; Lund, Adam

    2017-06-01

    Mass gatherings attract large crowds and can strain the planning and health resources of the community, city, or nation hosting an event. Mass-Gatherings Health (MGH) is an evolving niche of prehospital care rooted in emergency medicine, emergency management, public health, and disaster medicine. To explore front-line issues related to data quality in the context of mass gatherings, the authors draw on five years of management experience with an online, mass-gathering event and patient registry, as well as clinical and operational experience amassed over several decades. Here the authors propose underlying human, environmental, and logistical factors that may contribute to poor data quality at mass gatherings, and make specific recommendations for improvement through pre-event planning, on-site actions, and post-event follow-up. The advancement of MGH research will rely on addressing factors that influence data quality and developing strategies to mitigate or enhance those factors. This is an exciting time for MGH research as higher order questions are beginning to be addressed; however, quality research must start from the ground up to ensure optimal primary data capture and quality. Guy A, Prager R, Turris S, Lund A. Improving data quality in mass-gatherings health research. Prehosp Disaster Med. 2017;32(3):329-332.

  17. Human Connectome Project Informatics: quality control, database services, and data visualization

    PubMed Central

    Marcus, Daniel S.; Harms, Michael P.; Snyder, Abraham Z.; Jenkinson, Mark; Wilson, J Anthony; Glasser, Matthew F.; Barch, Deanna M.; Archie, Kevin A.; Burgess, Gregory C.; Ramaratnam, Mohana; Hodge, Michael; Horton, William; Herrick, Rick; Olsen, Timothy; McKay, Michael; House, Matthew; Hileman, Michael; Reid, Erin; Harwell, John; Coalson, Timothy; Schindler, Jon; Elam, Jennifer S.; Curtiss, Sandra W.; Van Essen, David C.

    2013-01-01

    The Human Connectome Project (HCP) has developed protocols, standard operating and quality control procedures, and a suite of informatics tools to enable high throughput data collection, data sharing, automated data processing and analysis, and data mining and visualization. Quality control procedures include methods to maintain data collection consistency over time, to measure head motion, and to establish quantitative modality-specific overall quality assessments. Database services developed as customizations of the XNAT imaging informatics platform support both internal daily operations and open access data sharing. The Connectome Workbench visualization environment enables user interaction with HCP data and is increasingly integrated with the HCP's database services. Here we describe the current state of these procedures and tools and their application in the ongoing HCP study. PMID:23707591

  18. Enabling the Usability of Earth Science Data Products and Services by Evaluating, Describing, and Improving Data Quality throughout the Data Lifecycle

    NASA Astrophysics Data System (ADS)

    Downs, R. R.; Peng, G.; Wei, Y.; Ramapriyan, H.; Moroni, D. F.

    2015-12-01

    Earth science data products and services are being used by representatives of various science and social science disciplines, by planning and decision-making professionals, by educators and learners ranging from primary through graduate and informal education, and by the general public. The diversity of users and uses of Earth science data is gratifying and offers new challenges for enabling the usability of these data by audiences with various purposes and levels of expertise. Users and other stakeholders need capabilities to efficiently find, explore, select, and determine the applicability and suitability of data products and services to meet their objectives and information needs. Similarly, they need to be able to understand the limitations of Earth science data, which can be complex, especially when considering combined or simultaneous use of multiple data products and services. Quality control efforts of stakeholders, throughout the data lifecycle, can contribute to the usability of Earth science data to meet the needs of diverse users. Such stakeholders include study design teams, data producers, data managers and curators, archives, systems professionals, data distributors, end-users, intermediaries, sponsoring organizations, hosting institutions, and others. Opportunities for engaging stakeholders to review, describe, and improve the quality of Earth science data products and services throughout the data lifecycle are identified and discussed. Insight is shared from the development of guidelines for implementing the Group on Earth Observations (GEO) Data Management Principles, the recommendations from the Earth Science Data System Working Group (ESDSWG) on Data Quality, and the efforts of the Information Quality Cluster of the Federation of Earth Science Information Partners (ESIP). Examples and outcomes from quality control efforts of data facilities, such as scientific data centers, that contribute to the usability of Earth science data also are offered.

  19. Historical water-quality data from the Harlem River, New York

    USGS Publications Warehouse

    Fisher, Shawn C.

    2016-04-22

    Data specific to the Harlem River, New York, have been summarized and are presented in this report. The data illustrate improvements in the quality of water for the past 65 years and emphasize the importance of a continuous water-quality record for establishing trends in environmental conditions. Although there is a paucity of sediment-quality data, the New York City Department of Environmental Protection (NYCDEP) Bureau of Wastewater Treatment has maintained a water-quality monitoring network in the Harlem River (and throughout the harbor of New York City) to which 61 combined sewer outfalls discharge effluent. In cooperation with the NYCDEP, the U.S. Geological Survey evaluated water-quality data collected by the NYCDEP dating back to 1945, which indicate trends in water quality and reveal improvement following the 1972 passage of the Clean Water Act. These improvements are indicated by the steady increase in median dissolved oxygen concentrations and an overall decrease in fecal indicator bacteria concentrations starting in the late 1970s. Further, the magnitude of the highest fecal indicator bacteria concentrations (that is, the 90th percentile) in samples collected from the Harlem River have decreased significantly over the past four decades. Other parameters of water quality used to gauge the health of a water body include total suspended solids and nutrient (inorganic forms of nitrogen and phosphorus) concentrations—mean concentrations for these indicators have also decreased in the past decades. The limited sediment data available for one sample in the Harlem River indicate concentrations of copper, zinc, and lead are above sediment-quality thresholds set by the New York State Department of Environmental Conservation. However, more data are needed to better understand the changes in both sediment and water quality in the Harlem River, both as the tide cycles and during precipitation events. As a partner in the Urban Waters Federal Partnership, the U.S. Geological Survey has worked to address the chronic water-quality concerns of the Harlem River by compiling relevant data and studies, which is an important component for understanding and rectifying water-quality problems within a watershed.

  20. Analysis of ETMS Data Quality for Traffic Flow Management Decisions

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.; Sridhar, Banavar; Kim, Douglas

    2003-01-01

    The data needed for air traffic flow management decision support tools are provided by the Enhanced Traffic Management System (ETMS). These tools include both those in current use and those being developed for future deployment. Since the quality of decision support provided by all these tools will be influenced by the quality of the input ETMS data, an assessment of ETMS data quality is needed. Motivated by this need, ETMS data quality is examined in this paper in terms of the unavailability of flight plans, deviation from the filed flight plans, departure delays, altitude errors, and track data drops. Although many of these data quality issues are not new, little is known about their extent. A goal of this paper is to document the magnitude of data quality issues, supported by numerical analysis of ETMS data. Guided by this goal, ETMS data for a 24-hour period were processed to determine the number of aircraft with missing flight plan messages at any given instant of time. Results are presented for aircraft above 18,000 feet altitude and also at all altitudes. Since deviation from the filed flight plan is also a major cause of trajectory-modeling errors, statistics of deviations are presented. Errors in proposed departure times and ETMS-generated vertical profiles are also shown. A method for conditioning the vertical profiles to improve demand prediction accuracy is described. Graphs of actual sector counts obtained using these vertical profiles are compared with those obtained using the Host data for sectors in the Fort Worth Center to demonstrate the benefit of preprocessing. Finally, results are presented to quantify the extent of data drops. A method for propagating track positions during ETMS data drops is also described.

  1. Cumulative uncertainty in measured streamflow and water quality data for small watersheds

    USGS Publications Warehouse

    Harmel, R.D.; Cooper, R.J.; Slade, R.M.; Haney, R.L.; Arnold, J.G.

    2006-01-01

    The scientific community has not established an adequate understanding of the uncertainty inherent in measured water quality data, which is introduced by four procedural categories: streamflow measurement, sample collection, sample preservation/storage, and laboratory analysis. Although previous research has produced valuable information on relative differences in procedures within these categories, little information is available that compares the procedural categories or presents the cumulative uncertainty in resulting water quality data. As a result, quality control emphasis is often misdirected, and data uncertainty is typically either ignored or accounted for with an arbitrary margin of safety. Faced with the need for scientifically defensible estimates of data uncertainty to support water resource management, the objectives of this research were to: (1) compile selected published information on uncertainty related to measured streamflow and water quality data for small watersheds, (2) use a root mean square error propagation method to compare the uncertainty introduced by each procedural category, and (3) use the error propagation method to determine the cumulative probable uncertainty in measured streamflow, sediment, and nutrient data. Best case, typical, and worst case "data quality" scenarios were examined. Averaged across all constituents, the calculated cumulative probable uncertainty (±%) contributed under typical scenarios ranged from 6% to 19% for streamflow measurement, from 4% to 48% for sample collection, from 2% to 16% for sample preservation/storage, and from 5% to 21% for laboratory analysis. Under typical conditions, errors in storm loads ranged from 8% to 104% for dissolved nutrients, from 8% to 110% for total N and P, and from 7% to 53% for TSS. Results indicated that uncertainty can increase substantially under poor measurement conditions and limited quality control effort. This research provides introductory scientific estimates of uncertainty in measured water quality data. The results and procedures presented should also assist modelers in quantifying the "quality" of calibration and evaluation data sets, determining model accuracy goals, and evaluating model performance.
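
    The root mean square error propagation method the authors apply combines independent procedural uncertainties as the square root of the sum of their squares. Below is a worked sketch with illustrative percentages chosen within the typical ranges reported above, not the paper's exact values:

```python
from math import sqrt

# Root mean square error propagation: independent procedural uncertainties
# combine as the square root of the sum of squares. The percentages here are
# illustrative placeholders within the typical ranges quoted in the abstract.
streamflow, sampling, storage, lab = 10.0, 20.0, 5.0, 10.0   # each in +/- %

cumulative = sqrt(streamflow**2 + sampling**2 + storage**2 + lab**2)
print(f"cumulative probable uncertainty: +/-{cumulative:.1f}%")  # -> +/-25.0%
```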

  2. QC-ART: A tool for real-time quality control assessment of mass spectrometry-based proteomics data.

    PubMed

    Stanfill, Bryan A; Nakayasu, Ernesto S; Bramer, Lisa M; Thompson, Allison M; Ansong, Charles K; Clauss, Therese; Gritsenko, Marina A; Monroe, Matthew E; Moore, Ronald J; Orton, Daniel J; Piehowski, Paul D; Schepmoes, Athena A; Smith, Richard D; Webb-Robertson, Bobbie-Jo; Metz, Thomas O

    2018-04-17

    Liquid chromatography-mass spectrometry (LC-MS)-based proteomics studies of large sample cohorts can easily require from months to years to complete. Acquiring consistent, high-quality data in such large-scale studies is challenging because of normal variations in instrumentation performance over time, as well as artifacts introduced by the samples themselves, such as those due to collection, storage and processing. Existing quality control methods for proteomics data primarily focus on post-hoc analysis to remove low-quality data that would degrade downstream statistics; they are not designed to evaluate the data in near real-time, which would allow for interventions as soon as deviations in data quality are detected. In addition to flagging analyses that demonstrate outlier behavior, evaluating how the data structure changes over time can aid in understanding typical instrument performance or identify issues such as a degradation in data quality due to the need for instrument cleaning and/or re-calibration. To address this gap for proteomics, we developed Quality Control Analysis in Real-Time (QC-ART), a tool for evaluating data as they are acquired in order to dynamically flag potential issues with instrument performance or sample quality. QC-ART has similar accuracy to standard post-hoc analysis methods, with the additional benefit of real-time analysis. We demonstrate the utility and performance of QC-ART in identifying deviations in data quality due to both instrument and sample issues in near real-time for LC-MS-based plasma proteomics analyses of a sample subset of The Environmental Determinants of Diabetes in the Young cohort. We also present a case where QC-ART facilitated the identification of oxidative modifications, which are often underappreciated in proteomic experiments. Published under license by The American Society for Biochemistry and Molecular Biology, Inc.
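
    QC-ART's actual statistical machinery is not reproduced here; the sketch below only illustrates the general idea of near-real-time flagging, comparing each incoming measurement against a rolling robust baseline (the function names, data, and thresholds are assumptions):

```python
import numpy as np

# Illustrative real-time flagging in the spirit of QC-ART (not its algorithm):
# flag a new measurement if it deviates from a rolling baseline by more than
# k robust standard deviations (median absolute deviation scaled to sigma).
def flag_incoming(history, new_value, k=3.0):
    median = np.median(history)
    mad = np.median(np.abs(history - median)) * 1.4826  # robust sigma estimate
    return abs(new_value - median) > k * mad

baseline = np.array([100.2, 99.8, 101.0, 100.5, 99.9, 100.1])
print(flag_incoming(baseline, 100.4))   # False: consistent with baseline
print(flag_incoming(baseline, 112.7))   # True: flag for intervention
```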

  3. Quantitative and qualitative verification of data quality in the childbirth registers of two rural district hospitals in Western Kenya.

    PubMed

    Chiba, Yoko; Oguttu, Monica A; Nakayama, Takeo

    2012-06-01

    Objective: to verify the data quality of childbirth registers and explore factors that influence quality at two rural district hospitals in Western Kenya. Design: a retrospective comparative case study of the data quality of the 2006 childbirth registers, using quantitative and qualitative methods. Setting: Siaya and Bondo District Hospitals. Methods: after confirming the physical condition and availability of the childbirth registers, the total number of births; the number of complete/incomplete data; and the number of complete data that were illegible, incorrectly coded, inappropriate and unrecognised were verified quantitatively to evaluate accuracy and completeness. Data categories and instructions were examined qualitatively to assess the relevance, completeness and accuracy of the data. Semi-structured interviews were conducted with key informants to capture their views and the factors that influence data quality. Findings: the childbirth registers used by the two hospitals were not developed by the Ministry of Health, and their supply to Bondo was interrupted. Of the 30 data categories in the registers, five for Siaya and 23 for Bondo were more than 20% incomplete. Data for the number of antenatal consultations and the use of human immunodeficiency virus drugs were at least 50% incomplete for both hospitals. The percentage of illegible, incorrectly coded and inappropriate data was relatively low, and only the place of residence had unrecognised data. Data categories in the registers did not correspond well with those of the monthly reports, and inappropriate instructions suggested hidden inaccuracy among apparently valid data. Organisational impediments of the health information system in general and in the perinatal and intrapartum contexts were identified. Conclusions: the data quality of the childbirth registers was unsatisfactory. Influential factors were primarily organisational and technical, which may have had an adverse effect on midwives' record keeping behaviour. Implications for practice: the data quality of the registers can be improved by re-examining technical challenges and organisational impediments at different levels. Midwives' awareness of data quality needs to be increased by sharing the purpose of the childbirth registers. Strong political commitment is also indispensable for putting these findings into action. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. 40 CFR 63.11221 - Is there a minimum amount of monitoring data I must obtain?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-control periods, and required monitoring system quality assurance or quality control activities including... monitoring system quality assurance or quality control activities in calculations used to report emissions or... monitoring data I must obtain? 63.11221 Section 63.11221 Protection of Environment ENVIRONMENTAL PROTECTION...

  5. 40 CFR 63.11221 - Is there a minimum amount of monitoring data I must obtain?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-control periods, and required monitoring system quality assurance or quality control activities including... monitoring system quality assurance or quality control activities in calculations used to report emissions or... monitoring data I must obtain? 63.11221 Section 63.11221 Protection of Environment ENVIRONMENTAL PROTECTION...

  6. Multi-Site Quality Assurance Project Plan for Wisconsin Public Service Corporation, Peoples Gas Light and Coke Company, and North Shore Gas

    EPA Pesticide Factsheets

    This Multi-Site QAPP presents the organization, data quality objectives (DQOs), a set of anticipated activities, sample analysis, data handling and specific Quality Assurance/Quality Control (QA/QC) procedures associated with Studies done in EPA Region 5

  7. Quality control algorithms for rainfall measurements

    NASA Astrophysics Data System (ADS)

    Golz, Claudia; Einfalt, Thomas; Gabella, Marco; Germann, Urs

    2005-09-01

    One of the basic requirements for a scientific use of rain data from raingauges, ground and space radars is data quality control. Rain data could be used more intensively in many fields of activity (meteorology, hydrology, etc.) if the achievable data quality could be improved. This depends on the available data quality delivered by the measuring devices and on the data quality enhancement procedures. To get an overview of the existing algorithms, a literature review and literature pool have been produced. The diverse algorithms have been evaluated against VOLTAIRE objectives and sorted into different groups. To test the chosen algorithms, an algorithm pool has been established, where the software is collected. A large part of the work presented here is implemented in the scope of the EU project VOLTAIRE (Validation of multisensors precipitation fields and numerical modeling in Mediterranean test sites).

  8. Assessing Metadata Quality of a Federally Sponsored Health Data Repository.

    PubMed

    Marc, David T; Beattie, James; Herasevich, Vitaly; Gatewood, Laël; Zhang, Rui

    2016-01-01

    The U.S. Federal Government developed HealthData.gov to disseminate healthcare datasets to the public. Metadata are provided for each dataset and are the sole source of information for finding and retrieving the data. This study employed automated quality assessments of the HealthData.gov metadata published from 2012 to 2014 to measure completeness, accuracy, and consistency in applying standards. The results demonstrated that metadata published in earlier years had lower completeness, accuracy, and consistency. Also, metadata that underwent modifications following their original creation were of higher quality. HealthData.gov did not uniformly apply the Dublin Core Metadata Initiative standard, which is a widely accepted metadata standard. These findings suggested that the HealthData.gov metadata suffered from quality issues, particularly related to information that was not frequently updated. The results supported the need for policies to standardize metadata and contributed to the development of automated measures of metadata quality.
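
    A hypothetical sketch of the kind of automated completeness check described is shown below; the required fields follow common Dublin Core element names, but the study's exact rubric is an assumption:

```python
# Illustrative completeness scoring for a metadata record. The field list
# echoes common Dublin Core element names; the actual rubric used in the
# study is not reproduced here.
REQUIRED = ["title", "description", "publisher", "modified", "license"]

def completeness(record):
    """Return (fraction of required fields filled, list of missing fields)."""
    filled = [f for f in REQUIRED if record.get(f)]
    return len(filled) / len(REQUIRED), [f for f in REQUIRED if f not in filled]

# Hypothetical record with empty and null fields counted as missing.
record = {"title": "Hospital Compare", "description": "Quality of care data",
          "publisher": "CMS", "modified": "", "license": None}
score, missing = completeness(record)
print(f"completeness: {score:.0%}; missing: {missing}")  # -> 60%; modified, license
```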

  9. Assessing Metadata Quality of a Federally Sponsored Health Data Repository

    PubMed Central

    Marc, David T.; Beattie, James; Herasevich, Vitaly; Gatewood, Laël; Zhang, Rui

    2016-01-01

    The U.S. Federal Government developed HealthData.gov to disseminate healthcare datasets to the public. Metadata are provided for each dataset and are the sole source of information for finding and retrieving the data. This study employed automated quality assessments of the HealthData.gov metadata published from 2012 to 2014 to measure completeness, accuracy, and consistency in applying standards. The results demonstrated that metadata published in earlier years had lower completeness, accuracy, and consistency. Also, metadata that underwent modifications following their original creation were of higher quality. HealthData.gov did not uniformly apply the Dublin Core Metadata Initiative standard, which is a widely accepted metadata standard. These findings suggested that the HealthData.gov metadata suffered from quality issues, particularly related to information that was not frequently updated. The results supported the need for policies to standardize metadata and contributed to the development of automated measures of metadata quality. PMID:28269883

  10. The 3D Elevation Program: summary for Connecticut

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  11. The 3D Elevation Program: summary for Mississippi

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  12. The 3D Elevation Program: summary for Georgia

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  13. The 3D Elevation Program: summary for Iowa

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  14. The 3D Elevation Program: summary for Oklahoma

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment (NEEA; Dewberry, 2011) evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  15. The 3D Elevation Program: summary for Kansas

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  16. The 3D Elevation Program: summary for Nevada

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  17. The 3D Elevation Program: summary for Illinois

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  18. The 3D Elevation Program: summary for Colorado

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  19. The 3D Elevation Program: summary for Utah

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  20. The 3D Elevation Program: summary for Delaware

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  1. The 3D Elevation Program: summary for Massachusetts

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  2. The 3D Elevation Program: summary for West Virginia

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  3. The 3D Elevation Program: summary for South Carolina

    USGS Publications Warehouse

    Carswell, William

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  4. The 3D Elevation Program: summary for North Carolina

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment (NEEA; Dewberry, 2011) evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  5. The 3D Elevation Program: summary for South Dakota

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment (NEEA; Dewberry, 2011) evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The new 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  6. The 3D Elevation Program: Summary for New Jersey

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  7. The 3D Elevation Program: summary for Washington

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  8. The 3D Elevation Program: summary for New Mexico

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 (table 1) for the conterminous United States and quality level 5 ifsar data (table 1) for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  9. The 3D Elevation Program: summary for Wyoming

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  10. The 3D Elevation Program: summary for Arizona

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  11. The 3D Elevation Program: summary for New Hampshire

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  12. The 3D Elevation Program: summary for Pennsylvania

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  13. The 3D Elevation Program: summary for Arkansas

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  14. Data from selected U.S. Geological Survey National Stream Water-Quality Networks (WQN)

    USGS Publications Warehouse

    Alexander, Richard B.; Slack, J.R.; Ludtke, A.S.; Fitzgerald, K.K.; Schertz, T.L.; Briel, L.I.; Buttleman, K.P.

    1996-01-01

    This CD-ROM set contains data from two USGS national stream water-quality networks, the Hydrologic Benchmark Network (HBN) and the National Stream Quality Accounting Network (NASQAN), operated during the past 30 years. These networks were established to provide national and regional descriptions of stream water-quality conditions and trends, based on uniform monitoring of selected watersheds throughout the United States, and to improve our understanding of the effects of the natural environment and human activities on water quality. The HBN, consisting of 63 relatively small, minimally disturbed watersheds, provides data for investigating naturally induced changes in streamflow and water quality and the effects of airborne substances on water quality. NASQAN, consisting of 618 larger, more culturally influenced watersheds, provides information for tracking water-quality conditions in major U.S. rivers and streams.

  15. Benchmarking and audit of breast units improves quality of care

    PubMed Central

    van Dam, P.A.; Verkinderen, L.; Hauspy, J.; Vermeulen, P.; Dirix, L.; Huizing, M.; Altintas, S.; Papadimitriou, K.; Peeters, M.; Tjalma, W.

    2013-01-01

    Quality Indicators (QIs) are measures of health care quality that make use of readily available hospital inpatient administrative data. Assessment of quality of care can be performed on different levels: national, regional, on a hospital basis, or on an individual basis. It can be a mandatory or voluntary system. In all cases the development of an adequate database for data extraction, and feedback of the findings, is of paramount importance. In the present paper we performed a Medline search on "QIs and breast cancer" and "benchmarking and breast cancer care", and we have added some data from personal experience. The current data clearly show that the use of QIs for breast cancer care, regular internal and external audit of the performance of breast units, and benchmarking are effective means to improve quality of care. Adherence to guidelines improves markedly (particularly regarding adjuvant treatment), and emerging data show that this results in better outcomes. As quality assurance benefits patients, it will be a challenge for the medical and hospital community to develop affordable quality control systems that do not lead to excessive workload. PMID:24753926

  16. Methods and Sources of Data Used to Develop Selected Water-Quality Indicators for Streams and Ground Water for EPA's 2007 Report on the Environment: Science Report

    USGS Publications Warehouse

    Baker, Nancy T.; Wilson, John T.; Moran, Michael J.

    2008-01-01

    The U.S. Geological Survey (USGS) was one of the numerous governmental agencies, private organizations, and academic institutions that provided data and interpretations for the U.S. Environmental Protection Agency's (USEPA) 2007 Report on the Environment: Science Report. This report documents the sources of data and methods used to develop selected water-quality indicators for the 2007 edition of the report compiled by USEPA. Stream and ground-water-quality data collected nationally in a consistent manner as part of the USGS's National Water-Quality Assessment Program (NAWQA) were provided for several water-quality indicators, including Nitrogen and Phosphorus in Streams in Agricultural Watersheds; Pesticides in Streams in Agricultural Watersheds; and Nitrate and Pesticides in Shallow Ground Water in Agricultural Watersheds. In addition, the USGS provided nitrate (nitrate plus nitrite) and phosphorus riverine load estimates, calculated from water-quality and streamflow data collected as part of its National Stream Water Quality Accounting Network (NASQAN) and its Federal-State Cooperative Program, for the Nitrogen and Phosphorus Discharge from Large Rivers indicator.

  17. Water Resources Data for California, Water Year 1987. Volume 5. Ground-water Data for California

    USGS Publications Warehouse

    Lamb, C.E.; Fogelman, R.P.; Grillo, D.A.

    1989-01-01

    Water resources data for the 1987 water year for California consist of records of stage, discharge, and water quality of streams; stage and contents in lakes and reservoirs; and water levels and water quality in wells. Volume 5 contains water levels for 786 observation wells and water-quality data for 168 observation wells. These data represent that part of the National Water Data System operated by the U.S. Geological Survey and cooperating State and Federal agencies in California.

  18. Water Resources Data for California, Water Year 1986. Volume 5. Ground-Water Data for California

    USGS Publications Warehouse

    Lamb, C.E.; Keeter, G.L.; Grillo, D.A.

    1988-01-01

    Water resources data for the 1986 water year for California consist of records of stage, discharge, and water quality of streams; stage and contents in lakes and reservoirs; and water levels and water quality in wells. Volume 5 contains water levels for 765 observation wells and water-quality data for 174 observation wells. These data represent that part of the National Water Data System operated by the U.S. Geological Survey and cooperating State and Federal agencies in California.

  19. Water Resources Data, California, Water Year 1989. Volume 5. Ground-Water Data

    USGS Publications Warehouse

    Lamb, C.E.; Johnson, J.A.; Fogelman, R.P.; Grillo, D.A.

    1990-01-01

    Water resources data for the 1989 water year for California consist of records of stage, discharge, and water quality of streams; stage and contents in lakes and reservoirs; and water levels and water quality in wells. Volume 5 contains water levels for 1,037 observation wells and water-quality data for 254 monitoring wells. These data represent that part of the National Water Data System operated by the U.S. Geological Survey and cooperating State and Federal agencies in California.

  20. Water Resources Data for California, Water Year 1988. Volume 5. Ground-Water Data for California

    USGS Publications Warehouse

    Lamb, C.E.; Fogelman, R.P.; Grillo, D.A.

    1989-01-01

    Water resources data for the 1988 water year for California consist of records of stage, discharge, and water quality of streams; stage and contents in lakes and reservoirs; and water levels and water quality in wells. Volume 5 contains water levels for 980 observation wells and water-quality data for 239 monitoring wells. These data represent that part of the National Water Data System operated by the U.S. Geological Survey and cooperating State and Federal agencies in California.

  1. SUGAR: graphical user interface-based data refiner for high-throughput DNA sequencing.

    PubMed

    Sato, Yukuto; Kojima, Kaname; Nariai, Naoki; Yamaguchi-Kabata, Yumi; Kawai, Yosuke; Takahashi, Mamoru; Mimori, Takahiro; Nagasaki, Masao

    2014-08-08

    Next-generation sequencers (NGSs) have become one of the main tools of current biology. To obtain useful insights from NGS data, it is essential to control low-quality portions of the data affected by technical errors such as air bubbles in sequencing fluidics. We developed SUGAR (subtile-based GUI-assisted refiner), software that can handle ultra-high-throughput data with a user-friendly graphical user interface (GUI) and interactive analysis capability. SUGAR generates high-resolution quality heatmaps of the flowcell, enabling users to find possible signals of technical errors during sequencing. Sequencing data generated from the error-affected regions of a flowcell can be selectively removed by automated analysis or GUI-assisted operations implemented in SUGAR. The automated data-cleaning function based on sequence read quality (Phred) scores was applied to public whole-human-genome sequencing data, and we showed that overall mapping quality improved. The detailed data evaluation and cleaning enabled by SUGAR would reduce technical problems in sequence read mapping, improving subsequent variant analyses that require high-quality sequence data and mapping results. The software will therefore be especially useful for controlling the quality of variant calls for low-abundance cell populations, such as cancer cells, in samples affected by technical errors during sequencing.
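
    To make the data-cleaning step concrete, here is a minimal Python sketch of mean-Phred filtering of FASTQ reads, the kind of quality screening the abstract describes; the plain-FASTQ parsing, Phred+33 encoding, and the threshold of 20 are assumptions for illustration, not SUGAR's actual implementation.

```python
# A minimal sketch of mean-Phred read filtering (assumptions: plain
# uncompressed FASTQ, Phred+33 quality encoding, threshold of 20).
# Illustrative only; not SUGAR's actual implementation.

def mean_phred(quality_line, offset=33):
    """Mean Phred score of one FASTQ quality string (Phred+33 assumed)."""
    if not quality_line:
        return 0.0
    return sum(ord(c) - offset for c in quality_line) / len(quality_line)

def filter_fastq(in_path, out_path, min_mean_q=20.0):
    """Copy reads whose mean quality meets the threshold; count the rest."""
    kept = dropped = 0
    with open(in_path) as fin, open(out_path, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]  # @id, seq, +, qual
            if not record[0]:
                break  # end of file
            if mean_phred(record[3].rstrip("\n")) >= min_mean_q:
                fout.writelines(record)
                kept += 1
            else:
                dropped += 1
    return kept, dropped

# Usage: kept, dropped = filter_fastq("run1.fastq", "run1.clean.fastq")
```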

  2. A novel water quality data analysis framework based on time-series data mining.

    PubMed

    Deng, Weihui; Wang, Guoyin

    2017-07-01

    The rapid development of time-series data mining provides an emerging method for water resource management research. In this paper, based on time-series data mining methodology, we propose a novel and general analysis framework for water quality time-series data. It consists of two parts: implementation components and common tasks of time-series data mining in water quality data. In the first part, we propose to granulate the time series into several two-dimensional normal clouds and calculate similarities at the granulated level. On the basis of the similarity matrix, the similarity search, anomaly detection, and pattern discovery tasks on the water quality time-series dataset can be easily implemented in the second part. We present a case study of this analysis framework on weekly dissolved oxygen (DO) time-series data collected from five monitoring stations on the upper reaches of the Yangtze River, China. The case study revealed the relationship between water quality in the mainstream and its tributaries, as well as the main patterns of change in DO. The experimental results show that the proposed analysis framework is a feasible and efficient method for mining hidden and valuable knowledge from historical water quality time-series data. Copyright © 2017 Elsevier Ltd. All rights reserved.
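
    As a rough illustration of the granulate-then-compare idea, the sketch below summarizes each window of a series by its mean and standard deviation (a deliberate simplification of the paper's two-dimensional normal clouds) and scores station pairs by distance in that granule space; the window length and distance measure are illustrative choices, not the authors'.

```python
import numpy as np

def granulate(series, window=4):
    """Summarize a series as (mean, std) granules over fixed windows --
    a simplified stand-in for the paper's two-dimensional normal clouds."""
    n = len(series) // window
    chunks = np.asarray(series[:n * window]).reshape(n, window)
    return np.column_stack([chunks.mean(axis=1), chunks.std(axis=1)])

def similarity(series_a, series_b, window=4):
    """Negative mean distance between granules (higher = more alike)."""
    ga, gb = granulate(series_a, window), granulate(series_b, window)
    m = min(len(ga), len(gb))
    return -np.linalg.norm(ga[:m] - gb[:m], axis=1).mean()

# Toy example: a pairwise similarity matrix for five stations' weekly DO.
rng = np.random.default_rng(0)
stations = [rng.normal(8.0, 0.5, 104) for _ in range(5)]  # two years, weekly
sim = np.array([[similarity(a, b) for b in stations] for a in stations])
```

    On top of such a matrix, the similarity-search, anomaly-detection, and pattern-discovery tasks the abstract lists reduce to ranking, thresholding, or clustering its entries.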

  3. Nursing Home Price and Quality Responses to Publicly Reported Quality Information

    PubMed Central

    Clement, Jan P; Bazzoli, Gloria J; Zhao, Mei

    2012-01-01

    Objective: To assess whether the release of Nursing Home Compare (NHC) data affected self-pay per diem prices and quality of care. Data Sources: Primary data sources are the Annual Survey of Wisconsin Nursing Homes for 2001–2003, Online Survey and Certification Reporting System, NHC, and Area Resource File. Study Design: We estimated fixed effects models with robust standard errors of per diem self-pay charge and quality before and after NHC. Principal Findings: After NHC, low-quality nursing homes raised their prices by a small but significant amount and decreased their use of restraints but did not reduce pressure sores. Mid-level and high-quality nursing homes did not significantly increase self-pay prices after NHC nor consistently change quality. Conclusions: Our findings suggest that the release of quality information affected nursing home behavior, especially pricing and quality decisions among low-quality facilities. Policy makers should continue to monitor quality and prices for self-pay residents and scrutinize low-quality homes over time to see whether they are on a pathway to improve quality. In addition, policy makers should not expect public reporting to result in quick fixes to nursing home quality problems. PMID:22092366
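
    For readers unfamiliar with this study design, a toy statsmodels sketch of a facility fixed-effects regression with robust standard errors follows; the variable names, simulated panel, and specification are illustrative assumptions, not the paper's actual data or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: facilities observed before/after NHC release. Variable names
# and the simulated effects are illustrative, not the study's data.
rng = np.random.default_rng(42)
rows = []
for f in range(60):
    low_q = int(f < 20)              # first 20 facilities are "low quality"
    base = 150 + rng.normal(0, 10)   # facility-specific baseline price
    for year in (2001, 2002, 2003):
        post = int(year >= 2002)     # post-NHC indicator (timing assumed)
        price = base + 5 * post + 4 * post * low_q + rng.normal(0, 2)
        rows.append({"facility": f, "year": year, "post": post,
                     "low_q": low_q, "price": price})
df = pd.DataFrame(rows)

# Facility fixed effects via C(facility); low_q itself is absorbed by the
# fixed effects, so only its interaction with post is estimated.
# HC1 gives heteroskedasticity-robust standard errors.
fit = smf.ols("price ~ post + post:low_q + C(facility)",
              data=df).fit(cov_type="HC1")
print(fit.params[["post", "post:low_q"]])
```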

  4. Quality-assurance data for routine water analyses by the U.S. Geological Survey laboratory in Troy, New York - July 2003 through June 2005

    USGS Publications Warehouse

    Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.

    2009-01-01

    The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's Lab Master data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality-control samples analyzed from July 2003 through June 2005. Results for the quality-control samples for 20 analytical procedures were evaluated for bias and precision. Control charts indicate that data for five of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, total monomeric aluminum, pH, silicon, and sodium. Seven of the analytical procedures were biased throughout the analysis period for the high-concentration sample, but were within control limits; these procedures were: dissolved organic carbon, chloride, nitrate (ion chromatograph), nitrite, silicon, sodium, and sulfate. The calcium and magnesium procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The total aluminum and specific conductance procedures were biased for the high-concentration and low-concentration samples, but were within control limits. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 17 of 18 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for dissolved organic carbon. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 18 of the 22 analytes. At least 85 percent of the samples met data-quality objectives for all analytes except total monomeric aluminum (82 percent of samples met objectives), total aluminum (77 percent of samples met objectives), chloride (80 percent of samples met objectives), fluoride (76 percent of samples met objectives), and nitrate (ion chromatograph) (79 percent of samples met objectives). The ammonium and total dissolved nitrogen procedures did not meet the data-quality objectives. Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated good data quality over the time period, with ratings for each sample in the satisfactory, good, and excellent ranges or less than 10 percent error. The P-sample (low-ionic-strength constituents) analysis had one marginal and two unsatisfactory ratings for the chloride procedure. The T-sample (trace constituents) analysis had two unsatisfactory ratings and one high range percent error for the aluminum procedure. The N-sample (nutrient constituents) analysis had one marginal rating for the nitrate procedure. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 84 percent of the samples met data-quality objectives for 11 of the 14 analytes; the exceptions were ammonium, total aluminum, and acid-neutralizing capacity.
The ammonium procedure did not meet data-quality objectives in all studies. Data-quality objectives were not met in 23 percent of samples analyzed for total aluminum and 45 percent of samples analyzed for acid-neutralizing capacity. Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 86 percent of the samples analyzed for calcium, chloride, fluoride, magnesium, pH, potassium, sodium, and sulfate. Data-quality objectives were not met by samples analyzed for fluoride.
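
    The triplicate-precision check described above is easy to reproduce; the sketch below computes the coefficient of variation for replicate analyses and compares it with a data-quality objective, where the 10 percent objective and the toy chloride values are assumptions for illustration.

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (%) for one set of replicate analyses."""
    m = statistics.mean(values)
    return 100.0 * statistics.stdev(values) / m if m else float("inf")

# Triplicate chloride results (mg/L) per sample set; the 10% data-quality
# objective (DQO) and the values themselves are illustrative.
triplicates = {"set1": [1.02, 0.98, 1.01], "set2": [0.45, 0.61, 0.52]}
DQO = 10.0
for name, vals in triplicates.items():
    cv = cv_percent(vals)
    verdict = "meets" if cv <= DQO else "fails"
    print(f"{name}: CV = {cv:.1f}% -> {verdict} objective")
```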

  5. Open Vessel Data Management (OpenVDM) - Open-source Middleware to Assist Vessel Operators Produce Consistent Cruise Data Packages for Archival and Monitor Data Quality.

    NASA Astrophysics Data System (ADS)

    Pinner, J. W., IV

    2016-02-01

    Data from shipboard oceanographic sensors come in various formats, and collection typically requires multiple data acquisition software packages running on multiple workstations throughout the vessel. Technicians must then corral all or a subset of the resulting data files so that they may be used by shipboard scientists. On many vessels the process of corralling files into a single cruise data package may change from cruise to cruise or even from technician to technician. It is these inconsistencies in the final cruise data packages that pose the greatest challenge when attempting to automate the process of cataloging cruise data for submission to data archives. A second challenge with the management of shipboard data is ensuring its quality. Problems with sensors may go unnoticed simply because the technician or scientist was unaware that the data from a sensor were absent, invalid, or out of range. The Open Vessel Data Management project (OpenVDM) is a ship-wide data management solution developed to address these issues. In the past three years OpenVDM has successfully demonstrated its ability to adapt to the needs of vessels with different capabilities and missions while delivering a consistent cruise data package to scientists and adhering to the recommendations and best practices set forth by third-party data management groups such as R2R. In the last year OpenVDM has implemented a plugin architecture for monitoring data quality. This has allowed vessel operators to develop custom data quality tests tailored to their vessel's unique raw datasets. Data quality tests are performed in near-real time and the results are readily available within a web interface. This plugin architecture allows third-party data quality workgroups like SAMOS to migrate their data quality tests to the vessel and provide immediate determination of data quality. OpenVDM is currently operating aboard three vessels. The R/V Endeavor, operated by the University of Rhode Island, is a regional-class UNOLS research vessel operating under the traditional NSF, P.I.-driven model. The E/V Nautilus, operated by the Ocean Exploration Trust, specializes in ROV-based, telepresence-enabled oceanographic research. The R/V Falkor, operated by the Schmidt Ocean Institute, is an ocean research platform focusing on cutting-edge technology development.
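
    As a sketch of what a plugin architecture for data quality tests can look like, the following Python outlines a small registry of QC test classes run against each incoming record; the class names, record layout, and interface are hypothetical and are not OpenVDM's actual plugin API.

```python
# Hypothetical plugin-style QC architecture in the spirit of the abstract;
# names and interfaces are illustrative, not OpenVDM's actual API.
TESTS = []

def qc_test(cls):
    """Class decorator that registers a QC test plugin."""
    TESTS.append(cls())
    return cls

class QualityTest:
    name = "base"
    def run(self, record):  # record: dict of sensor values
        raise NotImplementedError

@qc_test
class GPSInRange(QualityTest):
    name = "gps_in_range"
    def run(self, record):
        lat, lon = record.get("lat"), record.get("lon")
        ok = (lat is not None and lon is not None
              and abs(lat) <= 90 and abs(lon) <= 180)
        return self.name, "pass" if ok else "fail"

@qc_test
class SSTPlausible(QualityTest):
    name = "sst_plausible"
    def run(self, record):
        sst = record.get("sst")
        ok = sst is not None and -2.0 <= sst <= 36.0
        return self.name, "pass" if ok else "fail"

def run_all(record):
    """Run every registered plugin against one incoming data record."""
    return dict(t.run(record) for t in TESTS)

print(run_all({"lat": 41.5, "lon": -71.4, "sst": 18.2}))
```

    The appeal of this pattern is that operators can drop in vessel-specific tests without touching the core system, which matches the abstract's description of custom tests for each vessel's unique raw datasets.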

  6. Use of geographic information system to display water-quality data from San Juan basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorn, C.R.; Dam, W.L.

    1989-09-01

    The ARC/INFO geographic information system is being used to create thematic maps of the San Juan basin as part of the USGS Regional Aquifer-System Analysis program. (Use of trade names is for descriptive purposes only and does not constitute endorsement by the US Geological Survey.) Maps created by a Prime version of ARC/INFO, to be published in a series of Hydrologic Investigations Atlas reports for selected geologic units, will include outcrop patterns, water-well locations, and water-quality data. The San Juan basin study area, encompassing about 19,400 mi², can be displayed with ARC/INFO at various scales; on the same scale, generated water-quality maps can be compared and overlain with other maps such as potentiometric surface and depth to top of a geologic or hydrologic unit. Selected water-quality and well data (including latitude and longitude) are retrieved from the USGS National Water Information System data base for a specified geologic unit. Data are formatted by Fortran programs and read into an INFO data base. Two parallel files - an INFO file containing water-quality data and well data and an ARC file containing the site coordinates - are joined to form the ARC/INFO data base. A file containing a series of commands using Prime's Command Procedure language is used to select coverage, display, and position data on the map. Data interpretation is enhanced by displaying water-quality data throughout the basin in combination with other hydrologic and geologic data.

  7. Water quality data for national-scale aquatic research: The Water Quality Portal

    NASA Astrophysics Data System (ADS)

    Read, Emily K.; Carr, Lindsay; De Cicco, Laura; Dugan, Hilary A.; Hanson, Paul C.; Hart, Julia A.; Kreft, James; Read, Jordan S.; Winslow, Luke A.

    2017-02-01

    xml:id="wrcr22485-sec-1001" numbered="no">Aquatic systems are critical to food, security, and society. But, water data are collected by hundreds of research groups and organizations, many of which use nonstandard or inconsistent data descriptions and dissemination, and disparities across different types of water observation systems represent a major challenge for freshwater research. To address this issue, the Water Quality Portal (WQP) was developed by the U.S. Environmental Protection Agency, the U.S. Geological Survey, and the National Water Quality Monitoring Council to be a single point of access for water quality data dating back more than a century. The WQP is the largest standardized water quality data set available at the time of this writing, with more than 290 million records from more than 2.7 million sites in groundwater, inland, and coastal waters. The number of data contributors, data consumers, and third-party application developers making use of the WQP is growing rapidly. Here we introduce the WQP, including an overview of data, the standardized data model, and data access and services; and we describe challenges and opportunities associated with using WQP data. We also demonstrate through an example the value of the WQP data by characterizing seasonal variation in lake water clarity for regions of the continental U.S. The code used to access, download, analyze, and display these WQP data as shown in the figures is included as supporting information.

  8. Turning Data Into Information: Opportunities to Advance Rehabilitation Quality, Research, and Policy.

    PubMed

    Bettger, Janet Prvu; Nguyen, Vu Q C; Thomas, J George; Guerrier, Tami; Yang, Qing; Hirsch, Mark A; Pugh, Terrence; Harris, Gabrielle; Eller, Mary Ann; Pereira, Carol; Hamm, Deanna; Rinehardt, Eric A; Shall, Matthew; Niemeier, Janet P

    2018-06-01

    Attention to health care quality and safety has increased dramatically. The internal focus of an organization is not without influence from external policy and research findings. Compared with other specialties, efforts to align and advance rehabilitation research, practice, and policy using electronic health record data are in the early stages. This special communication defines quality, applies the dimensions of quality to rehabilitation, and illustrates the feasibility and utility of electronic health record data for research on rehabilitation care quality and outcomes. Using data generated at the point of care provides the greatest opportunity for improving the quality of health care, producing generalizable evidence to inform policy and practice, and ultimately benefiting the health of the populations served. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  9. A Consideration of Quality-Attribute-Property for Interoperability of Quality Data

    NASA Astrophysics Data System (ADS)

    Tarumi, Shinya; Kozaki, Kouji; Kitamura, Yoshinobu; Mizoguchi, Riichiro

    Descriptions of attribute and quality are essential elements in ontology development. Needless to say, science data are descriptions of attributes of target things, and it is an important role of ontology to support the validity of, and interoperability between, such descriptions. Although some upper ontologies such as DOLCE, BFO, etc. have already been developed and are extensively used, a careful examination reveals some room for improvement. While each ontology covers quality and quantity, mutual interchangeability among these ontologies is not considered, because each has been designed with the intention of being a "correct" ontology of quality and quantity. Furthermore, owing to the variety of ways in which data are described, no single ontology can cover all existing scientific data. In this paper, we investigate "quality" and "value" from an ontological viewpoint and propose a conceptual framework to deal with attribute, property, and quality as they appear in existing data descriptions in the nanotechnology domain. This framework can be regarded as a reference ontology for describing quality alongside existing upper ontologies. Furthermore, on the basis of this analysis, we evaluate and refine a conceptual hierarchy of material functions built by nanomaterials researchers. Through the evaluation process, we discuss the effect of defining such a conceptual framework on building and refining ontologies. This conceptual consideration of quality and value is not only a problem in the nanomaterials domain but also a first step toward more intelligent sharing of scientific data in e-Science.

  10. Nursing home consumer complaints and their potential role in assessing quality of care.

    PubMed

    Stevenson, David G

    2005-02-01

    State survey agencies collect and investigate consumer complaints for care in nursing homes and other health care settings. Complaint investigations play a key role in quality assurance, because they can respond to concerns of consumers and families. This study uses 5 years of nursing home complaints data from Massachusetts (1998-2002) to investigate whether complaints might be used to assess nursing home quality of care. The investigator matches facility-level complaints data with On-Line Survey Certification and Reporting (OSCAR) data and Minimum Data Set Quality Indicator (MDS QI) data to evaluate the association between consumer complaints, facility and resident characteristics, and other nursing home quality measures. Consumer complaints varied across facility characteristics in ways consistent with the nursing home quality literature. Complaints were consistently and significantly associated with survey deficiencies, the presence of a serious survey deficiency, and nurse aide staffing. Complaints were not significantly associated with nurse staffing, and associations with 6 MDS QIs were mixed. The number of complaints was significantly predictive of survey deficiencies identified at the subsequent inspection. Nursing home consumer complaints provide a supplemental tool with which to differentiate nursing homes on quality. Despite limitations, complaints data have potential strengths when used in combination with other quality measures. The potential of using consumer complaints to assess nursing home quality of care should be evaluated in states beyond Massachusetts. Evaluating consumer complaints also might be a productive area of inquiry for other health care settings such as hospitals and home health agencies.

  11. Data Delivery and Mapping Over the Web: National Water-Quality Assessment Data Warehouse

    USGS Publications Warehouse

    Bell, Richard W.; Williamson, Alex K.

    2006-01-01

    The U.S. Geological Survey began its National Water-Quality Assessment (NAWQA) Program in 1991, systematically collecting chemical, biological, and physical water-quality data from study units (basins) across the Nation. In 1999, the NAWQA Program developed a data warehouse to better facilitate national and regional analysis of data from 36 study units started in 1991 and 1994. Data from 15 study units started in 1997 were added to the warehouse in 2001. The warehouse currently contains and links the following data:
    -- Chemical concentrations in water, sediment, and aquatic-organism tissues, and related quality-control data, from the USGS National Water Information System (NWIS);
    -- Biological data on stream habitat and on ecological communities of fish, algae, and benthic invertebrates;
    -- Site, well, and basin information associated with thousands of descriptive variables derived from spatial analysis, such as land use, soil, and population density; and
    -- Daily streamflow and temperature information from NWIS for selected sampling sites.

  12. Data Quality Assurance and Control for AmeriFlux Network at CDIAC, ORNL

    NASA Astrophysics Data System (ADS)

    Shem, W.; Boden, T.; Krassovski, M.; Yang, B.

    2014-12-01

    The Carbon Dioxide Information Analysis Center (CDIAC) at the Oak Ridge National Laboratory (ORNL) serves as the long-term data repository for the AmeriFlux network. Datasets currently available include hourly or half-hourly meteorological and flux observations, biological measurement records, and synthesis data products. Currently there is a lack of standardized nomenclature and of specifically designed procedures for data quality assurance and control in processing and handling micrometeorological and ecological data at individual flux sites. CDIAC has bridged this gap by providing efficient and accurate procedures for data quality control and for standardization of the results for easier assimilation by the models used in climate science. In this presentation we highlight the procedures we have put in place to scrutinize continuous flux and meteorological data within the AmeriFlux network. We itemize some basic data quality issues that we have observed over the past years and include examples of typical data quality problems. Such issues, e.g., incorrect time-stamping, poor calibration or maintenance of instruments, and missing or incomplete metadata, are commonly overlooked by PIs and invariably impact the time-series observations.
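
    Two of the routine checks mentioned here, time-stamp continuity and physical range screening of half-hourly records, can be sketched briefly in pandas; the column names and numeric limits below are illustrative assumptions rather than CDIAC's actual QA/QC rules.

```python
import pandas as pd

# Illustrative half-hourly QC checks; the column names ("TA", "FC") and
# physical limits below are assumptions, not CDIAC's actual rules.
LIMITS = {"TA": (-50.0, 50.0),   # air temperature, deg C
          "FC": (-50.0, 50.0)}   # CO2 flux, umol m-2 s-1

def check_timestamps(df):
    """Return missing and duplicated stamps in a half-hourly time index."""
    expected = pd.date_range(df.index.min(), df.index.max(), freq="30min")
    return expected.difference(df.index), df.index[df.index.duplicated()]

def range_flags(df):
    """Boolean frame marking values outside plausible physical limits."""
    flags = pd.DataFrame(False, index=df.index, columns=list(LIMITS))
    for var, (lo, hi) in LIMITS.items():
        if var in df:
            flags[var] = (df[var] < lo) | (df[var] > hi)
    return flags

# Toy example: one missing half hour and one implausible temperature spike.
idx = pd.date_range("2014-07-01", periods=48, freq="30min").delete(5)
df = pd.DataFrame({"TA": 20.0, "FC": -3.0}, index=idx)
df.iloc[0, df.columns.get_loc("TA")] = 99.0
missing, dupes = check_timestamps(df)  # finds the 30-minute gap
bad = range_flags(df)                  # flags the TA spike
```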

  13. Toward utilization of data for program management and evaluation: quality assessment of five years of health management information system data in Rwanda.

    PubMed

    Nisingizwe, Marie Paul; Iyer, Hari S; Gashayija, Modeste; Hirschhorn, Lisa R; Amoroso, Cheryl; Wilson, Randy; Rubyutsa, Eric; Gaju, Eric; Basinga, Paulin; Muhire, Andrew; Binagwaho, Agnès; Hedt-Gauthier, Bethany

    2014-01-01

    Health data can be useful for effective service delivery, decision making, and evaluating existing programs in order to maintain high quality of healthcare. Studies have shown variability in data quality from national health management information systems (HMISs) in sub-Saharan Africa, which threatens the utility of these data as a tool to improve health systems. The purpose of this study is to assess the quality of Rwanda's HMIS data over a 5-year period. The World Health Organization (WHO) data quality report card framework was used to assess the quality of HMIS data captured from 2008 to 2012 in a census of all 495 publicly funded health facilities in Rwanda. Factors assessed included completeness and internal consistency of 10 indicators selected based on WHO recommendations and priority areas for the Rwanda national health sector. Completeness was measured as the percentage of non-missing reports. Consistency was measured as the absence of extreme outliers, internal consistency between related indicators, and consistency of indicators over time. These assessments were done at the district and national level. Nationally, the average monthly district reporting completeness rate was 98% across 10 key indicators from 2008 to 2012. Completeness of indicator data increased over time: 2008, 88%; 2009, 91%; 2010, 89%; 2011, 90%; and 2012, 95% (p<0.0001). Comparing 2011 and 2012 health events to the mean of the three preceding years, service output increased from 3% (2011) to 9% (2012). Eighty-three percent of districts reported ratios between related indicators (ANC/DTP1, DTP1/DTP3) consistent with HMIS national ratios. Conclusion and policy implications: Our findings suggest that HMIS data quality in Rwanda has been improving over time. We recommend maintaining these assessments to identify remaining gaps in data quality and that results be shared publicly to support increased use of HMIS data.
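
    The completeness and outlier checks described can be approximated in a few lines of pandas; the sketch below uses an assumed district-by-month layout and a robust median/MAD outlier rule, neither of which is the study's exact WHO report-card implementation.

```python
import pandas as pd

# df: one row per (district, month) report; column names are illustrative.
def completeness(df, indicator):
    """Percent of expected monthly reports with a non-missing value."""
    return 100.0 * df[indicator].notna().mean()

def flag_outliers(df, indicator, thresh=3.5):
    """Flag reports that are extreme outliers against their own district's
    history, using a robust median/MAD rule (assumed, not the study's)."""
    g = df.groupby("district")[indicator]
    med = g.transform("median")
    mad = g.transform(lambda s: (s - s.median()).abs().median())
    mod_z = 0.6745 * (df[indicator] - med) / mad  # no zero-MAD guard (sketch)
    return df[mod_z.abs() > thresh]

# Toy example: two districts, one implausible spike, two missing reports.
df = pd.DataFrame({
    "district": ["A"] * 12 + ["B"] * 12,
    "month": list(range(1, 13)) * 2,
    "anc_visits": [100, 102, None, 98, 101, 99, 100, 103, 97, 500, 101, 100,
                   50, 52, 51, 49, 48, 50, 53, 51, 52, 50, 49, None],
})
print(completeness(df, "anc_visits"))   # ~91.7% non-missing
print(flag_outliers(df, "anc_visits"))  # the 500 report stands out
```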

  14. Toward utilization of data for program management and evaluation: quality assessment of five years of health management information system data in Rwanda

    PubMed Central

    Nisingizwe, Marie Paul; Iyer, Hari S.; Gashayija, Modeste; Hirschhorn, Lisa R.; Amoroso, Cheryl; Wilson, Randy; Rubyutsa, Eric; Gaju, Eric; Basinga, Paulin; Muhire, Andrew; Binagwaho, Agnès; Hedt-Gauthier, Bethany

    2014-01-01

    Background: Health data can be useful for effective service delivery, decision making, and evaluating existing programs in order to maintain high quality of healthcare. Studies have shown variability in data quality from national health management information systems (HMISs) in sub-Saharan Africa, which threatens the utility of these data as a tool to improve health systems. The purpose of this study is to assess the quality of Rwanda's HMIS data over a 5-year period. Methods: The World Health Organization (WHO) data quality report card framework was used to assess the quality of HMIS data captured from 2008 to 2012 in a census of all 495 publicly funded health facilities in Rwanda. Factors assessed included completeness and internal consistency of 10 indicators selected based on WHO recommendations and priority areas for the Rwanda national health sector. Completeness was measured as the percentage of non-missing reports. Consistency was measured as the absence of extreme outliers, internal consistency between related indicators, and consistency of indicators over time. These assessments were done at the district and national level. Results: Nationally, the average monthly district reporting completeness rate was 98% across 10 key indicators from 2008 to 2012. Completeness of indicator data increased over time: 2008, 88%; 2009, 91%; 2010, 89%; 2011, 90%; and 2012, 95% (p<0.0001). Comparing 2011 and 2012 health events to the mean of the three preceding years, service output increased from 3% (2011) to 9% (2012). Eighty-three percent of districts reported ratios between related indicators (ANC/DTP1, DTP1/DTP3) consistent with HMIS national ratios. Conclusion and policy implications: Our findings suggest that HMIS data quality in Rwanda has been improving over time. We recommend maintaining these assessments to identify remaining gaps in data quality and that results be shared publicly to support increased use of HMIS data. PMID:25413722

  15. A compilation and analysis of helicopter handling qualities data. Volume 2: Data analysis

    NASA Technical Reports Server (NTRS)

    Heffley, R. K.

    1979-01-01

    A compilation and an analysis of helicopter handling qualities data are presented. Multiloop manual control methods are used to analyze the descriptive data, stability derivatives, and transfer functions for a six-degree-of-freedom, quasi-static model. A compensatory loop structure is applied to the coupled longitudinal, lateral, and directional equations in such a way that key handling qualities features are examined directly.

  16. Analysis of Station Quality Issues from EarthScope's Transportable Array

    NASA Astrophysics Data System (ADS)

    Pfeifer, C.; Barstow, N.; Busby, R.; Hafner, K.

    2008-12-01

    160 of the first 400 EarthScope USArray Transportable Array (TA) stations have completed their first two-year deployment and are being moved to their next locations. Over the past 4 years the majority of stations have run with few interruptions in the transfer of real-time data to the Array Network Facility (ANF) at the University of California, San Diego and of near-real-time data to the IRIS Data Management System (DMS). The combination of telemetered data and dedicated people reviewing the waveforms and state-of-health data has revealed several conditions that can affect data quality or cause loss of data. The data problems fall into three broad categories: station power, equipment malfunction, and communication failures. Station power issues have been implicated in several types of noise seen in the seismic data (as well as causing station failures and resultant data gaps). The most common type of equipment problem found to degrade data quality is caused by sensor problems, and it has affected all 3 types of sensors used in the TA to varying degrees. While communication problems can cause real-time data loss, they do not degrade the quality of the data, and any gaps in the real-time data due solely to communications problems are filled in later from the continuous data recorded to disk at each TA station. Over the past 4 years the TA team has recognized a number of noise sources and has made several design changes to minimize their effects on data quality. Design and procedural changes include stopping water incursion into the stations, power conditioning, and changing mass re-center voltage thresholds; figures demonstrating examples are provided. These changes have improved both data quality and station performance. Vigilance and the deployment of service teams to reestablish communications, replace noisy sensors, and troubleshoot problems are also key to maintaining the high-quality TA network.

  17. Quality of ground water in Idaho

    USGS Publications Warehouse

    Yee, Johnson J.; Souza, William R.

    1987-01-01

    The major aquifers in Idaho are categorized under two rock types, sedimentary and volcanic, and are grouped into six hydrologic basins. Areas with adequate, minimally adequate, or deficient data available for groundwater-quality evaluations are described. Wide variations in chemical concentrations in the water occur within individual aquifers, as well as among the aquifers. The existing data base is not sufficient to describe fully the ground-water quality throughout the State; however, it does indicate that the water is generally suitable for most uses. In some aquifers, concentrations of fluoride, cadmium, and iron in the water exceed the U.S. Environmental Protection Agency's drinking-water standards. Dissolved solids, chloride, and sulfate may cause problems in some local areas. Water-quality data are sparse in many areas, and only general statements can be made regarding the areal distribution of chemical constituents. Few data are available to describe temporal variations of water quality in the aquifers. Primary concerns related to special problem areas in Idaho include (1) protection of water quality in the Rathdrum Prairie aquifer, (2) potential degradation of water quality in the Boise-Nampa area, (3) effects of widespread use of drain wells overlying the eastern Snake River Plain basalt aquifer, and (4) disposal of low-level radioactive wastes at the Idaho National Engineering Laboratory. Shortcomings in the ground-water-quality data base are categorized as (1) multiaquifer sample inadequacy, (2) constituent coverage limitations, (3) baseline-data deficiencies, and (4) data-base nonuniformity.

  18. A Novel Scoring Metrics for Quality Assurance of Ocean Color Observations

    NASA Astrophysics Data System (ADS)

    Wei, J.; Lee, Z.

    2016-02-01

    Interpretation of the ocean bio-optical properties from ocean color observations depends on the quality of the ocean color data, specifically the spectrum of remote sensing reflectance (Rrs). The in situ and remotely measured Rrs spectra are inevitably subject to errors induced by instrument calibration, sea-surface correction and atmospheric correction, and other environmental factors. Great efforts have been devoted to ocean color calibration and validation. Yet there exist no objective, consensus criteria for assessing ocean color data quality. In this study, that gap is filled by developing a novel metric for data quality assurance and quality control (QA/QC). This new QA metric is not intended to discard "suspicious" Rrs spectra from available datasets. Rather, it takes into account the Rrs spectral shapes and amplitudes as a whole and grades each Rrs spectrum. The scoring system is developed from a large ensemble of in situ hyperspectral remote sensing reflectance data measured in various aquatic environments and processed with robust procedures. It is further tested with the NASA bio-Optical Marine Algorithm Data set (NOMAD), with results indicating significant improvements in the estimation of bio-optical properties when Rrs spectra marked with higher quality assurance are used. The scoring system is further verified with simulated data and satellite ocean color data in various regions, and we envision higher quality ocean color products with the implementation of such a quality screening system.
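
    The grading idea lends itself to a compact illustration. The sketch below scores a spectrum by normalizing away its amplitude and checking how closely its shape tracks the nearest of a set of reference shapes; the reference spectra, the 5-band layout, and the tolerance are hypothetical stand-ins, not the published QA system.

    ```python
    import numpy as np

    def qa_score(rrs, reference_shapes, tolerance=0.1):
        """Grade one Rrs spectrum against a set of reference shapes.

        rrs              : measured remote-sensing reflectance, one value per band
        reference_shapes : 2-D array, one reference spectrum per row
        tolerance        : hypothetical per-band relative tolerance

        Returns a score in [0, 1]: the fraction of bands within tolerance
        of the closest reference shape.  This mirrors the idea of grading
        shape and amplitude together; it is not the published algorithm.
        """
        rrs = np.asarray(rrs, dtype=float)
        shape = rrs / np.linalg.norm(rrs)                 # normalize amplitude away
        refs = np.asarray(reference_shapes, dtype=float)
        refs = refs / np.linalg.norm(refs, axis=1, keepdims=True)
        nearest = refs[np.argmin(((refs - shape) ** 2).sum(axis=1))]
        within = np.abs(shape - nearest) <= tolerance * np.abs(nearest)
        return float(within.mean())

    # Hypothetical 5-band spectrum graded against two invented references
    score = qa_score([0.004, 0.006, 0.005, 0.003, 0.001],
                     [[0.4, 0.6, 0.5, 0.3, 0.1], [0.2, 0.3, 0.5, 0.6, 0.2]])
    ```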

  19. The Victorian Lung Cancer Registry pilot: improving the quality of lung cancer care through the use of a disease quality registry.

    PubMed

    Stirling, Rob G; Evans, S M; McLaughlin, P; Senthuren, M; Millar, J; Gooi, J; Irving, L; Mitchell, P; Haydon, A; Ruben, J; Conron, M; Leong, T; Watkins, N; McNeil, J J

    2014-10-01

    Lung cancer remains a major disease burden in Victoria (Australia) and requires a complex and multidisciplinary approach to ensure optimal care and outcomes. To date, no uniform mechanism is available to capture standardized population-based outcomes and thereby provide benchmarking. The establishment of such a data platform is, therefore, a primary requisite to enable description of process and outcome in lung cancer care and to drive improvement in the quality of care provided to individuals with lung cancer. A disease quality registry pilot has been established to capture prospective data on all adult patients with clinical or tissue diagnoses of small cell and non-small cell lung cancer. Steering and management committees provide clinical governance and supervise quality indicator selection. Quality indicators were selected following extensive literature review and evaluation of established clinical practice guidelines. A minimum dataset has been established, and training and data capture by data collectors are facilitated using a web-based portal. Case ascertainment is established by regular institutional reporting of ICD-10 discharge coding. Recruitment is optimized by provision of opt-out consent. The collection of a standardized minimum dataset optimizes the capacity for harmonized population-based data capture. Data collection has commenced in a variety of settings reflecting metropolitan and rural, public and private health care institutions. The dataset provides scope for the construction of a risk-adjusted model for outcomes. A data access policy and an escalation mechanism for outcome outliers have been established. The Victorian Lung Cancer Registry provides a unique capacity to provide and confirm quality assessment in lung cancer and to drive improvement in quality of care across multidisciplinary stakeholders.

  20. A Survey Data Quality Strategy: The Institutional Research Perspective. IR Applications, Volume 34

    ERIC Educational Resources Information Center

    Liu, Qin

    2012-01-01

    This discussion constructs a survey data quality strategy for institutional researchers in higher education in light of total survey error theory. It starts with describing the characteristics of institutional research and identifying the gaps in literature regarding survey data quality issues in institutional research and then introduces the…

  1. 77 FR 62147 - Approval and Promulgation of Air Quality Implementation Plans; Pennsylvania; Pittsburgh-Beaver...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-12

    ... completeness requirement for one or more quarters during 2009-2011. EPA addressed the missing data of each... quality-assured, quality-controlled, and certified ambient air monitoring data for the 2008-2010 and 2009-2011... quality-assured, quality-controlled, and certified monitoring data for the 2007-2009 monitoring period, that the...

  2. A Survey Data Quality Strategy: The Institutional Research Perspective

    ERIC Educational Resources Information Center

    Liu, Qin

    2009-01-01

    This paper intends to construct a survey data quality strategy for institutional researchers in higher education in light of total survey error theory. It starts with describing the characteristics of institutional research and identifying the gaps in literature regarding survey data quality issues in institutional research. Then it is followed by…

  3. An examination of data quality on QSAR Modeling in regards to the environmental sciences (UNC-CH talk)

    EPA Science Inventory

    The development of QSAR models is critically dependent on the quality of available data. As part of our efforts to develop public platforms to provide access to predictive models, we have attempted to discriminate the influence of the quality versus quantity of data available to...

  4. Statistical corruption in Beijing's air quality data has likely ended in 2012

    NASA Astrophysics Data System (ADS)

    Stoerk, Thomas

    2016-02-01

    This research documents changes in likely misreporting in official air quality data from Beijing for the years 2008-2013. It is shown that, consistent with prior research, the official Chinese data report suspiciously few observations that exceed the politically important Blue Sky Day threshold, a particular air pollution level used to evaluate local officials, and an excess of observations just below that threshold. Similar data, measured by the US Embassy in Beijing, do not show this irregularity. To document likely misreporting, this analysis proposes a new way of comparing air quality data via Benford's Law, a statistical regularity known to fit air pollution data. Using this method to compare the official data to the US Embassy data for the first time, I find that the Chinese data fit Benford's Law poorly until a change in air quality measurements at the end of 2012. From 2013 onwards, the Chinese data fit Benford's Law closely. The US Embassy data, by contrast, exhibit no variation over time in the fit with Benford's Law, implying that the underlying pollution processes remain unchanged. These findings suggest that misreporting of air quality data for Beijing likely ended in 2012. Additionally, I use aerosol optical density data to show the general applicability of this method of detecting likely misreporting in air pollution data.
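
    The core of the method is easy to sketch: tally the leading digits of the reported concentrations and test them against the Benford proportions log10(1 + 1/d). The code below does this with a chi-square statistic; the statistic choice and the example variable name are illustrative and not necessarily what the paper uses.

    ```python
    import numpy as np
    from scipy.stats import chisquare

    def benford_fit(values):
        """Chi-square goodness of fit of leading digits against Benford's Law.

        A large statistic (small p-value) signals departure from the
        distribution that genuine pollution data are expected to follow.
        """
        values = np.asarray(values, dtype=float)
        values = values[values > 0]
        # Leading digit: shift each value into [1, 10) and truncate.
        first = (values / 10.0 ** np.floor(np.log10(values))).astype(int)
        observed = np.bincount(first, minlength=10)[1:10]
        # Benford probabilities log10(1 + 1/d) for d = 1..9, scaled to counts.
        expected = np.log10(1 + 1 / np.arange(1, 10)) * observed.sum()
        return chisquare(observed, expected)

    # e.g. benford_fit(daily_pm10_concentrations)  # hypothetical input series
    ```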

  5. Quality Assurance for Essential Climate Variables

    NASA Astrophysics Data System (ADS)

    Folkert Boersma, K.; Muller, Jan-Peter

    2015-04-01

    Satellite data are of central interest to the QA4ECV project. Satellites have revolutionized the Earth's observation system of climate change and air quality over the past three decades, providing continuous data for the entire Earth. However, many users of these data are lost in the fog as to the quality of these satellite data. Because of this, the European Union expressed in its 2013 FP7 Space Research Call a need for reliable, traceable, and understandable quality information on satellite data records that could serve as a blueprint contribution to a future Copernicus Climate Change Service. The potential of satellite data to benefit climate change and air quality services is too great to be ignored. QA4ECV therefore bridges the gap between end-users of satellite data and the satellite data products. We are developing an internationally acceptable Quality Assurance (QA) framework that provides understandable and traceable quality information for satellite data used in climate and air quality services. Such a framework should deliver the historically linked long-term data sets that users need, in a format that they can readily use. QA4ECV has approached more than 150 users and suppliers of satellite data to collect their needs and expectations. The project will use their response as a guideline for developing user-friendly tools to obtain information on the completeness, accuracy, and fitness-for-purpose of the satellite datasets. QA4ECV collaborates with 4 joint FP7 Space projects in reaching out to scientists, policy makers, and other end-users of satellite data to improve understanding of the special challenges (and also opportunities) of working with satellite data for climate and air quality purposes. As a demonstration of its capacity, QA4ECV will generate multi-decadal climate data records for 3 atmospheric ECV precursors (nitrogen dioxide, formaldehyde, and carbon monoxide) and 3 land ECVs (albedo, leaf area index and absorbed photosynthetically active radiation), with full uncertainty metrics for every pixel. Multi-use tools and SI/community reference standards will be developed. But QA4ECV is not only about satellites. It is also about exploiting independent reference data obtained from in situ networks, and applying these data with the right, traceable methodologies for quality assurance of the satellite ECVs. The QA4ECV project started in January 2014, as a partnership between 17 research institutes from 7 different European countries working together for a period of 4 years. All QA4ECV partners are closely involved in projects improving, validating, and using satellite data. We hope that QA4ECV will be a major step forward in providing quality assured long-term climate data records that are relevant for policy and climate change assessments. A detailed description of the project can be found at http://qa4ecv.eu.

  6. A design of wireless sensor networks for a power quality monitoring system.

    PubMed

    Lim, Yujin; Kim, Hak-Man; Kang, Sanggil

    2010-01-01

    Power grids deal with the business of generation, transmission, and distribution of electric power. Recently, interest in power quality in electrical distribution systems has increased rapidly. In Korea, the communication network that delivers voltage, current, and temperature measurements gathered from pole transformers to remote monitoring centers employs cellular mobile technology. Due to the high cost of cellular mobile technology, power quality monitoring measurements are limited and data gathering intervals are large. This causes difficulties in providing the power quality monitoring service. To alleviate these problems, in this paper we present a communication infrastructure that provides low-cost, reliable data delivery. The communication infrastructure consists of wired connections between substations and monitoring centers, and wireless connections between pole transformers and substations. For the wireless connection, we employ a wireless sensor network and design its corresponding data forwarding protocol to improve the quality of data delivery. For the design, we adopt a tree-based data forwarding protocol in order to customize the distribution pattern of the power quality information. We verify the performance of the proposed data forwarding protocol quantitatively using the NS-2 network simulator.
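
    To make the tree-based forwarding idea concrete, here is a minimal sketch in which every sensor relays each measurement to its parent until the substation sink is reached. The node names, measurement fields, and single-branch topology are hypothetical; the paper's protocol (and its NS-2 evaluation) involves much more, such as route maintenance and loss handling.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """One sensor on a pole transformer, or the substation sink (parent=None)."""
        name: str
        parent: "Node | None" = None
        received: list = field(default_factory=list)

        def forward(self, measurement):
            # Each hop records the reading and pushes it toward the root.
            self.received.append(measurement)
            if self.parent is not None:
                self.parent.forward(measurement)

    # Hypothetical three-hop branch: pole sensor -> relay -> substation
    substation = Node("substation")
    relay = Node("relay-7", parent=substation)
    pole = Node("pole-42", parent=relay)
    pole.forward({"voltage": 229.8, "current": 12.4, "temp_c": 41.0})
    assert substation.received  # the sink ends up with every reading
    ```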

  7. Water resources data, Idaho, 2003; Volume 3. Ground water records

    USGS Publications Warehouse

    Campbell, A.M.; Conti, S.N.; O'Dell, I.

    2003-01-01

    Water resources data for the 2003 water year for Idaho consist of records of stage, discharge, and water quality of streams; stage, contents, and water quality of lakes and reservoirs; discharge of irrigation diversions; and water levels and water quality of groundwater. The three volumes of this report contain discharge records for 208 stream-gaging stations and 14 irrigation diversions; stage-only records for 6 stream-gaging stations; stage only for 6 lakes and reservoirs; contents only for 13 lakes and reservoirs; water quality for 50 stream-gaging stations and partial-record sites, 3 lake sites, and 398 groundwater wells; and water levels for 427 observation network wells and 900 special project wells. Additional water data were collected at various sites not involved in the systematic data collection program and are published as miscellaneous measurements. Volumes 1 & 2 contain the surface-water and surface-water-quality records. Volume 3 contains the ground-water and ground-water-quality records. These data represent that part of the National Water Data System operated by the U.S. Geological Survey and cooperating State and Federal agencies in Idaho, adjacent States, and Canada.

  8. Water resources data, Idaho, 2004; Volume 3. Ground water records

    USGS Publications Warehouse

    Campbell, A.M.; Conti, S.N.; O'Dell, I.

    2005-01-01

    Water resources data for the 2004 water year for Idaho consist of records of stage, discharge, and water quality of streams; stage, contents, and water quality of lakes and reservoirs; discharge of irrigation diversions; and water levels and water quality of groundwater. The three volumes of this report contain discharge records for 209 stream-gaging stations and 8 irrigation diversions; stage-only records for 6 stream-gaging stations; stage only for 6 lakes and reservoirs; contents only for 13 lakes and reservoirs; water quality for 39 stream-gaging stations and partial-record sites, 18 lake sites, and 395 groundwater wells; and water levels for 425 observation network wells. Additional water data were collected at various sites not involved in the systematic data collection program and are published as miscellaneous measurements. Volumes 1 & 2 contain the surface-water and surface-water-quality records. Volume 3 contains the ground-water and ground-water-quality records. These data represent that part of the National Water Data System operated by the U.S. Geological Survey and cooperating State and Federal agencies in Idaho, adjacent States, and Canada.

  9. Rheumatology Informatics System for Effectiveness: A National Informatics-Enabled Registry for Quality Improvement.

    PubMed

    Yazdany, Jinoos; Bansback, Nick; Clowse, Megan; Collier, Deborah; Law, Karen; Liao, Katherine P; Michaud, Kaleb; Morgan, Esi M; Oates, James C; Orozco, Catalina; Reimold, Andreas; Simard, Julia F; Myslinski, Rachel; Kazi, Salahuddin

    2016-12-01

    The Rheumatology Informatics System for Effectiveness (RISE) is a national electronic health record (EHR)-enabled registry. RISE passively collects data from EHRs of participating practices, provides advanced quality measurement and data analytic capacities, and fulfills national quality reporting requirements. Here we report the registry's architecture and initial data, and we demonstrate how RISE is being used to improve the quality of care. RISE is a certified Centers for Medicare and Medicaid Services Qualified Clinical Data Registry, allowing collection of data without individual patient informed consent. We analyzed data between October 1, 2014 and September 30, 2015 to characterize initial practices and patients captured in RISE. We also analyzed medication use among rheumatoid arthritis (RA) patients and performance on several quality measures. Across 55 sites, 312 clinicians contributed data to RISE; 72% were in group practice, 21% in solo practice, and 7% were part of a larger health system. Sites contributed data on 239,302 individuals. Among the subset with RA, 34.4% of patients were taking a biologic or targeted synthetic disease-modifying antirheumatic drug (DMARD) at their last encounter, and 66.7% were receiving a nonbiologic DMARD. Examples of quality measures include that 55.2% had a disease activity score recorded, 53.6% a functional status score, and 91.0% were taking a DMARD in the last year. RISE provides critical infrastructure for improving the quality of care in rheumatology and is a unique data source to generate new knowledge. Data validation and mapping are ongoing and RISE is available to the research and clinical communities to advance rheumatology. © 2016, American College of Rheumatology.

  10. Systematic adaptation of data delivery

    DOEpatents

    Bakken, David Edward

    2016-02-02

    This disclosure describes, in part, a system management component for use in a power grid data network to systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription and the system management component may adjust the data rates in real-time to ensure that the power grid data network does not become overloaded and/or fail. In one example, subscriptions with lower priorities may have their quality of service adjusted before subscriptions with higher priorities. In each instance, the quality of service may be maintained, even if reduced, to meet or exceed the minimum acceptable quality of service for the subscription.
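
    The priority-ordered adjustment described above can be sketched in a few lines. In the toy version below, rates are cut toward each subscription's floor in ascending priority order until the total fits the network's capacity; the field names and the capacity model are assumptions for illustration, not the patent's implementation.

    ```python
    def shed_load(subscriptions, capacity):
        """Reduce subscription rates until total load fits in `capacity`.

        Each subscription is a dict with 'priority' (lower = shed first),
        'rate' (current), and 'min_rate' (floor that must be respected).
        Rates are cut toward their floors in priority order; no
        subscription is dropped below its minimum acceptable rate.
        """
        overload = sum(s["rate"] for s in subscriptions) - capacity
        for sub in sorted(subscriptions, key=lambda s: s["priority"]):
            if overload <= 0:
                break
            cut = min(overload, sub["rate"] - sub["min_rate"])
            sub["rate"] -= cut
            overload -= cut
        return overload <= 0  # False: even minimum rates exceed capacity

    subs = [
        {"priority": 1, "rate": 50, "min_rate": 20},   # shed first
        {"priority": 9, "rate": 100, "min_rate": 80},  # protected longest
    ]
    ok = shed_load(subs, capacity=120)  # priority-1 rate drops to 20
    ```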

  11. Water Resources Data, Alaska, Water Year 2000

    USGS Publications Warehouse

    Meyer, D.F.; Hess, D.L.; Schellekens, M.F.; Smith, C.W.; Snyder, E.F.; Solin, G.L.

    2001-01-01

    Water-resources data for the 2000 water year for Alaska consist of records of stage, discharge, and water quality of streams; stages of lakes; and water levels and water quality of ground-water wells. This volume contains records for water discharge at 106 gaging stations; stage or contents only at 4 gaging stations; water quality at 31 gaging stations; and water levels for 30 observation wells and 1 water-quality well. Also included are data for 47 crest-stage partial-record stations. Additional water data were collected at various sites not involved in the systematic data-collection program and are published as miscellaneous measurements and analyses. These data represent that part of the National Water Data System operated by the U.S. Geological Survey and cooperating State and Federal agencies in Alaska.

  12. Water-quality, streamflow, and meteorological data for the Tualatin River Basin, Oregon, 1991-93

    USGS Publications Warehouse

    Doyle, M.C.; Caldwell, J.M.

    1996-01-01

    Surface-water-quality data, ground-water-quality data, streamflow data, field measurements, aquatic-biology data, meteorological data, and quality-assurance data were collected in the Tualatin River Basin from 1991 to 1993 by the U.S. Geological Survey (USGS) and the Unified Sewerage Agency of Washington County, Oregon (USA). The data from that study, which are part of this report, are presented in American Standard Code for Information Interchange (ASCII) format in subject-specific data files on a Compact Disk-Read Only Memory (CD-ROM). The text of this report describes the objectives of the study, the location of sampling sites, sample-collection and processing techniques, equipment used, laboratory analytical methods, and quality-assurance procedures. The data files on CD-ROM contain the analytical results of water samples collected in the Tualatin River Basin, streamflow measurements of the main-stem Tualatin River and its major tributaries, flow data from the USA wastewater-treatment plants, flow data from stations that divert water from the main-stem Tualatin River, aquatic-biology data, and meteorological data from the Tualatin Valley Irrigation District (TVID) Agrimet Weather Station located in Verboort, Oregon. Specific information regarding the contents of each data file is given in the text. The data files use a series of letter codes that distinguish each line of data. These codes are defined in data tables accompanying the text. Presenting data on CD-ROM offers several advantages: (1) the data can be accessed easily and manipulated by computers, (2) the data can be distributed readily over computer networks, and (3) the data may be more easily transported and stored than a large printed report. These data have been used by the USGS to (1) identify the sources, transport, and fate of nutrients in the Tualatin River Basin, (2) quantify relations among nutrient loads, algal growth, low dissolved-oxygen concentrations, and high pH, and (3) develop and calibrate a water-quality model that allows managers to test options for alleviating water-quality problems.

  13. Report of the international workshop on quality control of monthly climate data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-12-31

    The National Climatic Data Center (NCDC), the US Department of Energy's Carbon Dioxide Information Analysis Center, and the World Meteorological Organization (WMO) cosponsored an international quality control workshop for monthly climate data, October 5-6, 1993, at NCDC. About 40 scientists from around the world participated. The purpose of the meeting was to discuss and compare various quality control methods and to draft recommendations concerning the most successful systems. The near-term goal to improve quality control of CLIMAT messages for the NCDC/WMO publication Monthly Climatic Data for the World was successfully met. An electronic bulletin board was established to post errors and corrections. Improved communications among Global Telecommunication System hubs will be implemented. Advanced quality control algorithms were discussed and improvements were suggested. Further data exchanges were arranged.

  14. Solar Resource & Meteorological Assessment Project (SOLRMAP): Southwest Solar Research Park (Formerly SolarCAT) Rotating Shadowband Radiometer (RSR); Phoenix, Arizona (Data)

    DOE Data Explorer

    Wilcox, S.; Andreas, A.

    2010-09-27

    The U.S. Department of Energy's National Renewable Energy Laboratory collaborates with the solar industry to establish high quality solar and meteorological measurements. This Solar Resource and Meteorological Assessment Project (SOLRMAP) provides high quality measurements to support deployment of power projects in the United States. The no-funds-exchanged collaboration brings NREL solar resource assessment expertise together with industry needs for measurements. The end result is high quality data sets to support the financing, design, and monitoring of large scale solar power projects for industry in addition to research-quality data for NREL model development. NREL provides consultation for instrumentation and station deployment, along with instrument calibrations, data acquisition, quality assessment, data distribution, and summary reports. Industry participants provide equipment, infrastructure, and station maintenance.

  15. Water-quality, bed-sediment, and biological data (October 1992 through September 1993) and statistical summaries of water-quality data (March 1985 through September 1993) for streams in the upper Clark Fork basin, Montana

    USGS Publications Warehouse

    Lambing, John H.

    1994-01-01

    Water, bed sediment, and biota were sampled in streams from Butte to below Missoula as part of a program to characterize aquatic resources in the upper Clark Fork basin of western Montana. Water-quality data were obtained periodically at 16 stations during October 1992 through September 1993 (water year 1993); daily suspended-sediment data were obtained at six of these stations. Bed-sediment and biological data were obtained at 11 stations in August 1993. Sampling stations were located on the Clark Fork and major tributaries. The primary constituents analyzed were trace elements associated with mine tailings from historic mining and smelting activities. Water-quality data include concentrations of major ions, trace elements, and suspended sediment in samples collected periodically during water year 1993. A statistical summary of water-quality data is provided for the period of record at each station since 1985. Daily values of streamflow, suspended-sediment concentration, and suspended-sediment discharge are given for six stations. Bed-sediment data include trace-element concentrations in the fine and bulk fractions. Biological data include trace-element concentrations in whole-body tissue of aquatic benthic insects. Quality-assurance data are reported for analytical results of water, bed sediment, and biota.

  16. 39 CFR 3050.42 - Proceedings to improve the quality of financial data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 3050.42 Proceedings to improve the quality of financial data. The Commission may, on its own motion or on request of an interested party, initiate proceedings to improve the quality, accuracy, or...

  17. Soprano and source: A laryngographic analysis

    NASA Astrophysics Data System (ADS)

    Bateman, Laura Anne

    2005-04-01

    Popular music in the 21st century uses a particular singing quality for female voice that is quite different from the trained classical singing quality. Classical quality has been the subject of a vast body of research, whereas research that deals with non-classical qualities is limited. In order to learn more about these issues, the author chose to do research on singing qualities using a variety of standard voice quality tests. This paper looks at voice qualities found in various different styles of singing: Classical, Belt, Legit, R&B, Jazz, Country, and Pop. The data was elicited from a professional soprano and the voice qualities reflect industry standards. The data set for this paper is limited to samples using the vowel [i]. Laryngographic (LGG) data was generated simultaneously with the audio samples. This paper will focus on the results of the LGG analysis; however, an audio analysis was also performed using Spectrogram, LPC, and FFT. Data from the LGG is used to calculate the contact quotient, speed quotient, and ascending slope. The LGG waveform is also visually assessed. The LGG analysis gives insights into the source vibration for the different singing styles.
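
    Of the source measures named above, the contact quotient is the most mechanical to compute. A common (though not universal) recipe is to threshold each normalized EGG cycle and take the fraction of samples above the threshold; the sketch below follows that recipe on a synthetic pulse and is an assumption-laden illustration, not the author's analysis procedure.

    ```python
    import numpy as np

    def contact_quotient(cycle, threshold=0.25):
        """Estimate the contact quotient of one laryngograph (EGG) cycle.

        cycle     : samples of a single glottal cycle, larger = more contact
        threshold : fraction of the cycle's amplitude range above which the
                    vocal folds are counted as "in contact" (25-35% is a
                    common convention, assumed here)

        Returns contacted samples / total samples in the cycle.
        """
        cycle = np.asarray(cycle, dtype=float)
        level = cycle.min() + threshold * (cycle.max() - cycle.min())
        return float(np.mean(cycle > level))

    # A crude synthetic cycle: brief contact peak, longer open phase
    t = np.linspace(0, 1, 200, endpoint=False)
    egg = np.exp(-((t - 0.3) / 0.12) ** 2)   # hypothetical waveform
    cq = contact_quotient(egg)               # ~0.3 for this pulse shape
    ```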

  18. A data-driven approach to quality risk management.

    PubMed

    Alemayehu, Demissie; Alvir, Jose; Levenstein, Marcia; Nickerson, David

    2013-10-01

    An effective clinical trial strategy to ensure patient safety as well as trial quality and efficiency involves an integrated approach, including prospective identification of risk factors, mitigation of the risks through proper study design and execution, and assessment of quality metrics in real-time. Such an integrated quality management plan may also be enhanced by using data-driven techniques to identify risk factors that are most relevant in predicting quality issues associated with a trial. In this paper, we illustrate such an approach using data collected from actual clinical trials. Several statistical methods were employed, including the Wilcoxon rank-sum test and logistic regression, to identify the presence of association between risk factors and the occurrence of quality issues, applied to data on quality of clinical trials sponsored by Pfizer. Only a subset of the risk factors had a significant association with quality issues; these included whether the study used a placebo, whether the agent was a biologic, unusual packaging labels, complex dosing, and over 25 planned procedures. Proper implementation of the strategy can help to optimize resource utilization without compromising trial integrity and patient safety.
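
    Both named techniques are routine to reproduce on a small scale. The sketch below generates a synthetic trial table, applies the Wilcoxon rank-sum test to one continuous risk factor, and fits a logistic regression to express the association as an odds ratio; the data and single-factor setup are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import ranksums
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic trials: one continuous risk factor (planned procedures)
    # and a binary outcome (quality issue observed).  Illustrative only.
    n_procedures = rng.integers(5, 60, size=200)
    p_issue = 1 / (1 + np.exp(-(n_procedures - 25) / 8))
    quality_issue = (rng.random(200) < p_issue).astype(int)

    # Wilcoxon rank-sum: do trials with issues have more planned procedures?
    stat, p = ranksums(n_procedures[quality_issue == 1],
                       n_procedures[quality_issue == 0])

    # Logistic regression gives the direction and size of the association.
    model = LogisticRegression().fit(n_procedures.reshape(-1, 1), quality_issue)
    odds_ratio_per_procedure = float(np.exp(model.coef_[0, 0]))
    ```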

  19. Engaging clinical nurses in quality and performance improvement activities.

    PubMed

    Albanese, Madeline P; Evans, Dietra A; Schantz, Cathy A; Bowen, Margaret; Disbot, Maureen; Moffa, Joseph S; Piesieski, Patricia; Polomano, Rosemary C

    2010-01-01

    Nursing performance measures are an integral part of quality initiatives in acute care; however, organizations face numerous challenges in developing infrastructures to support quality improvement processes and timely dissemination of outcomes data. At the Hospital of the University of Pennsylvania, a Magnet-designated organization, extensive work has been conducted to incorporate nursing-related outcomes in the organization's quality plan and to integrate roles for clinical nurses into the Department of Nursing and organization's core performance-based programs. Content and strategies that promote active involvement of nurses and prepare them to be competent and confident stakeholders in quality initiatives are presented. Engaging clinical nurses in the work of quality and performance improvement is essential to achieving excellence in clinical care. It is important to have structures and processes in place to bring meaningful data to the bedside; however, it is equally important to incorporate outcomes into practice. When nurses are educated about performance and quality measures, are engaged in identifying outcomes and collecting meaningful data, are active participants in disseminating quality reports, and are able to recognize the value of these activities, data become one with practice.

  20. Design, analysis, and interpretation of field quality-control data for water-sampling projects

    USGS Publications Warehouse

    Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.

    2015-01-01

    The report provides extensive information about statistical methods used to analyze quality-control data in order to estimate potential bias and variability in environmental data. These methods include construction of confidence intervals on various statistical measures, such as the mean, percentiles and percentages, and standard deviation. The methods are used to compare quality-control results with the larger set of environmental data in order to determine whether the effects of bias and variability might interfere with interpretation of these data. Examples from published reports are presented to illustrate how the methods are applied, how bias and variability are reported, and how the interpretation of environmental data can be qualified based on the quality-control analysis.
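
    As one concrete instance of the methods the report describes, a t-based confidence interval on the mean of field-blank results bounds potential contamination bias; an interval that excludes zero suggests bias may interfere with interpreting low-level environmental concentrations. The blank values below are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    def mean_ci(x, confidence=0.95):
        """Two-sided t confidence interval on the mean of QC results.

        Applied to field-blank concentrations, this bounds potential
        positive bias from contamination; applied to replicate
        differences, it bounds sampling/analysis variability.
        """
        x = np.asarray(x, dtype=float)
        m = x.mean()
        half = stats.t.ppf(0.5 + confidence / 2, df=len(x) - 1) * stats.sem(x)
        return m - half, m + half

    # Hypothetical field-blank nitrate results, mg/L
    low, high = mean_ci([0.002, 0.004, 0.001, 0.006, 0.003])
    ```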

  1. Exploring practical approaches to maximising data quality in electronic healthcare records in the primary care setting and associated benefits. Report of panel-led discussion held at SAPC in July 2014.

    PubMed

    Dungey, Sheena; Glew, Simon; Heyes, Barbara; Macleod, John; Tate, A Rosemary

    2016-09-01

    Electronic healthcare records provide information about patient care over time, which not only affords the opportunity to improve patient care directly through effective monitoring and identification of care requirements but also offers a unique platform for both clinical and service-model research essential to the longer-term development of the health service. The quality of the recorded data can, however, be variable and can compromise the validity of data use for both primary and secondary purposes. In order to explore the challenges, benefits, and approaches involved in recording high-quality primary care electronic records, a Clinical Practice Research Datalink (CPRD)-sponsored workshop was held at the Society of Academic Primary Care (SAPC) conference in 2014 with the aim of engaging GPs and other data users. The workshop was held as a structured discussion, led by an expert panel and focused on three questions: (1) What are the data quality priorities for clinicians and researchers? How do these priorities differ or overlap? (2) What challenges might GPs face in providing good data quality both for treating their patients and for research? Do these aims conflict? (3) What tools (such as data metrics and visualisations or software components) could assist the GP in improving data quality and patient management, and could these tie in with analytical processes occurring at the research stage? The discussion highlighted both overlap and differences in the perceived data quality priorities and challenges for different user groups. Five key areas of focus were agreed upon and recommendations determined for moving forward in improving quality. The importance of high-quality electronic healthcare records has been set forth, along with the need for a practical, user-considered, and collaborative approach to their improvement.

  2. Using IT to improve quality at NewYork-Presbyterian Hospital: a requirements-driven strategic planning process.

    PubMed

    Kuperman, Gilad J; Boyer, Aurelia; Cole, Curt; Forman, Bruce; Stetson, Peter D; Cooper, Mary

    2006-01-01

    At NewYork-Presbyterian Hospital, we are committed to the delivery of high quality care. We have implemented a strategic planning process to determine the information technology initiatives that will best help us improve quality. The process began with the creation of a Clinical Quality and IT Committee. The Committee identified 2 high priority goals that would enable demonstrably high quality care: 1) excellence at data warehousing, and 2) optimal use of automated clinical documentation to capture encounter-related quality and safety data. For each high priority goal, a working group was created to develop specific recommendations. The Data Warehousing subgroup has recommended the implementation of an architecture management process and an improved ability for users to get access to aggregate data. The Structured Documentation subgroup is establishing recommendations for a documentation template creation process. The strategic planning process at times is slow, but assures that the organization is focusing on the information technology activities most likely to lead to improved quality.

  3. Using IT to Improve Quality at NewYork-Presbyterian Hospital: A Requirements-Driven Strategic Planning Process

    PubMed Central

    Kuperman, Gilad J.; Boyer, Aurelia; Cole, Curt; Forman, Bruce; Stetson, Peter D.; Cooper, Mary

    2006-01-01

    At NewYork-Presbyterian Hospital, we are committed to the delivery of high quality care. We have implemented a strategic planning process to determine the information technology initiatives that will best help us improve quality. The process began with the creation of a Clinical Quality and IT Committee. The Committee identified 2 high priority goals that would enable demonstrably high quality care: 1) excellence at data warehousing, and 2) optimal use of automated clinical documentation to capture encounter-related quality and safety data. For each high priority goal, a working group was created to develop specific recommendations. The Data Warehousing subgroup has recommended the implementation of an architecture management process and an improved ability for users to get access to aggregate data. The Structured Documentation subgroup is establishing recommendations for a documentation template creation process. The strategic planning process at times is slow, but assures that the organization is focusing on the information technology activities most likely to lead to improved quality. PMID:17238381

  4. The Airline Quality Rating 2003

    NASA Technical Reports Server (NTRS)

    Bowen, Brent D.; Headley, Dean E.

    2003-01-01

    The Airline Quality Rating (AQR) was developed and first announced in early 1991 as an objective method of comparing airline quality on combined multiple performance criteria. This current report, the Airline Quality Rating 2003, reflects monthly Airline Quality Rating scores for 2002. AQR scores for the calendar year 2002 are based on 15 elements that focus on airline performance areas important to air travel consumers. The Airline Quality Rating 2003 is a summary of month-by-month quality ratings for the 10 largest U.S. airlines operating during 2002. Using the Airline Quality Rating system of weighted averages and monthly performance data in the areas of on-time arrivals, involuntary denied boardings, mishandled baggage, and a combination of 12 customer complaint categories, airlines' comparative performance for the calendar year of 2002 is reported. This research monograph contains a brief summary of the AQR methodology, detailed data and charts that track comparative quality for domestic airline operations for the 12-month period of 2002, and industry average results. Also, comparative Airline Quality Rating data for 2001 are included for each airline to provide historical perspective regarding performance quality in the industry.
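
    The weighted-average mechanics are simple to illustrate. In the sketch below each element carries a weight and a sign (positive where more is better, negative where more is worse), and the rating is the signed weighted sum divided by the total weight; the element values are invented, and the weights are only in the spirit of the published AQR factors, not quoted from them.

    ```python
    def weighted_rating(elements):
        """Combine performance elements into a single rating.

        Each element is (weight, impact, value): impact is +1 when more is
        better (e.g. on-time arrival rate) and -1 when more is worse
        (e.g. mishandled-bag rate).  Weights and values are hypothetical.
        """
        num = sum(w * sign * v for w, sign, v in elements)
        den = sum(w for w, _, _ in elements)
        return num / den

    score = weighted_rating([
        (8.6, +1, 0.81),   # on-time arrival rate
        (8.0, -1, 0.64),   # involuntary denied boardings per 10,000
        (7.9, -1, 4.36),   # mishandled bags per 1,000
        (7.2, -1, 1.02),   # complaints per 100,000
    ])
    ```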

  5. The Airline Quality Rating 2002

    NASA Technical Reports Server (NTRS)

    Bowen, Brent D.; Headley, Dean E.

    2002-01-01

    The Airline Quality Rating (AQR) was developed and first announced in early 1991 as an objective method of comparing airline quality on combined multiple performance criteria. This current report, Airline Quality Rating 2002, reflects monthly Airline Quality Rating scores for 2001. AQR scores for the calendar year 2001 are based on 15 elements that focus on airline performance areas important to air travel consumers. The Airline Quality Rating 2002 is a summary of month-by-month quality ratings for the 11 largest U.S. airlines operating during 2001. Using the Airline Quality Rating system of weighted averages and monthly performance data in the areas of on-time arrivals, involuntary denied boardings, mishandled baggage, and a combination of 12 customer complaint categories, airlines' comparative performance for the calendar year of 2001 is reported. This research monograph contains a brief summary of the AQR methodology, detailed data and charts that track comparative quality for domestic airline operations for the 12-month period of 2001, and industry average results. Also, comparative Airline Quality Rating data for 2000 are included for each airline to provide historical perspective regarding performance quality in the industry.

  6. The Airline Quality Rating 2001

    NASA Technical Reports Server (NTRS)

    Bowen, Brent D.; Headley, Dean E.

    2001-01-01

    The Airline Quality Rating (AQR) was developed and first announced in early 1991 as an objective method of comparing airline quality on combined multiple performance criteria. This current report, Airline Quality Rating 2001, reflects monthly Airline Quality Rating scores for 2000. AQR scores for the calendar year 2000 are based on 15 elements that focus on airline performance areas important to air travel consumers. The Airline Quality Rating 2001 is a summary of month-by-month quality ratings for the ten major U.S. airlines operating during 2000. Using the Airline Quality Rating system of weighted averages and monthly performance data in the areas of on-time arrivals, involuntary denied boardings, mishandled baggage, and a combination of 12 customer complaint categories, major airlines' comparative performance for the calendar year of 2000 is reported. This research monograph contains a brief summary of the AQR methodology, detailed data and charts that track comparative quality for major airlines' domestic operations for the 12-month period of 2000, and industry average results. Also, comparative Airline Quality Rating data for 1999 are included for each airline to provide historical perspective regarding performance quality in the industry.

  7. Water resources data Virginia water year 2005 Volume 2. Ground-water level and ground-water quality records

    USGS Publications Warehouse

    Wicklein, Shaun M.; Powell, Eugene D.; Guyer, Joel R.; Owens, Joseph A.

    2006-01-01

    Water-resources data for the 2005 water year for Virginia consist of records of water levels and water quality of ground-water wells. This report (Volume 2. Ground-Water-Level and Ground-Water-Quality Records) contains water levels at 349 observation wells and water quality at 29 wells. Locations of these wells are shown on figures 3 through 8. The data in this report represent that part of the National Water Data System collected by the U.S. Geological Survey and cooperating State and Federal agencies in Virginia.

  8. Use of quality measures for Medicaid behavioral health services by state agencies: implications for health care reform.

    PubMed

    Seibert, Julie; Fields, Suzanne; Fullerton, Catherine Anne; Mark, Tami L; Malkani, Sabrina; Walsh, Christine; Ehrlich, Emily; Imshaug, Melina; Tabrizi, Maryam

    2015-06-01

    The structure-process-outcome quality framework espoused by Donabedian provides a conceptual way to examine and prioritize behavioral health quality measures used by states. This report presents an environmental scan of the quality measures and satisfaction surveys that state Medicaid managed care and behavioral health agencies used prior to Medicaid expansion in 2014. Data were collected by reviewing online documents related to Medicaid managed care contracts for behavioral health, quality strategies, quality improvement plans, quality and performance indicators data, annual outcomes reports, performance measure specification manuals, legislative reports, and Medicaid waiver requests for proposals. Information was publicly available for 29 states. Most states relied on process measures, along with some structure and outcome measures. Although all states reported on at least one process measure of behavioral health quality, 52% of states did not use any outcomes measures and 48% of states had no structure measures. A majority of the states (69%) used behavioral health measures from the National Committee for Quality Assurance's Healthcare Effectiveness Data and Information Set, and all but one state in the sample (97%) used consumer experience-of-care surveys. Many states supplemented these data with locally developed behavioral health indicators that rely on administrative and nonadministrative data. State Medicaid agencies are using nationally recognized as well as local measures to assess quality of behavioral health care. Findings indicate a need for additional nationally endorsed measures in the area of substance use disorders and treatment outcomes.

  9. Analysis of trends in water-quality data for water conservation area 3A, the Everglades, Florida

    USGS Publications Warehouse

    Mattraw, H.C.; Scheidt, D.J.; Federico, A.C.

    1987-01-01

    Rainfall and water quality databases from the South Florida Water Management District were used to evaluate water quality trends at 10 locations near or in Water Conservation Area 3A in The Everglades. The Seasonal Kendall test was applied to specific conductance, orthophosphate-phosphorus, nitrate-nitrogen, total Kjeldahl nitrogen, and total nitrogen regression residuals for the period 1978-82. Residuals of orthophosphate and nitrate quadratic models, based on antecedent 7-day rainfall at inflow gate S-11B, were the only two constituent-structure pairs that showed apparently significant (p < 0.05) increases in constituent concentrations. Elimination of regression models with distinct residual patterns and data outliers resulted in 17 statistically significant station/water-quality combinations for trend analysis. No water quality trends were observed. The 1979 Memorandum of Agreement outlining the water quality monitoring program between the Everglades National Park and the U.S. Army Corps of Engineers stressed collection four times a year at three stations and extensive coverage of water quality properties. Trend analysis and other rigorous statistical evaluation programs are better suited to data monitoring programs that include more frequent sampling and that are organized in a water quality data management system. Pronounced areal differences in water quality suggest that a water quality monitoring system for Shark River Slough in Everglades National Park should include collection locations near the source of inflow to Water Conservation Area 3A. (Author's abstract)
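
    The Seasonal Kendall test itself is short enough to sketch: compute Kendall's S for each season's values ordered by year, sum the S statistics and their variances across seasons, and convert to a Z score. The version below ignores ties and serial correlation, which production implementations (and likely the study above) correct for.

    ```python
    import numpy as np
    from scipy.stats import kendalltau, norm

    def seasonal_kendall(values_by_season):
        """Seasonal Kendall trend test (minimal sketch, no tie or
        serial-correlation corrections).

        values_by_season : list of 1-D arrays, one per season, each
                           ordered by year.
        """
        S, var = 0.0, 0.0
        for v in values_by_season:
            v = np.asarray(v, dtype=float)
            n = len(v)
            tau, _ = kendalltau(np.arange(n), v)
            S += tau * n * (n - 1) / 2                 # back out S from tau
            var += n * (n - 1) * (2 * n + 5) / 18.0    # no-ties variance
        z = (S - np.sign(S)) / np.sqrt(var)            # continuity correction
        return z, 2 * norm.sf(abs(z))                  # Z and two-sided p

    # e.g. z, p = seasonal_kendall([wet_season_values, dry_season_values])
    ```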

  10. Quality-Assurance Data for Routine Water Analyses by the U.S. Geological Survey Laboratory in Troy, New York--July 1999 through June 2001

    USGS Publications Warehouse

    Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.

    2006-01-01

    The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's LabMaster data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality-control samples analyzed from July 1999 through June 2001. Results for the quality-control samples for 18 analytical procedures were evaluated for bias and precision. Control charts indicate that data for eight of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were acid-neutralizing capacity, total monomeric aluminum, total aluminum, calcium, chloride, nitrate (ion chromatography and colorimetric methods), and sulfate. The total aluminum and dissolved organic carbon procedures were biased throughout the analysis period for the high-concentration sample, but were within control limits. The calcium and specific conductance procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The magnesium procedure was biased for the high-concentration and low-concentration samples, but was within control limits. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 14 of 15 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for dissolved organic carbon. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 17 of the 18 analytes. At least 90 percent of the samples met data-quality objectives for all analytes except ammonium (81 percent of samples met objectives), chloride (75 percent of samples met objectives), and sodium (86 percent of samples met objectives). Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated good data quality over the time period, with most ratings for each sample in the good to excellent range. The P-sample (low-ionic-strength constituents) analysis had one satisfactory rating for the specific conductance procedure in one study. The T-sample (trace constituents) analysis had one satisfactory rating for the aluminum procedure in one study and one unsatisfactory rating for the sodium procedure in another. The remainder of the samples had good or excellent ratings for each study. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 89 percent of the samples met data-quality objectives for 10 of the 14 analytes; the exceptions were ammonium, total aluminum, dissolved organic carbon, and sodium. Results indicate a positive bias for the ammonium procedure in all studies. Data-quality objectives were not met in 50 percent of samples analyzed for total aluminum, 38 percent of samples analyzed for dissolved organic carbon, and 27 percent of samples analyzed for sodium.
Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 91 percent of the samples analyzed for calcium, chloride, fluoride, magnesium, pH, potassium, and sulfate. Data-quality objectives were met by 75 percent of the samples analyzed for sodium and 58 percent of the samples analyzed for specific conductance.
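
    The triplicate evaluation reduces to a coefficient-of-variation check per analyte. A minimal sketch, with invented concentrations and a hypothetical 10-percent objective standing in for the report's analyte-specific data-quality objectives:

    ```python
    import numpy as np

    def cv_percent(triplicate):
        """Coefficient of variation of one triplicate set, in percent."""
        x = np.asarray(triplicate, dtype=float)
        return 100.0 * x.std(ddof=1) / x.mean()

    # Flag triplicate sets that miss a (hypothetical) 10% objective
    triplicates = {"calcium": [2.31, 2.28, 2.35], "sodium": [0.91, 1.20, 0.91]}
    flags = {name: cv_percent(v) > 10.0 for name, v in triplicates.items()}
    # -> calcium passes (~1.5%), sodium is flagged (~17%)
    ```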

  11. Methods for computing water-quality loads at sites in the U.S. Geological Survey National Water Quality Network

    USGS Publications Warehouse

    Lee, Casey J.; Murphy, Jennifer C.; Crawford, Charles G.; Deacon, Jeffrey R.

    2017-10-24

    The U.S. Geological Survey publishes information on concentrations and loads of water-quality constituents at 111 sites across the United States as part of the U.S. Geological Survey National Water Quality Network (NWQN). This report details historical and updated methods for computing water-quality loads at NWQN sites. The primary updates to historical load estimation methods include (1) an adaptation to methods for computing loads to the Gulf of Mexico; (2) the inclusion of loads computed using the Weighted Regressions on Time, Discharge, and Season (WRTDS) method; and (3) the inclusion of loads computed using continuous water-quality data. Loads computed using WRTDS and continuous water-quality data are provided along with those computed using historical methods. Various aspects of method updates are evaluated in this report to help users of water-quality loading data determine which estimation methods best suit their particular application.
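
    Whatever the estimator, the arithmetic at the bottom of every load method is concentration times discharge times a unit conversion; regression methods such as WRTDS differ in how they estimate concentration on unsampled days, not in this final step. A sketch with hypothetical nitrate values:

    ```python
    import numpy as np

    # Conversion: (mg/L) x (ft^3/s) -> kg/day.
    # 1 ft^3 = 28.3168 L; 86,400 s/day; 1e6 mg/kg  =>  ~2.4466 kg/day per unit.
    MG_L_CFS_TO_KG_DAY = 28.3168 * 86400 / 1e6

    def daily_load(conc_mg_per_l, flow_cfs):
        """Constituent load in kg/day from concentration and streamflow.

        The statistical work in methods like WRTDS goes into estimating
        concentration on unsampled days; load is then this product.
        """
        return np.asarray(conc_mg_per_l) * np.asarray(flow_cfs) * MG_L_CFS_TO_KG_DAY

    loads = daily_load([1.8, 2.1], [430.0, 512.0])   # hypothetical nitrate data
    ```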

  12. Tandem mass spectrometry data quality assessment by self-convolution.

    PubMed

    Choo, Keng Wah; Tham, Wai Mun

    2007-09-20

    Many algorithms have been developed for deciphering tandem mass spectrometry (MS) data sets. They can essentially be clustered into two classes: the first performs searches against a theoretical mass spectrum database, while the second is based on de novo sequencing from raw mass spectrometry data. The quality of the mass spectra significantly affects the protein identification process in both instances. This prompted the authors to explore ways to measure the quality of MS data sets before subjecting them to the protein identification algorithms, thus allowing for more meaningful searches and an increased confidence level in the proteins identified. The proposed method measures the quality of MS data sets based on the symmetric property of the b- and y-ion peaks present in an MS spectrum. Self-convolution on the MS data and its time-reversed copy was employed. Due to the symmetric nature of the b-ion and y-ion peaks, the self-convolution result of a good spectrum produces its highest intensity peak at the midpoint. To reduce processing time, self-convolution was achieved using the Fast Fourier Transform and its inverse, followed by removal of the "DC" (direct current) component and normalisation of the data set. The quality score was defined as the ratio of the intensity at the midpoint to the remaining peaks of the convolution result. The method was validated using both theoretical mass spectra, with various permutations, and several real MS data sets. The results were encouraging, revealing a high percentage of positive prediction rates for spectra with good quality scores. We have demonstrated in this work a method for determining the quality of a tandem MS data set. By pre-determining the quality of tandem MS data before subjecting them to protein identification algorithms, spurious protein predictions due to poor tandem MS data are avoided, giving scientists greater confidence in the predicted results. We conclude that the algorithm performs well and could potentially be used as a pre-processing step for all mass spectrometry based protein identification tools.
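
    The midpoint property follows from the pairing: if the m/z axis is binned from zero up to the precursor mass, complementary b- and y-ion indices sum to roughly the same top index, so the self-convolution stacks those pairs at the center of its output. The sketch below mirrors that logic; the exact normalisation and the denominator of the score are assumptions, since the paper defines its own ratio.

    ```python
    import numpy as np

    def spectrum_quality(intensities):
        """Score a binned MS/MS spectrum by the b-/y-ion symmetry heuristic.

        Assumes the m/z axis is binned from zero up to the precursor mass,
        so symmetric fragment pairs pile up at the convolution midpoint.
        """
        x = np.asarray(intensities, dtype=float)
        x = x / x.max()                       # normalise the data set
        conv = np.convolve(x, x)              # self-convolution (FFT also works)
        conv = conv - conv.mean()             # remove the "DC" component
        mid = len(conv) // 2                  # where complementary pairs land
        rest = np.abs(np.delete(conv, mid))
        return float(conv[mid] / rest.max())  # assumed form of the score ratio

    # Toy check: peaks symmetric about the top index score far above peaks
    # scattered at random positions.
    good = np.zeros(100); good[[20, 79, 35, 64]] = 1.0
    bad = np.zeros(100); bad[[20, 33, 47, 91]] = 1.0
    assert spectrum_quality(good) > spectrum_quality(bad)
    ```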

  13. Triangle area water supply monitoring project, October 1988 through September 2001, North Carolina -- description of the water-quality network, sampling and analysis methods, and quality-assurance practices

    USGS Publications Warehouse

    Oblinger, Carolyn J.

    2004-01-01

    The Triangle Area Water Supply Monitoring Project was initiated in October 1988 to provide long-term water-quality data for six area water-supply reservoirs and their tributaries. In addition, the project provides data that can be used to determine the effectiveness of large-scale changes in water-resource management practices, to document differences in water quality among water-supply types (large multiuse reservoir, small reservoir, run-of-river), and to supply tributary-loading and in-lake data for water-quality modeling of Falls and Jordan Lakes. By September 2001, the project had progressed through four phases and included as many as 34 sites (in 1991). Most sites were sampled and analyzed by the U.S. Geological Survey. Some sites were already a part of the North Carolina Division of Water Quality statewide ambient water-quality monitoring network and were sampled by the Division of Water Quality. The network has provided data on streamflow, physical properties, and concentrations of nutrients, major ions, metals, trace elements, chlorophyll, total organic carbon, suspended sediment, and selected synthetic organic compounds. Project quality-assurance activities include written procedures for sample collection, record management and archive, collection of field quality-control samples (blank samples and replicate samples), and monitoring the quality of field supplies. In addition to project quality-assurance activities, the quality of laboratory analyses was assessed through laboratory quality-assurance practices and an independent laboratory quality-control assessment provided by the U.S. Geological Survey Branch of Quality Systems through the Blind Inorganic Sample Project and the Organic Blind Sample Project.

  14. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Archiving and Quality Control

    NASA Astrophysics Data System (ADS)

    He, B.; Cui, C.; Fan, D.; Li, C.; Xiao, J.; Yu, C.; Wang, C.; Cao, Z.; Chen, J.; Yi, W.; Li, S.; Mi, L.; Yang, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences) (Cui et al. 2014). To archive the astronomical data in China, we present the implementation of the astronomical data archiving system (ADAS). Data archiving and quality control are the infrastructure of AstroCloud. Throughout the entire data life cycle, the archiving system standardizes data, transfers data, logs observational data, archives ambient data, and stores these data and metadata in a database. Quality control covers the whole process and all aspects of data archiving.

  15. The Value of Reliable Data: Interactive Data Tools from the National Comprehensive Center for Teacher Quality. Policy-to-Practice Brief. Number 1

    ERIC Educational Resources Information Center

    National Comprehensive Center for Teacher Quality, 2008

    2008-01-01

    The National Comprehensive Center for Teacher Quality (TQ Center) designed the Interactive Data Tools to provide users with access to state and national data that can be helpful in assessing the qualifications of teachers in the states and the extent to which a state's teacher policy climate generally supports teacher quality. The Interactive Data…

  16. Data Quality Control of the French Permanent Broadband Network in the RESIF Framework.

    NASA Astrophysics Data System (ADS)

    Grunberg, M.; Lambotte, S.; Engels, F.

    2014-12-01

    In the framework of the RESIF (Réseau Sismologique et géodésique Français) project, a new information system is being set up to improve the management and distribution of high-quality data from the different elements of RESIF. Within this information system, EOST (in Strasbourg) is in charge of collecting the real-time permanent broadband seismic waveforms and performing quality control on these data. The real-time and validated data sets are pushed to the French National Distribution Center (ISTerre/Grenoble) to make them publicly available. Furthermore, EOST hosts the BCSF-ReNaSS, in charge of the French metropolitan seismic bulletin, which allows it to benefit from high-end quality control based on the national and worldwide seismicity. Here we present the real-time seismic data flow from the stations of the French National Broadband Network to EOST, and then the data quality control procedures that were recently installed, including some new developments. The data quality control consists in applying a variety of processes to check the consistency of the whole system from the stations to the data center. This allows us to verify that instruments and data transmission are operating correctly. Moreover, time quality is critical for most scientific data applications. To face this challenge and check the consistency of polarities and amplitudes, we deployed several high-end processes, including a noise-correlation procedure to check for timing accuracy (instrumental time errors result in a time shift of the whole cross-correlation, clearly distinct from shifts due to changes in the medium's physical properties), and a systematic comparison of synthetic and real data for teleseismic earthquakes of magnitude larger than 6.5 to detect timing errors as well as polarity and amplitude problems.
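
    The timing check rests on one observable: a clock error shifts the entire noise cross-correlation between two stations by a constant lag, day after day, whereas medium changes deform only parts of the correlation. A minimal sketch of the lag measurement on synthetic noise (station selection, windowing, and stacking omitted):

    ```python
    import numpy as np

    def clock_drift_samples(trace_a, trace_b):
        """Lag (in samples) of the peak cross-correlation of two traces.

        Repeated day by day for a station pair, a steady drift of this
        lag points to an instrumental timing error.
        """
        a = (trace_a - trace_a.mean()) / trace_a.std()
        b = (trace_b - trace_b.mean()) / trace_b.std()
        cc = np.correlate(a, b, mode="full")
        return int(np.argmax(cc)) - (len(b) - 1)

    # Synthetic check: shift a noise trace by 17 samples, recover the lag
    rng = np.random.default_rng(1)
    noise = rng.standard_normal(10_000)
    assert clock_drift_samples(np.roll(noise, 17), noise) == 17
    ```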

  17. Assessment of groundwater quality data for the Turtle Mountain Indian Reservation, Rolette County, North Dakota

    USGS Publications Warehouse

    Lundgren, Robert F.; Vining, Kevin C.

    2013-01-01

    The Turtle Mountain Indian Reservation relies on groundwater supplies to meet community and economic needs. The U.S. Geological Survey, in cooperation with the Turtle Mountain Band of Chippewa Indians, examined historical groundwater-level and groundwater-quality data for the Fox Hills, Hell Creek, Rolla, and Shell Valley aquifers. The two main sources of groundwater-quality data were the U.S. Geological Survey National Water Information System database and the North Dakota State Water Commission database. Data included major ions, trace elements, nutrients, field properties, and physical properties. The Fox Hills and Hell Creek aquifers had few groundwater-quality data; this lack of data limits any detailed assessments that can be made about these aquifers. Data for the Rolla aquifer exist from 1978 through 1980 only. The concentrations of some water-quality constituents exceeded the U.S. Environmental Protection Agency secondary maximum contaminant levels. No samples were analyzed for pesticides or hydrocarbons. Numerous water-quality samples have been obtained from the Shell Valley aquifer. About one-half of the water samples from the Shell Valley aquifer had concentrations of iron, manganese, sulfate, and dissolved solids that exceeded the U.S. Environmental Protection Agency secondary maximum contaminant levels. Overall, the data did not indicate obvious patterns in concentrations.

  18. Quality assurance of data collection in the multi-site community randomized trial and prevalence survey of the children's healthy living program.

    PubMed

    Yamanaka, Ashley; Fialkowski, Marie Kainoa; Wilkens, Lynne; Li, Fenfang; Ettienne, Reynolette; Fleming, Travis; Power, Julianne; Deenik, Jonathan; Coleman, Patricia; Leon Guerrero, Rachael; Novotny, Rachel

    2016-09-02

    Quality assurance plays an important role in research by assuring data integrity and, thus, valid study results. We aim to describe and share the results of the quality assurance (QA) process used to guide the data collection process in a multi-site childhood obesity prevalence study and intervention trial across the US Affiliated Pacific Region. Quality assurance assessments following a standardized protocol were conducted by one assessor in every participating site. Results were summarized to examine and align the implementation of protocol procedures across diverse settings. Data collection protocols focused on food and physical activity were adhered to closely; however, protocols for handling completed forms and ensuring data security showed more variability. Quality assurance protocols are common in the clinical literature but are limited in multi-site community-based studies, especially in underserved populations. The reduction in the number of QA problems found in the second data collection period of the intervention study, as compared to the first, attests to the value of this assessment. This paper can serve as a reference for similar studies wishing to implement quality assurance protocols for the data collection process to preserve data integrity and enhance the validity of study findings. NIH clinical trial #NCT01881373.

  19. The Improvement of Spatial-Temporal PM2.5 Resolution in Taiwan by Using Data Assimilation Method

    NASA Astrophysics Data System (ADS)

    Lin, Yong-Qing; Lin, Yuan-Chien

    2017-04-01

    Forecasting air pollution concentrations, e.g., the concentration of PM2.5, is of great significance for protecting human health and the environment. Accurate prediction of PM2.5 concentrations is limited by the number and data quality of air quality monitoring stations. The spatial and temporal variations of PM2.5 concentrations are measured by 76 national air quality monitoring stations (built by the TW-EPA) in Taiwan. These national stations are costly and scarce because of their highly precise instruments and their size. Therefore, many places are still out of the range of the national monitoring stations. Recently, an enormous number of portable air quality sensors called "AirBox", developed jointly by the Taiwan government and a private company, have been deployed. By virtue of its low price and portability, the AirBox can provide higher-resolution space-time PM2.5 measurements. However, the spatiotemporal distribution and data quality differ between the AirBoxes and the national monitoring stations. To integrate the heterogeneous PM2.5 data, a data assimilation method should be applied before further analysis. In this study, we propose a data assimilation method based on the Ensemble Kalman Filter (EnKF), a variant of the classic Kalman filter, which can be used to combine additional heterogeneous data from different sources during modeling to improve the estimation of spatial-temporal PM2.5 concentrations. The assimilation procedure uses the advantages of the two kinds of heterogeneous data and merges them to produce the final estimate. The results show that combining AirBox PM2.5 data as additional information in our EnKF-based model yields a better estimation of spatial-temporal PM2.5 concentrations and improves their space-time resolution. Under the approach proposed in this study, the higher spatial-temporal resolution can provide very useful information for better spatial-temporal data analysis and further environmental management, such as air pollution source localization and micro-scale air pollution analysis. Keywords: PM2.5, Data Assimilation, Ensemble Kalman Filter, Air Quality
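
    To make the analysis step concrete, the following minimal sketch (Python/NumPy; not the authors' code) shows a stochastic EnKF update that pulls an ensemble of gridded PM2.5 states toward sparse, accurate station observations. The grid size, observation operator, and error variances are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def enkf_update(ensemble, H, y, r):
            """One stochastic EnKF analysis step.
            ensemble: (n_members, n_state) forecast states
            H: (n_obs, n_state) linear observation operator
            y: (n_obs,) observations with error variance r."""
            X = ensemble
            n = X.shape[0]
            A = X - X.mean(axis=0)                      # ensemble anomalies
            HX = X @ H.T                                # ensemble in observation space
            HA = HX - HX.mean(axis=0)
            P_hh = HA.T @ HA / (n - 1) + r * np.eye(H.shape[0])
            P_xh = A.T @ HA / (n - 1)
            K = P_xh @ np.linalg.inv(P_hh)              # Kalman gain
            Y = y + rng.normal(0.0, np.sqrt(r), size=HX.shape)  # perturbed obs
            return X + (Y - HX) @ K.T

        # Hypothetical 5-cell PM2.5 field (ug/m3) observed at cells 1 and 3
        n_members = 200
        H = np.zeros((2, 5)); H[0, 1] = H[1, 3] = 1.0
        truth = np.array([20.0, 35.0, 50.0, 65.0, 80.0])
        ensemble = truth + rng.normal(0.0, 15.0, size=(n_members, 5))
        y = H @ truth + rng.normal(0.0, 2.0, size=2)
        analysis = enkf_update(ensemble, H, y, r=4.0)
        print(analysis.mean(axis=0))   # observed cells pulled strongly toward y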

  20. Food Composition Tables in Southeast Asia: The Contribution of the SMILING Project.

    PubMed

    Hulshof, Paul; Doets, Esmee; Seyha, Sok; Bunthang, Touch; Vonglokham, Manithong; Kounnavong, Sengchanh; Famida, Umi; Muslimatun, Siti; Santika, Otte; Prihatini, Sri; Nazarudin, Nazarina; Jahari, Abas; Rojroongwasinkul, Nipa; Chittchang, Uraiporn; Mai, Le Bach; Dung, Le Hong; Lua, Tran Thi; Nowak, Verena; Elburg, Lucy; Melse-Boonstra, Alida; Brouwer, Inge

    2018-06-08

    Objectives: Food composition data are key for many nutrition-related activities in research, planning and policy. Combatting micronutrient malnutrition among women and young children using sustainable food-based approaches, as aimed at in the SMILING project, requires high quality food composition data. Methods: In order to develop capacity and to align procedures for establishing, updating and assessing the quality of key nutrient data in the food composition tables in Southeast Asia, a detailed roadmap was developed. This included a training workshop to build capacity in the field of food composition data, alignment of procedures for selecting the foods and nutrients to be included for quality assessment, and updates of country-specific food composition tables. The SEA partners in the SMILING project finalised a country-specific food composition table (FCT) with updated compositional data on selected foods and nutrients considered key for designing nutrient-dense and optimal diets for the target groups. Results: Between 140 and 175 foods were selected for inclusion in the country-specific FCTs. Key nutrients were: energy, protein, total fat, carbohydrates, iron, zinc, (pro-)vitamin A, folate, calcium, vitamin D, vitamin B1, vitamin B2, vitamin B3, vitamin B6, vitamin B12 and vitamin C. A detailed quality assessment of 13 key foods per nutrient was performed using international guidelines. Nutrient data for specific local food items were often unavailable, and data on folate, vitamin B12 and vitamin B6 contents were mostly missing. For many foods, documentation was not available, complicating an in-depth quality assessment. Despite these limitations, the SMILING project offered a unique opportunity to increase awareness of the importance of high quality, well documented food composition data. Conclusion for Practice: The self-reported data quality demonstrated that there is considerable room for improvement of nutrient data quality in some countries. In addition, investment in sustainable capacity development is required, and there is an urgent need to produce and document high quality data on the micronutrient composition of especially local foods.

  1. Implementing a Data Quality Strategy to Simplify Access to Data

    NASA Astrophysics Data System (ADS)

    Druken, K. A.; Trenham, C. E.; Evans, B. J. K.; Richards, C. J.; Wang, J.; Wyborn, L. A.

    2016-12-01

    To ensure seamless programmatic access for data analysis (including machine learning), standardization of both data and services is vital. At the Australian National Computational Infrastructure (NCI) we have developed a Data Quality Strategy (DQS) that currently provides processes for: (1) the consistency of data structures in the underlying High Performance Data (HPD) platform; (2) quality control through compliance with recognized community standards; and (3) data quality assurance through demonstrated functionality across common platforms, tools and services. NCI hosts one of Australia's largest repositories (10+ PBytes) of research data collections, spanning datasets from climate, coasts, oceans and geophysics through to astronomy, bioinformatics and the social sciences. A key challenge is the application of community-agreed data standards to the broad set of Earth systems and environmental data that are being used. Within these disciplines, data span a wide range of gridded, ungridded (i.e., line surveys, point clouds), and raster image types, as well as diverse coordinate reference projections and resolutions. By implementing our DQS we have seen progressive improvement in the quality of the datasets across the different subject domains, and through this, the ease with which users can programmatically access the data, either in situ or via web services. As part of its quality control procedures, NCI has developed a compliance checker based upon existing domain standards. The DQS also includes extensive functionality testing, which includes readability by commonly used libraries (e.g., netCDF, HDF, GDAL); accessibility by data servers (e.g., THREDDS, Hyrax, GeoServer); validation against scientific analysis and programming platforms (e.g., Python, Matlab, QGIS); and visualization tools (e.g., ParaView, NASA Web World Wind). These tests ensure smooth interoperability between products and services as well as exposing unforeseen requirements and dependencies. The results provide an important component of quality control within the DQS as well as clarifying the requirement for any extensions to the relevant standards that help support the uptake of data by broader international communities.
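
    As a flavor of what an automated compliance check can look like, here is a minimal sketch (Python with the netCDF4 library; not NCI's actual checker) that flags missing global attributes and variables lacking units. The required-attribute list and file name are illustrative assumptions, not NCI's standard set.

        from netCDF4 import Dataset

        # Illustrative CF-style requirements; a real checker derives these
        # from the applicable community standard.
        REQUIRED_GLOBAL_ATTRS = ["Conventions", "title", "summary", "license"]

        def check_file(path):
            """Report missing global attributes and variables without units."""
            problems = []
            with Dataset(path) as ds:
                for attr in REQUIRED_GLOBAL_ATTRS:
                    if attr not in ds.ncattrs():
                        problems.append(f"missing global attribute: {attr}")
                for name, var in ds.variables.items():
                    if "units" not in var.ncattrs():
                        problems.append(f"variable {name!r} has no units attribute")
            return problems

        for issue in check_file("example_dataset.nc"):   # hypothetical file
            print(issue)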

  2. dBBQs: dataBase of Bacterial Quality scores.

    PubMed

    Wanchai, Visanu; Patumcharoenpol, Preecha; Nookaew, Intawat; Ussery, David

    2017-12-28

    It is well known that genome sequencing technologies are becoming significantly cheaper and faster. As a result, the exponential growth of sequencing data in public databases allows us to explore ever-growing large collections of genome sequences. However, it is less well known that the majority of available genome sequences in public databases are not complete, but rather drafts of varying quality. We have calculated quality scores for around 100,000 bacterial genomes from all major genome repositories and put them in a fast and easy-to-use database. Prokaryotic genomic data from all sources were collected and combined to make a non-redundant set of bacterial genomes. The genome quality score for each was calculated from four different measurements: assembly quality, number of rRNA genes, number of tRNA genes, and the occurrence of conserved functional domains. The dataBase of Bacterial Quality scores (dBBQs) was designed to store and retrieve quality scores. It offers fast searching and download features, and the results can be used for further analysis. In addition, search results are shown in an interactive JavaScript chart framework using DC.js. The analysis of quality scores across major public genome databases finds that around 68% of the genomes are of acceptable quality for many uses. dBBQs (available at http://arc-gem.uams.edu/dbbqs ) provides genome quality scores for all available prokaryotic genome sequences with a user-friendly web interface. These scores can be used as cut-offs to obtain a high-quality set of genomes for testing bioinformatics tools or improving analyses. Moreover, all data for the four measurements that were combined to make the quality score for each genome are available and can potentially be used for further analysis. dBBQs will be updated regularly and is free to use for non-commercial purposes.
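
    A minimal sketch of how four per-genome measurements could be combined into a single quality score follows; the equal weighting and [0, 1] pre-scaling are assumptions for illustration, since the abstract does not specify dBBQs' exact formula.

        def genome_quality_score(measures, weights=None):
            """Combine the four per-genome measurements described above
            (assembly quality, rRNA count, tRNA count, conserved-domain
            occurrence), each pre-scaled to [0, 1], into one score.
            Equal weights are an illustrative assumption."""
            keys = ["assembly", "rrna", "trna", "domains"]
            if weights is None:
                weights = {k: 0.25 for k in keys}
            return sum(weights[k] * measures[k] for k in keys)

        # Hypothetical draft genome: good rRNA/tRNA annotation, weak assembly
        print(genome_quality_score(
            {"assembly": 0.4, "rrna": 0.9, "trna": 0.95, "domains": 0.8}))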

  3. Comparison of High and Low Density Airborne LIDAR Data for Forest Road Quality Assessment

    NASA Astrophysics Data System (ADS)

    Kiss, K.; Malinen, J.; Tokola, T.

    2016-06-01

    Good quality forest roads are important for forest management. Airborne laser scanning data can help automate road quality detection, thus avoiding field visits. Two datasets of different pulse densities were used to assess road quality: high-density airborne laser scanning data from Kiihtelysvaara and low-density data from Tuusniemi, Finland. The field inventory mainly focused on surface wear condition, structural condition, flatness, roadside vegetation and drying of the road. Observations were divided into poor, satisfactory and good categories based on the current Finnish quality standards used for forest roads. Digital elevation models were derived from the laser point cloud, and indices were calculated to determine road quality. The calculated indices assessed the topographic differences on the road surface and road sides. The topographic position index works well in flat terrain only, while the standardized elevation index describes the road surface better when the differences are larger. Both indices require at least a 1 metre resolution. High-density data are necessary for analysis of the road surface, and the indices relate mostly to surface wear and flatness. Classification on high-density data was more precise (31-92%) than on low-density data (25-40%). However, ditch detection and classification can be carried out using the sparse dataset as well (with a success rate of 69%). The use of airborne laser scanning data can thus provide quality information on forest roads.
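
    For illustration, the topographic position index mentioned above can be computed as a cell's elevation minus the mean elevation of its neighbourhood. The sketch below (Python with NumPy/SciPy; not the study's code) flags a ditch in a synthetic 1 m DEM; the window size is an assumption.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def topographic_position_index(dem, size=11):
            """TPI: cell elevation minus the mean of its neighbourhood.
            Positive values mark local highs (e.g., a crowned road surface),
            negative values local lows (e.g., ditches)."""
            return dem - uniform_filter(dem, size=size, mode="nearest")

        # Hypothetical 1 m resolution DEM strip with a ditch along one column
        dem = np.zeros((50, 50))
        dem[:, 20] = -0.5                       # 0.5 m deep ditch
        tpi = topographic_position_index(dem, size=11)
        print(tpi[25, 20])                      # clearly negative at the ditch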

  4. The accurate assessment of small-angle X-ray scattering data

    DOE PAGES

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; ...

    2015-01-23

    Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  5. Exploring consumer understanding and preferences for pharmacy quality information

    PubMed Central

    Shiyanbola, Olayinka O.; Mort, Jane R.

    2014-01-01

    Objective: To describe consumer understanding of pharmacy quality measures and consumer preferences for pharmacy quality information. Methods: Semi-structured focus group design was combined with survey methods. Adults who filled prescription medications for self-reported chronic illnesses at community pharmacies discussed their understanding of Pharmacy Quality Alliance approved quality measures. Questions examined preferences among pharmacy quality rating systems (e.g., stars versus percentages) and desired data displays/formats. During the focus group, participants completed a survey examining their understanding of each pharmacy quality measure. All focus group discussions were transcribed verbatim. Data were analyzed using thematic analysis and descriptive statistics. Results: Thirty-four individuals participated (mean age = 62.85; SD = 16.05). Participants were unfamiliar with quality measures information, and their level of understanding differed for each quality measure. Surveys indicated that participants understood “Drug-Drug Interactions” and “Helping Patients Get Needed Medications” (94.1%) better than other measures (e.g., 76.5% understood “Suboptimal Treatment of Hypertension in Patients with Diabetes”). Qualitative analysis indicated participants preferred an overall pharmacy rating for quick access and use. However, participants also wanted quality measures information displayed by health conditions. Participants favored comparison of their pharmacy to city data instead of state data. Most participants liked star ratings better than percentages, letter grades, or numerical ratings. Conclusions: Individuals who have a chronic illness and regularly use community pharmacies are interested in pharmacy quality measures. However, specific quality measures were not understood by some participants. Participants had specific preferences for the display of pharmacy quality information, which will be helpful in the design of appropriate quality report systems. PMID:25580169

  6. Quality of HIV Testing Data Before and After the Implementation of a National Data Quality Assessment and Feedback System.

    PubMed

    Beltrami, John; Wang, Guoshen; Usman, Hussain R; Lin, Lillian S

    In 2010, the Centers for Disease Control and Prevention (CDC) implemented a national data quality assessment and feedback system for CDC-funded HIV testing program data. Our objective was to analyze data quality before and after feedback. Coinciding with required quarterly data submissions to CDC, each health department received data quality feedback reports and a call with CDC to discuss the reports. Data from 2008 to 2011 were analyzed. The setting comprised fifty-nine state and local health departments funded for comprehensive HIV prevention services; the data were collected by service providers in conjunction with clients receiving HIV testing, and the intervention was the national data quality assessment and feedback system. Before and after intervention implementation, quality was assessed through the number of new test records reported and the percentage of data values that were neither missing nor invalid. Generalized estimating equations were used to assess the effect of feedback in improving the completeness of variables. Data were included from 44 health departments. The average number of new records per submission period increased from 197 907 before feedback implementation to 497 753 afterward. Completeness was high before and after feedback for race/ethnicity (99.3% vs 99.3%), current test results (99.1% vs 99.7%), prior testing and results (97.4% vs 97.7%), and receipt of results (91.4% vs 91.2%). Completeness improved for HIV risk (83.6% vs 89.5%), linkage to HIV care (56.0% vs 64.0%), referral to HIV partner services (58.9% vs 62.8%), and referral to HIV prevention services (55.3% vs 63.9%). Calls as part of feedback were associated with improved completeness for HIV risk (adjusted odds ratio [AOR] = 2.28; 95% confidence interval [CI], 1.75-2.96), linkage to HIV care (AOR = 1.60; 95% CI, 1.31-1.96), referral to HIV partner services (AOR = 1.73; 95% CI, 1.43-2.09), and referral to HIV prevention services (AOR = 1.74; 95% CI, 1.43-2.10). Feedback contributed to increased data quality. CDC and health departments should continue monitoring the data and implement measures to improve variables of low completeness.

  7. [Strategies and development of quality assurance and control in the ELSA-Brasil].

    PubMed

    Schmidt, Maria Inês; Griep, Rosane Härter; Passos, Valéria Maria; Luft, Vivian Cristine; Goulart, Alessandra Carvalho; Menezes, Greice Maria de Souza; Molina, Maria del Carmen Bisi; Vigo, Alvaro; Nunes, Maria Angélica

    2013-06-01

    The ELSA-Brasil (Estudo Longitudinal de Saúde do Adulto - Brazilian Longitudinal Study for Adult Health) is a cohort study composed of 15,105 adults followed up in order to assess the development of chronic diseases, especially diabetes and cardiovascular disease. Its size, multicenter nature and the diversity of measurements required effective and efficient mechanisms of quality assurance and control. The main quality assurance activities (those developed before data collection) were: careful selection of research instruments, centralized training and certification, pretesting and pilot studies, and preparation of operation manuals for the procedures. Quality control activities (developed during data collection and processing) were performed more intensively at the beginning, when routines had not been established yet. The main quality control activities were: periodic observation of technicians, test-retest studies, data monitoring, network of supervisors, and cross visits. Data that estimate the reliability of the obtained information attest that the quality goals have been achieved.

  8. Open and endovascular aneurysm repair in the Society for Vascular Surgery Vascular Quality Initiative.

    PubMed

    Spangler, Emily L; Beck, Adam W

    2017-12-01

    The Society for Vascular Surgery Vascular Quality Initiative is a patient safety organization and a collection of procedure-based registries that can be utilized for quality improvement initiatives and clinical outcomes research. In the Vascular Quality Initiative, centers participate voluntarily, collecting data prospectively on all consecutive cases within the specific registries in which physicians and centers elect to participate. The data capture extends from preoperative demographics and risk factors (including indications for operation), through the perioperative period, to outcomes data at up to 1 year of follow-up. Additionally, longer-term follow-up can be achieved by matching with Medicare claims data, providing long-term longitudinal follow-up for a majority of patients within the Vascular Quality Initiative registries. We present the unique characteristics of the Vascular Quality Initiative registries and highlight important insights gained specific to open and endovascular abdominal aortic aneurysm repair. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Water-quality data from a landfill-leachate treatment and disposal site, Pinellas County, Florida, January 1979-August 1980

    USGS Publications Warehouse

    Barr, G.L.; Fernandez, Mario

    1981-01-01

    Water-quality data collected between January 1979 and August 1980 at the landfill leachate treatment site in Pinellas County, Fla., are presented. Data include field and laboratory measurements of physical properties, major chemical constituents, nitrogen and phosphorus species, chemical oxygen demand, trace metals, coliform bacteria, taxonomy of macroinvertebrates and phytoplankton, and chlorophyll analyses. Data were collected as part of a study to determine water-quality changes resulting from aeration and ponding of leachate pumped from landfill burial trenches and for use in determining the rate of movement and quality changes as the leachate migrates through the surficial aquifer. Samples were collected from 81 surficial-aquifer water-quality monitoring wells constructed in January 1975, February 1979, and March 1979, and 8 surface-water quality monitoring sites established in January 1975, February 1978, and November 1978. (USGS)

  10. Aggregate R-R-V Analysis

    EPA Pesticide Factsheets

    The Excel file contains time series data of flow rates and concentrations of alachlor, atrazine, ammonia, total phosphorus, and total suspended solids observed in two watersheds in Indiana from 2002 to 2007. The aggregate time series data representative of all these parameters was obtained using a specialized, data-driven technique. The aggregate data is hypothesized in the published paper to represent the overall health of both watersheds with respect to various potential water quality impairments. The time series data for each of the individual water quality parameters were used to compute corresponding risk measures, reliability, resilience, and vulnerability (Rel, Res, and Vul), that are reported in Tables 4 and 5. The aggregation of the risk measures, computed from the aggregate time series and the water quality standards in Table 1, is also reported in Tables 4 and 5 of the published paper. Values under the column heading 'uncertainty' report uncertainties associated with reconstruction of missing records of the water quality parameters. Long-term records of the water quality parameters were reconstructed in order to estimate the R-R-V and corresponding aggregate risk measures. This dataset is associated with the following publication: Hoque, Y., S. Tripathi, M. Hantush, and R. Govindaraju. Aggregate Measures of Watershed Health from Reconstructed Water Quality Data with Uncertainty. Ed. Gregorich. JOURNAL OF ENVIRONMENTAL QUALITY. American Society of Agronomy, MADISON, WI,
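
    For readers unfamiliar with R-R-V measures, the sketch below computes Hashimoto-style reliability, resilience, and vulnerability for a concentration series against a not-to-exceed standard (Python/NumPy). It is a generic illustration under those standard definitions; the published paper's exact formulation and data are not reproduced here, and the example series is hypothetical.

        import numpy as np

        def rrv(series, threshold):
            """Reliability-resilience-vulnerability of a water quality series
            against a not-to-exceed standard (Hashimoto-style definitions)."""
            x = np.asarray(series, dtype=float)
            fail = x > threshold                          # unsatisfactory states
            reliability = 1.0 - fail.mean()               # fraction of time OK
            # Resilience: probability a failure is followed by a recovery
            recoveries = np.sum(fail[:-1] & ~fail[1:])
            resilience = recoveries / fail[:-1].sum() if fail[:-1].sum() else 1.0
            # Vulnerability: mean exceedance magnitude during failures
            vulnerability = (x[fail] - threshold).mean() if fail.any() else 0.0
            return reliability, resilience, vulnerability

        # Hypothetical atrazine series (ug/L) against a 3 ug/L standard
        series = [1.2, 2.8, 3.5, 4.1, 2.9, 3.2, 2.0, 1.8]
        print(rrv(series, threshold=3.0))   # (0.625, 0.667, 0.6)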

  11. Data from selected U.S. Geological Survey national stream water-quality monitoring networks (WQN) on CD-ROM

    USGS Publications Warehouse

    Alexander, R.B.; Ludtke, A.S.; Fitzgerald, K.K.; Schertz, T.L.

    1996-01-01

    Data from two U.S. Geological Survey (USGS) national stream water-quality monitoring networks, the National Stream Quality Accounting Network (NASQAN) and the Hydrologic Benchmark Network (HBN), are now available in a two CD-ROM set. These data on CD-ROM are collectively referred to as WQN, water-quality networks. Data from these networks have been used at the national, regional, and local levels to estimate the rates of chemical flux from watersheds, quantify changes in stream water quality for periods during the past 30 years, and investigate relations between water quality and streamflow as well as the relations of water quality to pollution sources and various physical characteristics of watersheds. The networks include 679 monitoring stations in watersheds that represent diverse climatic, physiographic, and cultural characteristics. The HBN includes 63 stations in relatively small, minimally disturbed basins ranging in size from 2 to 2,000 square miles with a median drainage basin size of 57 square miles. NASQAN includes 618 stations in larger, more culturally-influenced drainage basins ranging in size from one square mile to 1.2 million square miles with a median drainage basin size of about 4,000 square miles. The CD-ROMs contain data for 63 physical, chemical, and biological properties of water (122 total constituents, including analyses of dissolved and suspended-sediment samples) collected during more than 60,000 site visits. These data approximately span the periods 1962-95 for HBN and 1973-95 for NASQAN. The data reflect sampling over a wide range of streamflow conditions and the use of relatively consistent sampling and analytical methods. The CD-ROMs provide ancillary information and data-retrieval tools to allow the national network data to be properly and efficiently used. Ancillary information includes the following: descriptions of the network objectives and history, characteristics of the network stations and water-quality data, historical records of important changes in network sample collection and laboratory analytical methods, water reference sample data for estimating laboratory measurement bias and variability for 34 dissolved constituents for the period 1985-95, discussions of statistical methods for using water reference sample data to evaluate the accuracy of network stream water-quality data, and a bibliography of scientific investigations using national network data and other publications relevant to the networks. The data structure of the CD-ROMs is designed to allow users to efficiently enter the water-quality data into user-supplied software packages, including statistical analysis, modeling, or geographic information systems. On one disc, all data are stored in ASCII form accessible from any computer system with a CD-ROM drive. The data also can be accessed using DOS-based retrieval software supplied on a second disc. This software supports logical queries of the water-quality data based on constituent concentrations, sample-collection date, river name, station name, county, state, hydrologic unit number, and 1990 population and 1987 land-cover characteristics for station watersheds. User-selected data may be output in a variety of formats, including dBASE, flat ASCII, delimited ASCII, or fixed-field, for subsequent use in other software packages.

  12. Quantifying the foodscape: A systematic review and meta-analysis of the validity of commercially available business data.

    PubMed

    Lebel, Alexandre; Daepp, Madeleine I G; Block, Jason P; Walker, Renée; Lalonde, Benoît; Kestens, Yan; Subramanian, S V

    2017-01-01

    This paper reviews studies of the validity of commercially available business (CAB) data on food establishments ("the foodscape"), offering a meta-analysis of characteristics associated with CAB quality and a case study evaluating the performance of commonly-used validity indicators describing the foodscape. Existing validation studies report a broad range in CAB data quality, although most studies conclude that CAB quality is "moderate" to "substantial". We conclude that current studies may underestimate the quality of CAB data. We recommend that future validation studies use density-adjusted and exposure measures to offer a more meaningful characterization of the relationship of data error with spatial exposure.

  14. Social image quality

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Kheiri, Ahmed

    2011-01-01

    Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and are dependent on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed where the observers are Internet users. A website with a simple user interface that enables Internet users from anywhere at any time to vote for the better quality version of a pair of the same image has been constructed. Users' votes are recorded and used to rank the images according to their perceived visual qualities. We have developed three rank aggregation algorithms to process the recorded pair comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses Dykstra's extension of the Bradley-Terry method. The website has been collecting data for about three months and has accumulated over 10,000 votes at the time of writing this paper. Results show that the Internet and its allied technologies such as crowdsourcing offer a promising new paradigm for image and video quality assessment, where hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made Internet user generated social image quality (SIQ) data of a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground truth data. The website continues to collect votes and will include more public image databases; it will also be extended to include videos to collect social video quality (SVQ) data. All data will be publicly available on the website in due course.
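
    To illustrate the rank-aggregation step, the sketch below fits plain Bradley-Terry strengths to a pairwise win-count matrix using the standard minorization-maximization update (Python/NumPy). Note that this is the basic model, not Dykstra's extension used in the paper, and the vote counts are hypothetical.

        import numpy as np

        def bradley_terry(wins, iters=200):
            """Fit Bradley-Terry strengths from a win-count matrix where
            wins[i, j] is how often image i was preferred over image j."""
            n = wins.shape[0]
            p = np.ones(n)
            games = wins + wins.T                 # comparisons per pair
            for _ in range(iters):
                W = wins.sum(axis=1)              # total wins of each item
                denom = (games / (p[:, None] + p[None, :] + 1e-12)).sum(axis=1)
                p = W / np.maximum(denom, 1e-12)  # MM update
                p /= p.sum()                      # fix the overall scale
            return p

        # Hypothetical votes among three versions of an image
        wins = np.array([[0, 8, 9],
                         [2, 0, 7],
                         [1, 3, 0]], dtype=float)
        strengths = bradley_terry(wins)
        print(np.argsort(strengths)[::-1])        # ranking, best first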

  15. Comprehensive Monitoring Program: Air Quality Data Assessment Report for FY90. Volume 2. Version 3.1

    DTIC Science & Technology

    1991-09-01

    Comprehensive Monitoring Program, Final Air Quality Data Assessment Report for FY90, Version 3.1, Volume II (Contract Number DAAA15-87-0095). Approved for public release; distribution is unlimited. The objective of this CMP is to verify and evaluate potential air quality health…

  16. A Summary of OMI NO2 Data for Air Quality Applications

    NASA Technical Reports Server (NTRS)

    Duncan, Bryan N.; Lamsal, Lok N.; Yoshida, Yasuko; Thompson, Anne M.

    2016-01-01

    As a member of NASA's Air Quality Applied Sciences Team (AQAST), I will update air quality managers on the status of various NASA satellite datasets that are relevant for air quality applications. I will also present a new website that contains NASA Aura OMI nitrogen dioxide data and shows US city trends and comparisons to EPA surface monitor data. Since this is the final AQAST meeting, I will summarize my contributions to AQAST over the last five years.

  17. Argo workstation: a key component of operational oceanography

    NASA Astrophysics Data System (ADS)

    Dong, Mingmei; Xu, Shanshan; Miao, Qingsheng; Yue, Xinyang; Lu, Jiawei; Yang, Yang

    2018-02-01

    Operational oceanography requires quantity, quality, and availability of data sets, as well as timeliness and effectiveness of data products. Without a steady and strong operational system supporting it, operational oceanography cannot proceed far. In this paper we describe an integrated platform named the Argo workstation. It operates as a data processing and management system capable of data collection, automatic data quality control, visual data checking, statistical data search, and data service. Once set up, the Argo workstation provides global high-quality Argo data to users every day, in a timely and effective manner. It has not only played a key role in operational oceanography but also set an example for other operational systems.
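
    As one concrete example of the automatic quality control such a platform performs, the sketch below applies a global range test to a temperature profile and assigns Argo-style flags (1 = good, 4 = bad). The limits and profile values shown are illustrative assumptions, not the platform's configured settings.

        def global_range_test(temps_c, lo=-2.5, hi=40.0):
            """Flag each temperature observation: 1 = good, 4 = bad."""
            return [1 if lo <= t <= hi else 4 for t in temps_c]

        profile = [28.4, 26.1, 11.3, 4.2, 3.9, 57.0]    # hypothetical profile, degC
        print(global_range_test(profile))                # [1, 1, 1, 1, 1, 4]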

  18. 78 FR 15023 - Office of Health Assessment and Translation Webinar on the Assessment of Data Quality in Animal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-08

    ... and Translation Webinar on the Assessment of Data Quality in Animal Studies; Notice of Public Webinar...- based meeting on the assessment of data quality in animal studies. The Office of Health Assessment and... meetings with a focus on methodological issues related to OHAT implementing systematic review. The first...

  19. Alternative Fuels Data Center: Natural Gas Street Sweepers Improve Air

    Science.gov Websites

    Quality in New York.

  20. Quantity is nothing without quality: automated QA/QC for streaming sensor networks

    Treesearch

    John L. Campbell; Lindsey E. Rustad; John H. Porter; Jeffrey R. Taylor; Ethan W. Dereszynski; James B. Shanley; Corinna Gries; Donald L. Henshaw; Mary E. Martin; Wade. M. Sheldon; Emery R. Boose

    2013-01-01

    Sensor networks are revolutionizing environmental monitoring by producing massive quantities of data that are being made publically available in near real time. These data streams pose a challenge for ecologists because traditional approaches to quality assurance and quality control are no longer practical when confronted with the size of these data sets and the...
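
    A minimal sketch of the kind of automated check such streaming QA/QC systems apply follows: a single-point spike test that flags a value differing sharply from both of its neighbors (Python; the threshold is a sensor-specific assumption, not a value from the paper).

        def spike_flags(values, max_step=5.0):
            """Flag points that differ from both neighbors by more than
            max_step, a simple single-point spike test."""
            flags = [False] * len(values)
            for i in range(1, len(values) - 1):
                if (abs(values[i] - values[i - 1]) > max_step and
                        abs(values[i] - values[i + 1]) > max_step):
                    flags[i] = True
            return flags

        stream = [10.1, 10.3, 25.9, 10.2, 10.4]   # hypothetical stream, one spike
        print(spike_flags(stream))                 # [False, False, True, False, False]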

  1. 77 FR 3417 - Approval and Promulgation of Air Quality Implementation Plans; Massachusetts; Determination of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-24

    ... recorded in EPA's Air Quality System (AQS) database. To account for missing data, the procedures found in... three-year period and then adjusts for missing data. In short, if the three-year average expected... ambient air quality monitoring data for the 2001-2003 monitoring period showing that the area had an...

  2. 76 FR 57845 - Approval of Air Quality Implementation Plans; California; San Joaquin Valley; Attainment Plan for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-16

    ... evaluate monitored air quality data and is used to determine whether an area's air quality meets a NAAQS... documentation in their submittals explaining how the emissions data were calculated. 70 FR at 71664 and EI... inventories for the SJV. See Table 1 below. These revised inventories incorporate improved activity data and...

  3. Census data quality--a user's view.

    PubMed

    Hawkes, W J

    1986-01-01

    "This paper presents the perspective of a major user of both decennial and economic [U.S.] census data. It illustrates how these data are used as a framework for commercial marketing research surveys that measure television audiences and sales of consumer goods through retail stores, drawing on Nielsen's own experience in data collection and evaluation. It reviews Nielsen's analyses of census data quality based, in part, on actual field evaluation of census results. Finally, it suggests ways that data quality might be evaluated and improved to enhance the usefulness of these census programs." excerpt

  4. Square2 - A Web Application for Data Monitoring in Epidemiological and Clinical Studies

    PubMed

    Schmidt, Carsten Oliver; Krabbe, Christine; Schössow, Janka; Albers, Martin; Radke, Dörte; Henke, Jörg

    2017-01-01

    Valid scientific inferences from epidemiological and clinical studies require high data quality. Data generating departments therefore aim to detect data irregularities as early as possible in order to guide quality management processes. In addition, after the completion of data collections, the obtained data quality must be evaluated. This can be challenging in complex studies due to a wide scope of examinations, numerous study variables, multiple examiners, devices, and examination centers. This paper describes a Java EE web application, named Square2, used to monitor and evaluate data quality in institutions with complex and multiple studies. It uses the Java libraries Apache MyFaces 2, extended by BootsFaces for layout and style. RServe and REngine manage calls to R server processes. All study data and metadata are stored in PostgreSQL. R is the statistics backend, and LaTeX is used for the generation of print-ready PDF reports. A GUI manages the entire workflow. Square2 covers all steps in the data monitoring workflow, including the setup of studies and their structure, the handling of metadata for data monitoring purposes, selection of variables, upload of data, statistical analyses, and the generation as well as inspection of quality reports. To take into account data protection issues, Square2 comprises an extensive user rights and roles concept.

  5. Summary of selected U.S. Geological survey data on domestic well water quality for the Centers for Disease Control's National Environmental Public Health Tracking Program

    USGS Publications Warehouse

    Bartholomay, Roy C.; Carter, Janet M.; Qi, Sharon L.; Squillace, Paul J.; Rowe, Gary L.

    2007-01-01

    About 10 to 30 percent of the population in most States uses a domestic (private) water supply. In many States, the total number of people served by domestic supplies can be in the millions. The water quality of domestic supplies is inconsistently regulated and generally not well characterized. The U.S. Geological Survey (USGS) has two water-quality data sets in the National Water Information System (NWIS) database that can be used to help define the water quality of domestic-water supplies: (1) data from the National Water-Quality Assessment (NAWQA) Program, and (2) USGS State data. Data from domestic wells from the NAWQA Program were collected to meet one of the Program's objectives, which was to define the water quality of major aquifers in the United States. These domestic wells were located primarily in rural areas. Water-quality conditions in these major aquifers as defined by the NAWQA data can be compared because of the consistency of the NAWQA sampling design, sampling protocols, and water-quality analyses. The NWIS database is a repository of USGS water data collected for a variety of projects; consequently, project objectives and analytical methods vary. This variability can bias statistical summaries of contaminant occurrence and concentrations; nevertheless, these data can be used to define the geographic distribution of contaminants. Maps created using NAWQA and USGS State data in NWIS can show geographic areas where contaminant concentrations may be of potential human-health concern by showing concentrations relative to human-health water-quality benchmarks. On the basis of national summaries of detection frequencies and concentrations relative to U.S. Environmental Protection Agency (USEPA) human-health benchmarks for trace elements, pesticides, and volatile organic compounds, 28 water-quality constituents were identified as contaminants of potential human-health concern. From this list, 11 contaminants were selected for summarization of water-quality data in 16 States (grantee States) that were funded by the Environmental Public Health Tracking (EPHT) Program of the Centers for Disease Control and Prevention (CDC). Only data from domestic-water supplies were used in this summary because samples from these wells are most relevant to human exposure for the targeted population. Using NAWQA data, the concentrations of the 11 contaminants were compared to USEPA human-health benchmarks. Using NAWQA and USGS State data in NWIS, the geographic distribution of the contaminants was mapped for the 16 grantee States. Radon, arsenic, manganese, nitrate, strontium, and uranium had the largest percentages of samples with concentrations greater than their human-health benchmarks. In contrast, organic compounds (pesticides and volatile organic compounds) had the lowest percentages of samples with concentrations greater than human-health benchmarks. Results of data retrievals and spatial analysis were compiled and presented in summaries for each of the 16 States. Example summary tables, graphs, and maps based on USGS data for New Jersey are presented to illustrate how USGS water-quality and associated ancillary geospatial data can be used by the CDC to address goals and objectives of the EPHT Program.

  6. Preparing Nursing Home Data from Multiple Sites for Clinical Research – A Case Study Using Observational Health Data Sciences and Informatics

    PubMed Central

    Boyce, Richard D.; Handler, Steven M.; Karp, Jordan F.; Perera, Subashan; Reynolds, Charles F.

    2016-01-01

    Introduction: A potential barrier to nursing home research is the limited availability of research quality data in electronic form. We describe a case study of converting electronic health data from five skilled nursing facilities (SNFs) to a research quality longitudinal dataset by means of open-source tools produced by the Observational Health Data Sciences and Informatics (OHDSI) collaborative. Methods: The Long-Term Care Minimum Data Set (MDS), drug dispensing, and fall incident data from the five SNFs were extracted, translated, and loaded into version 4 of the OHDSI common data model. Quality assurance involved identifying errors using the Achilles data characterization tool and comparing both quality measures and drug exposures in the new database for concordance with externally available sources. Findings: Records for a total of 4,519 patients (95.1%) made it into the final database. Achilles identified 10 different types of errors that were addressed in the final dataset. Drug exposures based on dispensing were generally accurate when compared with medication administration data from the pharmacy services provider. Quality measures were generally concordant between the new database and Nursing Home Compare for measures with a prevalence ≥ 10%. Fall data recorded in the MDS were found to be more complete than data from fall incident reports. Conclusions: The new dataset is ready to support observational research on topics of clinical importance in the nursing home, including patient-level prediction of falls. The extraction, translation, and loading process enabled the use of OHDSI data characterization tools that improved the quality of the final dataset. PMID:27891528

  7. Precipitation data in a mountainous catchment in Honduras: quality assessment and spatiotemporal characteristics

    NASA Astrophysics Data System (ADS)

    Westerberg, I.; Walther, A.; Guerrero, J.-L.; Coello, Z.; Halldin, S.; Xu, C.-Y.; Chen, D.; Lundin, L.-C.

    2010-08-01

    An accurate description of temporal and spatial precipitation variability in Central America is important for local farming, water supply and flood management. Data quality problems and a lack of consistent precipitation data impede hydrometeorological analysis in the 7,500 km2 Choluteca River basin in central Honduras, which encompasses the capital Tegucigalpa. We used precipitation data from 60 daily and 13 monthly stations for 1913-2006 from five local authorities and NOAA's Global Historical Climatology Network. Quality control routines were developed to tackle the specific data quality problems. The quality-controlled data were characterised spatially and temporally, and compared with regional and larger-scale studies. Two gap-filling methods for daily data and three interpolation methods for monthly and mean annual precipitation were compared. The coefficient-of-correlation-weighting method provided the best results for gap-filling, and the universal kriging method for spatial interpolation. Inhomogeneity in the time series was the main quality problem, and 22% of the daily precipitation data were too poor to be used. Spatial autocorrelation for monthly precipitation was low during the dry season, and correlation increased markedly when data were temporally aggregated from a daily time scale to 4-5 days. The analysis demonstrated the high spatial and temporal variability caused by the diverse precipitation-generating mechanisms, and the need for an improved monitoring network.
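
    The coefficient-of-correlation-weighting idea for gap-filling can be sketched as a correlation-weighted mean of simultaneous neighbor observations, as below (Python/NumPy). Station values and correlations are hypothetical, and the study's exact normalization may differ.

        import numpy as np

        def fill_gap(neighbors, corrs):
            """Estimate a missing daily value at a target station as the
            correlation-weighted mean of simultaneous neighbor observations."""
            pairs = [(v, r) for v, r in zip(neighbors, corrs) if not np.isnan(v)]
            if not pairs:
                return np.nan                    # leave the gap unfilled
            vals = np.array([v for v, _ in pairs])
            wts = np.array([r for _, r in pairs])
            if wts.sum() <= 0:
                return np.nan
            return float(np.sum(wts * vals) / wts.sum())

        # Hypothetical day: three neighbor stations, one itself missing
        neighbor_precip = [12.0, np.nan, 8.5]    # mm
        correlations = [0.82, 0.75, 0.64]        # with the target station
        print(fill_gap(neighbor_precip, correlations))   # ~10.5 mm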

  8. Quality of Primary Education Inputs in Urban Schools: Evidence from Nairobi

    ERIC Educational Resources Information Center

    Ngware, Moses W.; Oketch, Moses; Ezeh, Alex C.

    2011-01-01

    This article examines the quality of primary school inputs in urban settlements with a view to understand how it sheds light on benchmarks of education quality indicators in Kenya. Data from a school survey that involved 83 primary schools collected in 2005 were used. The data set contains information on school quality characteristics of various…

  9. Quality of Care for Myocardial Infarction in Rural and Urban Hospitals

    ERIC Educational Resources Information Center

    Baldwin, Laura-Mae; Chan, Leighton; Andrilla, C. Holly A.; Huff, Edwin D.; Hart, L. Gary

    2010-01-01

    Background: In the mid-1990s, significant gaps existed in the quality of acute myocardial infarction (AMI) care between rural and urban hospitals. Since then, overall AMI care quality has improved. This study uses more recent data to determine whether rural-urban AMI quality gaps have persisted. Methods: Using inpatient records data for 34,776…

  10. Sound data management as a foundation for natural resources management and science

    USGS Publications Warehouse

    Burley, Thomas E.

    2012-01-01

    Effective decision making is closely related to the quality and completeness of available data and information. Data management helps to ensure data quality in any discipline and supports decision making. Managing data as a long-term scientific asset helps to ensure that data will be usable beyond the original intended application. Emerging issues in water-resources management and climate variability require the ability to analyze change in the conditions of natural resources over time. The availability of quality, well-managed, and documented data from the past and present helps support this requirement.

  11. Maintaining High Quality Network Performance at the GSN: Sensor Installation Methods, New VBB Borehole Sensors and Data Quality Assessment from MUSTANG

    NASA Astrophysics Data System (ADS)

    Hafner, Katrin

    2017-04-01

    The goal of the Global Seismographic Network (GSN) is to provide the highest possible data quality and dynamic recording range in support of scientific needs. Considerable effort is made at each GSN seismic station site to achieve the lowest noise performance possible under local conditions. We continue to strive for higher data quality with a combination of new sensors and improved installation techniques. Most seismometers are installed either in 100 m deep steel-cased boreholes or in vaults tunneled underground. A few vaults are built at the surface or on the foundation of a building. All vault installations have a concrete pier, mechanically isolated from the floor, upon which the seismometers are placed. Many sites are now nearly 30 years old, and the GSN is investing in civil works at several stations to keep them in good condition or make critical repairs. Using GSN data from inception to the present, we will present analyses that demonstrate how successful these sensor installation strategies have been and describe ongoing experiments at GSN testing facilities to evaluate the best, most cost effective strategy to modernize existing GSN facilities. To improve sensor performance at some vault sites, we will employ new sensor installation strategies. Years of experience operating the GSN and the USArray Transportable Array, along with focused testing of emplacement strategies, show that the vulnerability of a sensor's horizontal components to tilt can be mitigated if the sensor package is buried at even shallow depth. At selected vault installations, shallow boreholes will be drilled to accommodate recently developed borehole VBB sensor models. The incremental cost of modern VBB instruments over standard BB models is small, and we expect to be able to preserve the GSN's crucial very broad bandwidth while improving noise performance and reliability using this strategy. A crucial link in making GSN station data available to the scientific community is the IRIS Data Management Center, which not only maintains the data archive, but also provides easy, rapid, and open access to data recorded from seconds to decades ago. All data flow to the IRIS DMC through the UCSD or ASL Data Collection Centers (DCCs). The DCCs focus on delivering data to the DMC, maintaining correct metadata for GSN stations, reviewing data quality from the stations that ASL and UCSD operate, and addressing circumstances that require special data handling, such as back filling following telemetry outages. Key to the high quality of the GSN data is the direct feedback on data quality problems identified by the DCC analysts to the network operations staff and field engineers. Aging of GSN equipment and station infrastructure has resulted in renewed emphasis on using data quality control tools such as MUSTANG. These tools allow the network operators to routinely monitor and analyze waveform data to detect and track problems and develop short and longer term action plans for improving network data quality. We will present summary data quality metrics for the GSN as obtained via these quality assurance tools.

  12. Aerial photography flight quality assessment with GPS/INS and DEM data

    NASA Astrophysics Data System (ADS)

    Zhao, Haitao; Zhang, Bing; Shang, Jiali; Liu, Jiangui; Li, Dong; Chen, Yanyan; Zuo, Zhengli; Chen, Zhengchao

    2018-01-01

    The flight altitude, ground coverage, photo overlap, and other acquisition specifications of an aerial photography flight mission directly affect the quality and accuracy of the subsequent mapping tasks. To ensure smooth post-flight data processing and fulfill the pre-defined mapping accuracy, flight quality assessments should be carried out in time. This paper presents a novel and rigorous approach for flight quality evaluation of frame cameras with GPS/INS data and DEM, using geometric calculation rather than image analysis as in the conventional methods. This new approach is based mainly on the collinearity equations, in which the accuracy of a set of flight quality indicators is derived through a rigorous error propagation model and validated with scenario data. Theoretical analysis and practical flight test of an aerial photography mission using an UltraCamXp camera showed that the calculated photo overlap is accurate enough for flight quality assessment of 5 cm ground sample distance image, using the SRTMGL3 DEM and the POSAV510 GPS/INS data. An even better overlap accuracy could be achieved for coarser-resolution aerial photography. With this new approach, the flight quality evaluation can be conducted on site right after landing, providing accurate and timely information for decision making.
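
    For reference, the collinearity equations underlying the geometric calculation relate an image point (x, y) to ground coordinates (X, Y, Z) through the exposure-station position (X_s, Y_s, Z_s), the principal point (x_0, y_0), the focal length f, and the elements m_ij of the rotation matrix built from the GPS/INS attitude angles. In one common photogrammetric convention (index conventions vary between texts):

        x = x_0 - f \, \frac{m_{11}(X - X_s) + m_{12}(Y - Y_s) + m_{13}(Z - Z_s)}
                            {m_{31}(X - X_s) + m_{32}(Y - Y_s) + m_{33}(Z - Z_s)}

        y = y_0 - f \, \frac{m_{21}(X - X_s) + m_{22}(Y - Y_s) + m_{23}(Z - Z_s)}
                            {m_{31}(X - X_s) + m_{32}(Y - Y_s) + m_{33}(Z - Z_s)}

    Projecting each image corner through these equations onto the DEM, and intersecting the resulting ground footprints of successive frames, yields the photo overlap and coverage indicators without any image analysis.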

  13. Systematic review of studies of staffing and quality in nursing homes.

    PubMed

    Bostick, Jane E; Rantz, Marilyn J; Flesner, Marcia K; Riggs, C Jo

    2006-07-01

    To evaluate a range of staffing measures and data sources for long-term use in public reporting of staffing as a quality measure in nursing homes. Eighty-seven research articles and government documents published from 1975 to 2003 were reviewed and summarized. Relevant content was extracted and organized around 3 themes: staffing measures, quality measures, and risk adjustment variables. Data sources for staffing information were also identified. There is a proven association between higher total staffing levels (especially licensed staff) and improved quality of care. Studies also indicate a significant relationship between high turnover and poor resident outcomes. Functional ability, pressure ulcers, and weight loss are the most sensitive quality indicators linked to staffing. The best national data sources for staffing and quality include the Minimum Data Set (MDS) and On-line Survey and Certification Automated Records (OSCAR). However, the accuracy of this self-reported information requires further reliability and validity testing. A nationwide instrument needs to be developed to accurately measure staff turnover. Large-scale studies using payroll data to measure staff retention and its impact on resident outcomes are recommended. Future research should use the most nurse-sensitive quality indicators such as pressure ulcers, functional status, and weight loss.

  14. Providing Data Quality Information for Remote Sensing Applications

    NASA Astrophysics Data System (ADS)

    Albrecht, F.; Blaschke, T.; Lang, S.; Abdulmutalib, H. M.; Szabó, G.; Barsi, Á.; Batini, C.; Bartsch, A.; Kugler, Zs.; Tiede, D.; Huang, G.

    2018-04-01

    The availability and accessibility of remote sensing (RS) data, cloud processing platforms, and the information products and services built on them have increased the size and diversity of the RS user community. This development also generates a need for validation approaches to assess data quality. Validation approaches employ quality criteria in their assessment. Data quality (DQ) dimensions as the basis for quality criteria have been deeply investigated in the database area and in the remote sensing domain. Several standards exist within the RS domain, but a general classification, established for databases, has been adapted only recently. For an easier identification of research opportunities, a better understanding is required of how quality criteria are employed in the RS lifecycle. Therefore, this research investigates how quality criteria support decisions that guide the RS lifecycle and how they relate to the measured DQ dimensions. An overview of the relevant standards in the RS domain follows, matched to the RS lifecycle. Finally, the research needs are identified that would enable a complete understanding of the interrelationships between the RS lifecycle, the data sources and the DQ dimensions, an understanding that would be very valuable for designing validation approaches in RS.

  15. Water Resources Data North Dakota Water Year 2002 Volume 1. Surface Water

    USGS Publications Warehouse

    Harkness, R.E.; Lundgren, R.F.; Norbeck, S.W.; Robinson, S.M.; Sether, B.A.

    2003-01-01

    Water-resources data for the 2002 water year for North Dakota consist of records of discharge, stage, and water quality for streams; contents, stage, and water quality for lakes and reservoirs; and water levels and water quality for ground-water wells. Volume 1 contains records of water discharge for 106 streamflow-gaging stations; stage only for 22 river-stage stations; contents and/or stage for 14 lake or reservoir stations; annual maximum discharge for 35 crest-stage stations; and water quality for 96 streamflow-gaging stations, 3 river-stage stations, 11 lake or reservoir stations, 8 miscellaneous sample sites on rivers, and 63 miscellaneous sample sites on lakes and wetlands. Data are included for 7 water-quality monitor sites on streams and 2 precipitation-chemistry stations. These data represent that part of the National Water Data System operated by the U.S. Geological Survey and cooperating Federal, State, and local agencies in North Dakota.

  16. Water Resources Data North Dakota Water Year 2003, Volume 1. Surface Water

    USGS Publications Warehouse

    Robinson, S.M.; Lundgren, R.F.; Sether, B.A.; Norbeck, S.W.; Lambrecht, J.M.

    2004-01-01

    Water-resources data for the 2003 water year for North Dakota consist of records of discharge, stage, and water quality for streams; contents, stage, and water quality for lakes and reservoirs; and water levels and water quality for ground-water wells. Volume 1 contains records of water discharge for 108 streamflow-gaging stations; stage only for 24 river-stage stations; contents and/or stage for 14 lake or reservoir stations; annual maximum discharge for 32 crest-stage stations; and water quality for 99 streamflow-gaging stations, 5 river-stage stations, 11 lake or reservoir stations, 8 miscellaneous sample sites on rivers, and 63 miscellaneous sample sites on lakes and wetlands. Data are included for 7 water-quality monitor sites on streams and 2 precipitation-chemistry stations. These data represent that part of the National Water Data System operated by the U.S. Geological Survey and cooperating Federal, State, and local agencies in North Dakota.

  17. Graphical user interface for accessing water-quality data for the Devils Lake basin, North Dakota

    USGS Publications Warehouse

    Ryberg, Karen R.; Damschen, William C.; Vecchia, Aldo V.

    2005-01-01

    Maintaining the quality of surface waters in the Devils Lake Basin in North Dakota is important for protecting the agricultural resources, fisheries, waterfowl and wildlife habitat, and recreational value of the basin. The U.S. Geological Survey, in cooperation with local, State, and Federal agencies, has collected and analyzed water-quality samples from streams and lakes in the basin since 1957, and the North Dakota Department of Health has collected and analyzed water-quality samples from lakes in the basin since 2001. Because these data serve so many interests, a graphical user interface was developed to access, view, and download the historical water-quality data for the basin. The interface is a web-based application that is available to the public and includes data through water year 2003. The interface will be updated periodically to include data for subsequent years.

  18. Water resources data--North Dakota water year 2005, Volume 1. Surface water

    USGS Publications Warehouse

    Robinson, S.M.; Lundgren, R.F.; Sether, B.A.; Norbeck, S.W.; Lambrecht, J.M.

    2006-01-01

    Water-resources data for the 2005 water year for North Dakota consist of records of discharge, stage, and water quality for streams; contents, stage, and water quality for lakes and reservoirs; and water levels and water quality for ground-water wells. Volume 1 contains records of water discharge for 107 streamflow-gaging stations; stage only for 22 river-stage stations; contents and/or stage for 13 lake or reservoir stations; annual maximum discharge for 31 crest-stage stations; and water quality for 93 streamflow-gaging stations, 6 river-stage stations, 15 lake or reservoir stations, and about 50 miscellaneous sample sites on lakes and wetlands. Data are included for 8 water-quality monitor sites on streams and 2 precipitation-chemistry stations. These data represent that part of the National Water Data System operated by the U.S. Geological Survey and cooperating Federal, State, and local agencies in North Dakota.

  19. Water Resources Data North Dakota Water Year 2001, Volume 1. Surface Water

    USGS Publications Warehouse

    Harkness, R.E.; Berkas, W.R.; Norbeck, S.W.; Robinson, S.M.

    2002-01-01

    Water-resources data for the 2001 water year for North Dakota consist of records of discharge, stage, and water quality for streams; contents, stage, and water quality for lakes and reservoirs; and water levels and water quality for ground-water wells. Volume 1 contains records of water discharge for 103 streamflow-gaging stations; stage only for 20 river-stage stations; contents and/or stage for 13 lake or reservoir stations; annual maximum discharge for 35 crest-stage stations; and water quality for 94 streamflow-gaging stations, 2 river-stage stations, 9 lake or reservoir stations, 7 miscellaneous sample sites on rivers, and 58 miscellaneous sample sites on lakes and wetlands. Data are included for 9 water-quality monitor sites on streams and 2 precipitation-chemistry stations. These data represent that part of the National Water Data System operated by the U.S. Geological Survey and cooperating Federal, State, and local agencies in North Dakota.

  20. Urbanization in Pearl River Delta area in past 20 years: remote sensing of impact on water quality

    NASA Astrophysics Data System (ADS)

    Wang, Yunpeng; Fan, Fenglei; Zhang, Jinqu; Xia, Hao; Ye, Chun

    2004-11-01

    The Pearl River Delta of Guangdong province in China has experienced some of the world's most rapid urbanization of the past 20 years. The objective of this research is to explore the relationship between urbanization and water quality in this area. Present and past remote sensing data, including MSS, TM/ETM, and ASTER imagery, are used to study urbanization and its impact on water quality. Land use and water quality information are extracted from the remote sensing data. Data on population and on industrial and agricultural productivity indices are integrated with the thematic maps derived from the remote sensing data using GIS methods. Spatial analysis methods are applied to these data, and the results indicate that population, wastewater from both households and industry, and chemical fertilizer consumption are the main controls on regional water quality and environment.
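
    The kind of spatial analysis described, relating an urbanization indicator extracted from classified imagery to a water quality parameter, can be sketched as below. The per-sub-basin values and names are invented purely for illustration and are not results from the study.

        import numpy as np

        # Hypothetical per-sub-basin figures: built-up land fraction (as
        # would be derived from classified TM/ETM imagery) and a generic
        # water quality index; the numbers are invented for illustration.
        built_up_fraction = np.array([0.12, 0.25, 0.41, 0.55, 0.68])
        water_quality_index = np.array([3.1, 4.0, 5.2, 6.0, 7.4])

        # Pearson correlation between the urbanization indicator and the
        # water quality indicator across sub-basins.
        r = np.corrcoef(built_up_fraction, water_quality_index)[0, 1]
        print(f"r = {r:.2f}")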
