Sample records for quality metric based

  1. Quality measurement for rhinosinusitis: a review from the Quality Improvement Committee of the American Rhinologic Society.

    PubMed

    Rudmik, Luke; Mattos, Jose; Schneider, John; Manes, Peter R; Stokken, Janalee K; Lee, Jivianne; Higgins, Thomas S; Schlosser, Rodney J; Reh, Douglas D; Setzen, Michael; Soler, Zachary M

    2017-09-01

    Measuring quality outcomes is an important prerequisite to improve quality of care. Rhinosinusitis represents a high value target to improve quality of care because it has a high prevalence of disease, large economic burden, and large practice variation. In this study we review the current state of quality measurement for management of both acute (ARS) and chronic rhinosinusitis (CRS). The major national quality metric repositories and clearinghouses were queried. Additional searches included the American Academy of Otolaryngology-Head and Neck Surgery database, PubMed, and Google to attempt to capture any additional quality metrics. Seven quality metrics for ARS and 4 quality metrics for CRS were identified. ARS metrics focused on appropriateness of diagnosis (n = 1), antibiotic prescribing (n = 4), and radiologic imaging (n = 2). CRS quality metrics focused on appropriateness of diagnosis (n = 1), radiologic imaging (n = 1), and measurement of patient quality of life (n = 2). The Physician Quality Reporting System (PQRS) currently tracks 3 ARS quality metrics and 1 CRS quality metric. There are no outcome-based rhinosinusitis quality metrics and no metrics that assess domains of safety, patient-centeredness, and timeliness of care. The current status of quality measurement for rhinosinusitis has focused primarily on the quality domain of efficiency and process measures for ARS. More work is needed to develop, validate, and track outcome-based quality metrics along with CRS-specific metrics. Although there has been excellent work done to improve quality measurement for rhinosinusitis, there remain major gaps and challenges that need to be considered during the development of future metrics. © 2017 ARS-AAOA, LLC.

  2. A guide to calculating habitat-quality metrics to inform conservation of highly mobile species

    USGS Publications Warehouse

    Bieri, Joanna A.; Sample, Christine; Thogmartin, Wayne E.; Diffendorfer, James E.; Earl, Julia E.; Erickson, Richard A.; Federico, Paula; Flockhart, D. T. Tyler; Nicol, Sam; Semmens, Darius J.; Skraber, T.; Wiederholt, Ruscena; Mattsson, Brady J.

    2018-01-01

    Many metrics exist for quantifying the relative value of habitats and pathways used by highly mobile species. Properly selecting and applying such metrics requires substantial background in mathematics and understanding the relevant management arena. To address this multidimensional challenge, we demonstrate and compare three measurements of habitat quality: graph-, occupancy-, and demographic-based metrics. Each metric provides insights into system dynamics, at the expense of increasing amounts and complexity of data and models. Our descriptions and comparisons of diverse habitat-quality metrics provide means for practitioners to overcome the modeling challenges associated with management or conservation of such highly mobile species. Whereas previous guidance for applying habitat-quality metrics has been scattered in diversified tracks of literature, we have brought this information together into an approachable format including accessible descriptions and a modeling case study for a typical example that conservation professionals can adapt for their own decision contexts and focal populations. Considerations for Resource Managers: Management objectives, proposed actions, data availability and quality, and model assumptions are all relevant considerations when applying and interpreting habitat-quality metrics. Graph-based metrics answer questions related to habitat centrality and connectivity, are suitable for populations with any movement pattern, quantify basic spatial and temporal patterns of occupancy and movement, and require the least data. Occupancy-based metrics answer questions about likelihood of persistence or colonization, are suitable for populations that undergo localized extinctions, quantify spatial and temporal patterns of occupancy and movement, and require a moderate amount of data. Demographic-based metrics answer questions about relative or absolute population size, are suitable for populations with any movement pattern, quantify demographic processes and population dynamics, and require the most data. More real-world examples applying occupancy-based, agent-based, and continuous-based metrics to seasonally migratory species are needed to better understand challenges and opportunities for applying these metrics more broadly.

  3. A no-reference video quality assessment metric based on ROI

    NASA Astrophysics Data System (ADS)

    Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan

    2015-01-01

    A no-reference video quality assessment metric based on the region of interest (ROI) was proposed in this paper. In the metric, objective video quality was evaluated by integrating the quality of two compression artifacts, i.e., blurring distortion and blocking distortion. A Gaussian kernel function was used to extract human density maps for the H.264-coded videos from the subjective eye-tracking data. An objective bottom-up ROI extraction model was built from the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Then only the objective saliency maps were used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction model has a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics, which measure all frames of the video, the metric proposed in this paper not only decreased the computational complexity but also improved the correlation between subjective mean opinion score (MOS) and objective scores.

  4. Toward a perceptual video-quality metric

    NASA Astrophysics Data System (ADS)

    Watson, Andrew B.

    1998-07-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.

  5. Quality Measurements in Radiology: A Systematic Review of the Literature and Survey of Radiology Benefit Management Groups.

    PubMed

    Narayan, Anand; Cinelli, Christina; Carrino, John A; Nagy, Paul; Coresh, Josef; Riese, Victoria G; Durand, Daniel J

    2015-11-01

    As the US health care system transitions toward value-based reimbursement, there is an increasing need for metrics to quantify health care quality. Within radiology, many quality metrics are in use, and still more have been proposed, but there have been limited attempts to systematically inventory these measures and classify them using a standard framework. The purpose of this study was to develop an exhaustive inventory of public and private sector imaging quality metrics classified according to the classic Donabedian framework (structure, process, and outcome). A systematic review was performed in which eligibility criteria included published articles (from 2000 onward) from multiple databases. Studies were double-read, with discrepancies resolved by consensus. For the radiology benefit management group (RBM) survey, the six nationally known companies were surveyed. Outcome measures were organized on the basis of standard categories (structure, process, and outcome) and reported using Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The search strategy yielded 1,816 citations; review yielded 110 reports (29 included for final analysis). Three of six RBMs (50%) responded to the survey; the websites of the other RBMs were searched for additional metrics. Seventy-five unique metrics were reported: 35 structure (46%), 20 outcome (27%), and 20 process (27%) metrics. For RBMs, 35 metrics were reported: 27 structure (77%), 4 process (11%), and 4 outcome (11%) metrics. The most commonly cited structure, process, and outcome metrics included ACR accreditation (37%), ACR Appropriateness Criteria (85%), and peer review (95%), respectively. Imaging quality metrics are more likely to be structural (46%) than process (27%) or outcome (27%) based (P < .05). As national value-based reimbursement programs increasingly emphasize outcome-based metrics, radiologists must keep pace by developing the data infrastructure required to collect outcome-based quality metrics. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  6. Quality evaluation of extracted ion chromatograms and chromatographic peaks in liquid chromatography/mass spectrometry-based metabolomics data

    PubMed Central

    2014-01-01

    Background: Extracted ion chromatogram (EIC) extraction and chromatographic peak detection are two important processing procedures in liquid chromatography/mass spectrometry (LC/MS)-based metabolomics data analysis. Most commonly, the LC/MS technique employs electrospray ionization as the ionization method. The EICs from LC/MS data are often noisy and contain high background signals. Furthermore, the chromatographic peak quality varies with respect to its location in the chromatogram and most peaks have zigzag shapes. Therefore, there is a critical need to develop effective metrics for quality evaluation of EICs and chromatographic peaks in LC/MS-based metabolomics data analysis. Results: We investigated a comprehensive set of potential quality evaluation metrics for extracted EICs and detected chromatographic peaks. Specifically, for EIC quality evaluation, we analyzed the mass chromatographic quality index (MCQ index) and propose a novel quality evaluation metric, the EIC-related global zigzag index, which is based on an EIC's first-order derivatives. For chromatographic peak quality evaluation, we analyzed and compared six metrics: sharpness, Gaussian similarity, signal-to-noise ratio, peak significance level, triangle peak area similarity ratio and the peak-related local zigzag index. Conclusions: Although the MCQ index is suited for selecting and aligning analyte components, it cannot fairly evaluate EICs with high background signals or those containing only a single peak. Our proposed EIC-related global zigzag index is robust enough to evaluate EIC qualities in both scenarios. Of the six peak quality evaluation metrics, the sharpness, peak significance level, and zigzag index outperform the others due to the zigzag nature of LC/MS chromatographic peaks. Furthermore, using several peak quality metrics in combination is more efficient than individual metrics in peak quality evaluation. PMID:25350128
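
    The abstract states that the global zigzag index is computed from an EIC's first-order derivatives. Below is a minimal, hedged sketch of one plausible formulation, in which slope sign changes are penalized in proportion to local curvature and normalized by signal energy; the paper's exact definition may differ, and the test signal is synthetic.

    ```python
    import numpy as np

    def global_zigzag_index(eic):
        """Illustrative zigzag index: quantify the noisiness of an EIC from its
        first-order differences. Lower values indicate a smoother chromatogram.
        This is a sketch, not the paper's exact definition."""
        eic = np.asarray(eic, dtype=float)
        d1 = np.diff(eic)                      # first-order derivative (finite difference)
        curvature = np.diff(d1)                # second difference at interior points
        sign_change = (d1[:-1] * d1[1:]) < 0   # True where the slope flips direction
        penalty = np.sum(curvature[sign_change] ** 2)
        # Normalize by signal energy so the index is scale-invariant.
        return penalty / (np.sum((eic - eic.mean()) ** 2) + 1e-12)

    # Example: a smooth Gaussian peak vs. the same peak with added noise.
    x = np.linspace(-3, 3, 200)
    smooth = np.exp(-x ** 2)
    noisy = smooth + np.random.default_rng(0).normal(0, 0.05, x.size)
    print(global_zigzag_index(smooth), global_zigzag_index(noisy))
    ```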

  7. Quality evaluation of extracted ion chromatograms and chromatographic peaks in liquid chromatography/mass spectrometry-based metabolomics data.

    PubMed

    Zhang, Wenchao; Zhao, Patrick X

    2014-01-01

    Extracted ion chromatogram (EIC) extraction and chromatographic peak detection are two important processing procedures in liquid chromatography/mass spectrometry (LC/MS)-based metabolomics data analysis. Most commonly, the LC/MS technique employs electrospray ionization as the ionization method. The EICs from LC/MS data are often noisy and contain high background signals. Furthermore, the chromatographic peak quality varies with respect to its location in the chromatogram and most peaks have zigzag shapes. Therefore, there is a critical need to develop effective metrics for quality evaluation of EICs and chromatographic peaks in LC/MS-based metabolomics data analysis. We investigated a comprehensive set of potential quality evaluation metrics for extracted EICs and detected chromatographic peaks. Specifically, for EIC quality evaluation, we analyzed the mass chromatographic quality index (MCQ index) and propose a novel quality evaluation metric, the EIC-related global zigzag index, which is based on an EIC's first-order derivatives. For chromatographic peak quality evaluation, we analyzed and compared six metrics: sharpness, Gaussian similarity, signal-to-noise ratio, peak significance level, triangle peak area similarity ratio and the peak-related local zigzag index. Although the MCQ index is suited for selecting and aligning analyte components, it cannot fairly evaluate EICs with high background signals or those containing only a single peak. Our proposed EIC-related global zigzag index is robust enough to evaluate EIC qualities in both scenarios. Of the six peak quality evaluation metrics, the sharpness, peak significance level, and zigzag index outperform the others due to the zigzag nature of LC/MS chromatographic peaks. Furthermore, using several peak quality metrics in combination is more efficient than individual metrics in peak quality evaluation.

  8. The data quality analyzer: A quality control program for seismic data

    NASA Astrophysics Data System (ADS)

    Ringler, A. T.; Hagerty, M. T.; Holland, J.; Gonzales, A.; Gee, L. S.; Edwards, J. D.; Wilson, D.; Baker, A. M.

    2015-03-01

    The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several initiatives underway to enhance and track the quality of data produced from ASL seismic stations and to improve communication about data problems to the user community. The Data Quality Analyzer (DQA) is one such development and is designed to characterize seismic station data quality in a quantitative and automated manner. The DQA consists of a metric calculator, a PostgreSQL database, and a Web interface: The metric calculator, SEEDscan, is a Java application that reads and processes miniSEED data and generates metrics based on a configuration file. SEEDscan compares hashes of metadata and data to detect changes in either and performs subsequent recalculations as needed. This ensures that the metric values are up to date and accurate. SEEDscan can be run as a scheduled task or on demand. The PostgreSQL database acts as a central hub where metric values and limited station descriptions are stored at the channel level with one-day granularity. The Web interface dynamically loads station data from the database and allows the user to make requests for time periods of interest, review specific networks and stations, plot metrics as a function of time, and adjust the contribution of various metrics to the overall quality grade of the station. The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a "grade" for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
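
    The abstract describes aggregating user-selected metrics into an overall station "grade". A minimal sketch of that weighted aggregation step is shown below; the metric names, 0-100 scaling, and weights are illustrative assumptions, not the DQA's actual configuration.

    ```python
    # Hedged sketch: aggregate per-channel metric values into a station "grade".
    # Metric names, scaling, and weights are placeholders, not the DQA's own.
    metrics = {
        "availability": 99.2,           # percent of expected data present
        "timing_quality": 95.0,         # mean miniSEED timing quality
        "gap_count": 92.0,              # rescaled so fewer gaps -> higher score
        "noise_vs_global_model": 88.5,  # deviation from a global noise model, rescaled
    }

    weights = {
        "availability": 0.4,
        "timing_quality": 0.2,
        "gap_count": 0.2,
        "noise_vs_global_model": 0.2,
    }

    def station_grade(metrics, weights):
        """Weighted average over whichever metrics the user chose to include."""
        selected = {k: v for k, v in metrics.items() if k in weights}
        total_w = sum(weights[k] for k in selected)
        return sum(weights[k] * v for k, v in selected.items()) / total_w

    print(f"Station grade: {station_grade(metrics, weights):.1f}")
    ```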

  9. Synthesized view comparison method for no-reference 3D image quality assessment

    NASA Astrophysics Data System (ADS)

    Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun

    2018-04-01

    We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric is named Synthesized View Comparison (SVC) and is designed for real-time quality monitoring at the receiver side in a 3D-TV system. The metric utilizes virtual views at the middle viewpoint, which are warped from the left and right views by a depth-image-based rendering (DIBR) algorithm, and compares the difference between the virtual views rendered from different cameras using Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for the synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
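
    A minimal sketch of the comparison step described above, assuming the two middle-viewpoint virtual views (one warped from the left camera, one from the right) are already available as grayscale arrays; the DIBR warping itself is outside the sketch, and scikit-image's SSIM stands in as the 2D full-reference metric.

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def svc_score(virtual_from_left, virtual_from_right):
        """Hedged sketch of a Synthesized View Comparison style score:
        compare two virtual views warped to the same middle viewpoint with SSIM.
        Higher SSIM -> the two syntheses agree -> higher predicted quality."""
        data_range = virtual_from_left.max() - virtual_from_left.min()
        return ssim(virtual_from_left, virtual_from_right, data_range=data_range)

    # Toy usage with synthetic arrays standing in for DIBR-warped views.
    rng = np.random.default_rng(1)
    base = rng.random((240, 320))
    left_warp = base + rng.normal(0, 0.01, base.shape)    # placeholder warped view
    right_warp = base + rng.normal(0, 0.02, base.shape)   # placeholder warped view
    print(f"SVC (SSIM between virtual views): {svc_score(left_warp, right_warp):.3f}")
    ```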

  10. The data quality analyzer: a quality control program for seismic data

    USGS Publications Warehouse

    Ringler, Adam; Hagerty, M.T.; Holland, James F.; Gonzales, A.; Gee, Lind S.; Edwards, J.D.; Wilson, David; Baker, Adam

    2015-01-01

    The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a “grade” for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.

  11. Quality of service routing in the differentiated services framework

    NASA Astrophysics Data System (ADS)

    Oliveira, Marilia C.; Melo, Bruno; Quadros, Goncalo; Monteiro, Edmundo

    2001-02-01

    In this paper we present a quality of service routing strategy for networks where traffic differentiation follows the class-based paradigm, as in the Differentiated Services framework. This routing strategy is based on a quality of service metric. This metric represents the impact that the delay and losses observed at each router in the network have on application performance. Based on this metric, a path is selected for each class according to the class's sensitivity to delay and losses. The distribution of the metric is triggered by a relative criterion with two thresholds, and the advertised values are the moving average of the last values measured.

  12. A condition metric for Eucalyptus woodland derived from expert evaluations.

    PubMed

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

    The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
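
    A minimal sketch of the kind of model described above (an ensemble of 30 bagged regression trees predicting perceived site quality from 13 site variables), using scikit-learn; the synthetic training data and variable scaling here are placeholders, not the authors' expert-elicitation data.

    ```python
    import numpy as np
    from sklearn.ensemble import BaggingRegressor
    from sklearn.tree import DecisionTreeRegressor

    # Placeholder expert-evaluation data: 200 hypothetical sites described by
    # 13 site variables (e.g., shrub cover, native forb richness), each with an
    # expert quality score. Real data would come from the structured elicitation.
    rng = np.random.default_rng(42)
    X = rng.random((200, 13))
    y = 100 * X[:, :5].mean(axis=1) + rng.normal(0, 5, 200)  # synthetic scores

    # Ensemble of 30 bagged regression trees, as described in the abstract.
    model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=30, random_state=0)
    model.fit(X, y)

    new_site = rng.random((1, 13))   # 13 measured variables for a new site
    print(f"Predicted quality score: {model.predict(new_site)[0]:.1f}")
    ```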

  13. Perceived assessment metrics for visible and infrared color fused image quality without reference image

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao

    2015-02-01

    Designing an objective quality assessment for color-fused images is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a varying contrast sensitivity filter (CSF) with color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.

  14. A management-oriented framework for selecting metrics used to assess habitat- and path-specific quality in spatially structured populations

    USGS Publications Warehouse

    Nicol, Sam; Wiederholt, Ruscena; Diffendorfer, James E.; Mattsson, Brady; Thogmartin, Wayne E.; Semmens, Darius J.; Laura Lopez-Hoffman,; Norris, Ryan

    2016-01-01

    Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers with problem framing, with choosing metrics of habitat and pathway quality, and with elucidating the data needs of a particular metric. Our goal is to help managers narrow the range of suitable metrics for a management project and to aid decision-making that makes the best use of limited resources.

  15. Towards a Visual Quality Metric for Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1998-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  16. A no-reference image and video visual quality metric based on machine learning

    NASA Astrophysics Data System (ADS)

    Frantc, Vladimir; Voronin, Viacheslav; Semenishchev, Evgenii; Minkin, Maxim; Delov, Aliy

    2018-04-01

    The paper presents a novel visual quality metric for assessing lossy compressed video. Its high degree of correlation with subjective quality estimates is due to the use of a convolutional neural network trained on a large number of video sequence and subjective quality score pairs. We demonstrate how our predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset with comparison to existing approaches.

  17. An objective method for a video quality evaluation in a 3DTV service

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2015-09-01

    The following article describes a proposed objective method for 3DTV video quality evaluation, the Compressed Average Image Intensity (CAII) method. Identification of the 3DTV service's content chain nodes enables the design of a versatile, objective video quality metric. It is based on an advanced approach to stereoscopic videostream analysis. Insights into the designed metric's mechanisms, as well as an evaluation of its performance under simulated environmental conditions, are discussed herein. As a result, the CAII metric might be effectively used in a variety of service quality assessment applications.

  18. Testing, Requirements, and Metrics

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William

    1998-01-01

    The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements-based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and tests to avoid problems later. Requirements management and requirements-based testing have always been critical in the implementation of high-quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provide real-time insight into the testing of requirements, and these metrics assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.

  19. A software quality model and metrics for risk assessment

    NASA Technical Reports Server (NTRS)

    Hyatt, L.; Rosenberg, L.

    1996-01-01

    A software quality model and its associated attributes are defined and used as the basis for a discussion on risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.

  20. Image quality metrics for volumetric laser displays

    NASA Astrophysics Data System (ADS)

    Williams, Rodney D.; Donohoo, Daniel

    1991-08-01

    This paper addresses the extensions to the image quality metrics and related human factors research that are needed to establish the baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design tradeoffs for these prototype laser-based volume displays are addressed and several critical image quality issues identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for this new volume display technology.

  1. Automated Assessment of Visual Quality of Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)

    1997-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  2. Quality assessment for color reproduction using a blind metric

    NASA Astrophysics Data System (ADS)

    Bringier, B.; Quintard, L.; Larabi, M.-C.

    2007-01-01

    This paper deals with image quality assessment, a field that nowadays plays an important role in various image processing applications. A number of objective image quality metrics, which may or may not correlate with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished: full-reference and no-reference. A full-reference metric tries to evaluate the distortion introduced to an image with regard to a reference. A no-reference approach attempts to model the judgment of image quality in a blind way. Unfortunately, a universal image quality model is not on the horizon, and empirical models established on psychophysical experimentation are generally used. In this paper, we focus only on the second category to evaluate the quality of color reproduction, and a blind metric based on human visual system modeling is introduced. The objective results are validated by single-media and cross-media subjective tests.

  3. Research on quality metrics of wireless adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically. Adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, good QoS (Quality of Service) of the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate the quality metrics of wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of these quality metrics can be observed.
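
    A minimal sketch of the three performance metrics mentioned above, computed between subjective MOS and a model's predicted scores with SciPy/NumPy; the score arrays are placeholders. (In practice PLCC and RMSE are often computed after a nonlinear mapping of predicted scores onto MOS, which is omitted here for brevity.)

    ```python
    import numpy as np
    from scipy.stats import spearmanr, pearsonr

    # Placeholder subjective MOS and predicted scores for 8 test sequences.
    mos = np.array([4.5, 3.8, 2.1, 4.0, 1.5, 3.2, 2.8, 4.8])
    predicted = np.array([4.3, 3.5, 2.4, 4.1, 1.8, 3.0, 3.1, 4.6])

    srocc, _ = spearmanr(mos, predicted)               # monotonicity
    plcc, _ = pearsonr(mos, predicted)                 # linearity
    rmse = np.sqrt(np.mean((mos - predicted) ** 2))    # accuracy

    print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}  RMSE={rmse:.3f}")
    ```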

  4. Quality metrics in high-dimensional data visualization: an overview and systematization.

    PubMed

    Bertini, Enrico; Tatu, Andrada; Keim, Daniel

    2011-12-01

    In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE

  5. Towards the XML schema measurement based on mapping between XML and OO domain

    NASA Astrophysics Data System (ADS)

    Rakić, Gordana; Budimac, Zoran; Heričko, Marjan; Pušnik, Maja

    2017-07-01

    Measuring the quality of IT solutions is a priority in software engineering. Although numerous metrics for measuring object-oriented code already exist, measuring the quality of UML models or XML schemas is still developing. One of the research questions in the overall research led by the ideas described in this paper is whether we can apply already defined object-oriented design metrics to XML schemas based on predefined mappings. In this paper, basic ideas for the mentioned mapping are presented. This mapping is a prerequisite for setting up the future approach to XML schema quality measurement with object-oriented metrics.
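
    A minimal sketch of the mapping idea, assuming named complex types play the role of classes and their element/attribute declarations the role of class attributes, so that a simple OO-style size metric can be computed from an XSD; these mapping rules are an illustrative subset, not the authors' full definition, and the file name in the usage comment is hypothetical.

    ```python
    import xml.etree.ElementTree as ET

    XSD_NS = "{http://www.w3.org/2001/XMLSchema}"

    def oo_style_metrics(xsd_path):
        """Hedged sketch: treat each named complexType as a 'class' and count its
        element/attribute declarations as 'attributes', a crude analogue of an
        object-oriented class-size metric."""
        tree = ET.parse(xsd_path)
        metrics = {}
        for ctype in tree.getroot().iter(f"{XSD_NS}complexType"):
            name = ctype.get("name")
            if name is None:      # skip anonymous types in this sketch
                continue
            members = (ctype.findall(f".//{XSD_NS}element")
                       + ctype.findall(f".//{XSD_NS}attribute"))
            metrics[name] = len(members)
        return metrics

    # Usage (hypothetical file name): number of "attributes" per "class".
    # print(oo_style_metrics("order.xsd"))
    ```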

  6. A Locally Weighted Fixation Density-Based Metric for Assessing the Quality of Visual Saliency Predictions

    NASA Astrophysics Data System (ADS)

    Gide, Milind S.; Karam, Lina J.

    2016-08-01

    With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, there are notable problems in them. In this work, we discuss shortcomings in existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density which overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.

  7. Improvement of impact noise in a passenger car utilizing sound metric based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Kwon; Kim, Ho-Wuk; Na, Eun-Woo

    2010-08-01

    A new sound metric for impact sound is developed based on the continuous wavelet transform (CWT), a useful tool for the analysis of non-stationary signals such as impact noise. Together with the new metric, two other conventional sound metrics related to sound modulation and fluctuation are also considered. In all, three sound metrics are employed to develop impact sound quality indexes for several specific impact courses on the road. Impact sounds are evaluated subjectively by 25 jurors. The indexes are verified by comparing the correlation between the index output and the results of a subjective evaluation based on a jury test. These indexes are successfully applied to an objective evaluation for improvement of the impact sound quality in cases where some parts of the suspension system of the test car are modified.
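
    A minimal sketch of a CWT-based analysis of a transient (impact-like) signal using PyWavelets; the choice of wavelet, scales, and the simple "impact prominence" summary below are illustrative assumptions rather than the paper's metric.

    ```python
    import numpy as np
    import pywt

    fs = 8192                                    # sampling rate in Hz (assumed)
    t = np.arange(0, 1.0, 1 / fs)
    # Placeholder impact-like signal: a decaying 400 Hz burst plus background noise.
    signal = (np.exp(-30 * t) * np.sin(2 * np.pi * 400 * t)
              + 0.02 * np.random.default_rng(0).normal(size=t.size))

    scales = np.arange(1, 64)
    coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

    # Illustrative descriptor: peak time-frequency energy relative to the mean,
    # capturing how sharply the impact stands out from the background.
    energy = np.abs(coeffs) ** 2
    impact_prominence = energy.max() / energy.mean()
    print(f"Impact prominence (illustrative metric): {impact_prominence:.1f}")
    ```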

  8. Toward objective image quality metrics: the AIC Eval Program of the JPEG

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Larabi, Chaker

    2008-08-01

    Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric in a non-traditional way indirectly, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach shall be demonstrated here on the recently proposed HDPhoto format [14] introduced by Microsoft and an SSIM-tuned [17] version of it by one of the authors. We compare these two implementations with JPEG [1] in two variations and a visually and PSNR-optimal JPEG2000 [13] implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT-based metric.

  9. Image quality assessment metric for frame accumulated image

    NASA Astrophysics Data System (ADS)

    Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling

    2018-01-01

    Medical image quality determines the accuracy of diagnosis, and gray-scale resolution is an important parameter for measuring image quality. However, current objective metrics are not very suitable for assessing medical images obtained by frame accumulation technology: little attention has been paid to gray-scale resolution, as existing metrics are based mainly on spatial resolution and are limited to the 256-level gray scale of existing display devices. Thus, this paper proposes a metric, the mean signal-to-noise ratio (MSNR), based on signal-to-noise considerations, in order to evaluate frame-accumulated medical image quality more reasonably. We demonstrate its potential application through a series of images acquired under a constant illumination signal. Here, the mean image of a sufficient number of images was regarded as the reference image. Several groups of images produced with different amounts of frame accumulation were generated and their MSNR values calculated. The results of the experiment show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images that surpass the gray scale and precision of the original image.

  10. Sigma metrics used to assess analytical quality of clinical chemistry assays: importance of the allowable total error (TEa) target.

    PubMed

    Hens, Koen; Berth, Mario; Armbruster, Dave; Westgard, Sten

    2014-07-01

    Six Sigma metrics were used to assess the analytical quality of automated clinical chemistry and immunoassay tests in a large Belgian clinical laboratory and to explore the importance of the source used for estimation of the allowable total error. Clinical laboratories are continually challenged to maintain analytical quality. However, it is difficult to measure assay quality objectively and quantitatively. The Sigma metric is a single number that estimates quality based on the traditional parameters used in the clinical laboratory: allowable total error (TEa), precision and bias. In this study, Sigma metrics were calculated for 41 clinical chemistry assays for serum and urine on five ARCHITECT c16000 chemistry analyzers. Controls at two analyte concentrations were tested and Sigma metrics were calculated using three different TEa targets (Ricos biological variability, CLIA, and RiliBÄK). Sigma metrics varied with analyte concentration, the TEa target, and between/among analyzers. Sigma values identified those assays that are analytically robust and require minimal quality control rules and those that exhibit more variability and require more complex rules. The analyzer to analyzer variability was assessed on the basis of Sigma metrics. Six Sigma is a more efficient way to control quality, but the lack of TEa targets for many analytes and the sometimes inconsistent TEa targets from different sources are important variables for the interpretation and the application of Sigma metrics in a routine clinical laboratory. Sigma metrics are a valuable means of comparing the analytical quality of two or more analyzers to ensure the comparability of patient test results.
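
    The Sigma metric referred to above combines the allowable total error target with an assay's observed bias and imprecision. A minimal sketch of that calculation, with placeholder numbers, is shown below.

    ```python
    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Sigma metric as commonly computed in the clinical laboratory:
        (allowable total error - |bias|) / CV, all expressed in percent."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Placeholder example for one analyte at one control level,
    # using a biological-variability-style TEa target.
    tea = 10.0   # allowable total error, %
    bias = 1.2   # observed bias vs. peer group or reference, %
    cv = 1.5     # observed imprecision (coefficient of variation), %

    print(f"Sigma = {sigma_metric(tea, bias, cv):.1f}")  # -> Sigma = 5.9
    ```

    The same assay can yield quite different Sigma values depending on which TEa source is used (Ricos biological variability, CLIA, or RiliBÄK), which is the point the abstract emphasizes.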

  11. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics especially to different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of the underwater image pixels in the CIELab color space related to subjective evaluation indicates the sharpness and colorful factors correlate well with subjective image quality perception. Based on these, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low-contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure the underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score.
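
    The abstract describes UCIQE as a linear combination of chroma, saturation, and contrast statistics in CIELab. Below is a minimal sketch of that structure with placeholder weights (the published coefficients are not reproduced here) and a simple percentile-based luminance contrast; the saturation formula and file name are illustrative assumptions.

    ```python
    import numpy as np
    from skimage import color

    def uciqe_like(rgb_image, w1=1.0, w2=1.0, w3=1.0):
        """Hedged sketch of a UCIQE-style score: weighted sum of chroma standard
        deviation, luminance contrast, and mean saturation in CIELab space.
        The weights here are placeholders, not the published coefficients."""
        lab = color.rgb2lab(rgb_image)
        L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
        chroma = np.sqrt(a ** 2 + b ** 2)
        sigma_c = chroma.std()                                  # chroma spread
        con_l = np.percentile(L, 99) - np.percentile(L, 1)      # luminance contrast
        saturation = chroma / (np.sqrt(chroma ** 2 + L ** 2) + 1e-12)
        mu_s = saturation.mean()                                # mean saturation
        return w1 * sigma_c + w2 * con_l + w3 * mu_s

    # Usage (hypothetical file name):
    # from skimage import io
    # img = io.imread("underwater.png")[..., :3] / 255.0
    # print(uciqe_like(img))
    ```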

  12. PQSM-based RR and NR video quality metrics

    NASA Astrophysics Data System (ADS)

    Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu

    2003-06-01

    This paper presents a new and general concept, the PQSM (Perceptual Quality Significance Map), to be used in measuring visual distortion. It makes use of the selective characteristic of the HVS (Human Visual System), which pays more attention to certain areas/regions of a visual signal due to one or more of the following factors: salient features in the image/video, cues from domain knowledge, and association with other media (e.g., speech or audio). The PQSM is an array whose elements represent the relative perceptual-quality significance levels of the corresponding areas/regions of an image or video. Due to its generality, the PQSM can be incorporated into any visual distortion metric: to improve the effectiveness and/or efficiency of perceptual metrics, or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show the scheme can improve the performance of current image/video distortion metrics.

  13. Spread spectrum image watermarking based on perceptual quality metric.

    PubMed

    Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi

    2011-11-01

    Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in Karhunen-Loève transform domain. Compared with the state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by the typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark to minimize the distortion of the watermarked image and to maximize the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.

  14. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    PubMed

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have emerged along with 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), due to its applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn a wide range of researchers' attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. But existing assessment metrics do not render human judgments faithfully, mainly because geometric distortions are generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to refine the proposed blind quality metric, improving it by a sizable margin. Experiments validate the superiority of our no-reference quality method as compared with prevailing full-, reduced- and no-reference models.

  15. Using fish communities to assess streams in Romania: Initial development of an index of biotic integrity

    USGS Publications Warehouse

    Angermeier, P.L.; Davideanu, G.

    2004-01-01

    Multimetric biotic indices increasingly are used to complement physicochemical data in assessments of stream quality. We initiated development of multimetric indices, based on fish communities, to assess biotic integrity of streams in two physiographic regions of central Romania. Unlike previous efforts to develop such indices for European streams, our metrics and scoring criteria were selected largely on the basis of empirical relations in the regions of interest. We categorised 54 fish species with respect to ten natural-history attributes, then used this information to compute 32 candidate metrics of five types (taxonomic, tolerance, abundance, reproductive, and feeding) for each of 35 sites. We assessed the utility of candidate metrics for detecting anthropogenic impact based on three criteria: (a) range of values taken, (b) relation to a site-quality index (SQI), which incorporated information on hydrologic alteration, channel alteration, land-use intensity, and water chemistry, and (c) metric redundancy. We chose seven metrics from each region to include in preliminary multimetric indices (PMIs). Both PMIs included taxonomic, tolerance, and feeding metrics, but only two metrics were common to both PMIs. Although we could not validate our PMIs, their strong association with the SQI in each region suggests that such indices would be valuable tools for assessing stream quality and could provide more comprehensive assessments than the traditional approaches based solely on water chemistry.

  16. Evaluation techniques and metrics for assessment of pan+MSI fusion (pansharpening)

    NASA Astrophysics Data System (ADS)

    Mercovich, Ryan A.

    2015-05-01

    Fusion of broadband panchromatic data with narrow band multispectral data - pansharpening - is a common and often studied problem in remote sensing. Many methods exist to produce data fusion results with the best possible spatial and spectral characteristics, and a number have been commercially implemented. This study examines the output products of 4 commercial implementations with regard to their relative strengths and weaknesses for a set of defined image characteristics and analyst use-cases. Image characteristics used are spatial detail, spatial quality, spectral integrity, and composite color quality (hue and saturation), and analyst use-cases included a variety of object detection and identification tasks. The imagery comes courtesy of the RIT SHARE 2012 collect. Two approaches are used to evaluate the pansharpening methods: analyst evaluation (qualitative measures) and image quality metrics (quantitative measures). Visual analyst evaluation results are compared with metric results to determine which metrics best measure the defined image characteristics and product use-cases, and to support future rigorous characterization of the metrics' correlation with the analyst results. Because pansharpening represents a trade between adding spatial information from the panchromatic image and retaining spectral information from the MSI channels, the metrics examined are grouped into spatial improvement metrics and spectral preservation metrics. A single metric to quantify the quality of a pansharpening method would necessarily be a combination of weighted spatial and spectral metrics based on the importance of various spatial and spectral characteristics for the primary task of interest. Appropriate metrics and weights for such a combined metric are proposed here, based on the conducted analyst evaluation. Additionally, during this work, a metric was developed specifically focused on assessment of spatial structure improvement relative to a reference image and independent of scene content. Using analysis of Fourier transform images, a measure of high-frequency content is computed in small sub-segments of the image. The average increase in high-frequency content across the image is used as the metric, where averaging across sub-segments combats the scene-dependent nature of typical image sharpness techniques. This metric had an improved range of scores, better representing differences in the test set than other common spatial structure metrics.
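
    A minimal sketch of the tile-wise Fourier measure described at the end of the abstract: the high-frequency energy fraction is computed in small sub-segments and the average increase over a reference image is reported; the tile size, frequency cutoff, and toy data are illustrative assumptions.

    ```python
    import numpy as np

    def high_freq_fraction(tile, cutoff=0.25):
        """Fraction of spectral energy above a normalized radial frequency cutoff."""
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(tile))) ** 2
        h, w = tile.shape
        fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
        fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
        radius = np.sqrt(fx ** 2 + fy ** 2)
        return spectrum[radius > cutoff].sum() / (spectrum.sum() + 1e-12)

    def mean_sharpness_gain(image, reference, tile=32):
        """Average increase in high-frequency content over sub-segments: a sketch
        of the scene-independent spatial-structure metric described above."""
        gains = []
        for y in range(0, image.shape[0] - tile + 1, tile):
            for x in range(0, image.shape[1] - tile + 1, tile):
                gains.append(high_freq_fraction(image[y:y+tile, x:x+tile])
                             - high_freq_fraction(reference[y:y+tile, x:x+tile]))
        return float(np.mean(gains))

    # Toy usage: a reference image vs. a version with extra high-frequency content.
    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))
    sharpened = ref + 0.1 * rng.normal(size=ref.shape)
    print(f"Mean high-frequency gain: {mean_sharpness_gain(sharpened, ref):.4f}")
    ```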

  17. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard to compare amongst candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics' quality, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates' ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates' quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates' behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with an eCNR of 0.12 resulted in statistically better segmentation (mean DSC of about 0.85, first and third quartiles of (0.83, 0.89)) than MSD with an eCNR of 0.10 (mean DSC of 0.84, first and third quartiles of (0.81, 0.89)). Conclusion: The designed eCNR is capable of characterizing surrogate metrics' quality in prognosticating the oracle relevance value. It has been demonstrated to be correlated with the performance of relevant atlas selection and ultimate label fusion.

  18. Integrating automated support for a software management cycle into the TAME system

    NASA Technical Reports Server (NTRS)

    Sunazuka, Toshihiko; Basili, Victor R.

    1989-01-01

    Software managers are interested in the quantitative management of software quality, cost and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm. This methodology helps generate a software engineering process and measurement environment based on the project characteristics. The SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes. It is based on the feed-forward control principle. Quality target setting is carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal-oriented measurement, process control and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.

  19. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    NASA Astrophysics Data System (ADS)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention features of the human visual system. Spatial non-uniformity means that different locations in an image are of different importance in terms of perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM based on image structural information, VIF based on the information that the human brain can ideally gain from the reference image, or FSIM utilizing low-level features to assign a different importance to each location in the image. But still none of these objective metrics utilizes the analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper, the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROIs were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI in fine quality while the rest of the image is reconstructed with low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
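    A minimal sketch of how a full-reference metric such as SSIM could be restricted to a region of interest is shown below, using scikit-image's SSIM map and a binary ROI mask; the fixed ROI weight is an illustrative assumption, not the weighting the authors call for.

      import numpy as np
      from skimage.metrics import structural_similarity

      def roi_weighted_ssim(reference, reconstructed, roi_mask, roi_weight=0.8):
          # Compute the full SSIM map, then pool it separately inside and outside the ROI.
          _, ssim_map = structural_similarity(
              reference, reconstructed,
              data_range=float(reference.max() - reference.min()), full=True)
          roi_score = ssim_map[roi_mask].mean()
          background_score = ssim_map[~roi_mask].mean()
          return roi_weight * roi_score + (1.0 - roi_weight) * background_score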

  20. Large radius of curvature measurement based on the evaluation of interferogram-quality metric in non-null interferometry

    NASA Astrophysics Data System (ADS)

    Yang, Zhongming; Dou, Jiantai; Du, Jinyu; Gao, Zhishan

    2018-03-01

    Non-null interferometry can be used to measure the radius of curvature (ROC); we previously presented a virtual quadratic Newton rings phase-shifting moiré-fringes measurement method for large ROC measurement (Yang et al., 2016). In this paper, we propose a large ROC measurement method based on the evaluation of an interferogram-quality metric in a non-null interferometer. With the multi-configuration model of the non-null interferometric system in ZEMAX, the retrace errors and the phase introduced by the test surface are reconstructed. The interferogram-quality metric is obtained from the normalized phase-shifted testing Newton rings with the spherical surface model in the non-null interferometric system. The radius of curvature of the test spherical surface is obtained when the minimum of the interferogram-quality metric is found. Simulations and experimental results verify the feasibility of the proposed method. For a spherical mirror with a ROC of 41,400 mm, the measurement accuracy is better than 0.13%.

  1. Quality Measures for Dialysis: Time for a Balanced Scorecard

    PubMed Central

    2016-01-01

    Recent federal legislation establishes a merit-based incentive payment system for physicians, with a scorecard for each professional. The Centers for Medicare and Medicaid Services evaluate quality of care with clinical performance measures and have used these metrics for public reporting and payment to dialysis facilities. Similar metrics may be used for the future merit-based incentive payment system. In nephrology, most clinical performance measures measure processes and intermediate outcomes of care. These metrics were developed from population studies of best practice and do not identify opportunities for individualizing care on the basis of patient characteristics and individual goals of treatment. The In-Center Hemodialysis (ICH) Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey examines patients' perception of care and has entered the arena to evaluate quality of care. A balanced scorecard of quality performance should include three elements: population-based best clinical practice, patient perceptions, and individually crafted patient goals of care. PMID:26316622

  2. An exploratory survey of methods used to develop measures of performance

    NASA Astrophysics Data System (ADS)

    Hamner, Kenneth L.; Lafleur, Charles A.

    1993-09-01

    Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine that the OMB Generic Method was the most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metric development method. Those components were incorporated into a proposed metric development method that was based on the OMB Generic Method and should be more likely to produce high-quality metrics that will result in continuous process improvement.

  3. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    PubMed

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra- and inter-prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using the subjective impairments (blockiness, blur and jerkiness), compared to the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.
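    The sketch below illustrates the general idea of estimating blockiness, blur, and jerkiness without a reference and pooling them into one score; the individual estimators and the linear weights are assumptions for illustration, not the model proposed in the paper.

      import numpy as np

      def blockiness(frame, block=8):
          # Horizontal gradient energy at 8-pixel block boundaries relative to elsewhere.
          diff = np.abs(np.diff(frame.astype(float), axis=1))
          return diff[:, block - 1::block].mean() / (diff.mean() + 1e-9)

      def blur(frame):
          # Inverse variance of a discrete Laplacian: larger values mean a blurrier frame.
          f = frame.astype(float)
          lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                 np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
          return 1.0 / (lap.var() + 1e-9)

      def jerkiness(frames):
          # Variability of mean absolute frame-to-frame differences (temporal artifact).
          diffs = [np.abs(a.astype(float) - b.astype(float)).mean()
                   for a, b in zip(frames[:-1], frames[1:])]
          return float(np.var(diffs))

      def no_reference_quality(frames, weights=(0.4, 0.4, 0.2)):
          # Linear pooling of the three impairments; lower scores indicate fewer artifacts.
          b = float(np.mean([blockiness(f) for f in frames]))
          s = float(np.mean([blur(f) for f in frames]))
          j = jerkiness(frames)
          return weights[0] * b + weights[1] * s + weights[2] * j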

  4. Parameter Search Algorithms for Microwave Radar-Based Breast Imaging: Focal Quality Metrics as Fitness Functions.

    PubMed

    O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin

    2017-12-06

    Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
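    A minimal sketch of the idea, assuming a hypothetical reconstruct(eps) routine that beamforms an image for a trial average permittivity: a focal quality metric (here a Tenengrad-style sharpness measure, one of many possible choices) serves as the fitness function in a simple grid search standing in for the paper's parameter search algorithms.

      import numpy as np

      def focal_quality(image):
          # Mean squared gradient magnitude: higher for well-focused images.
          gy, gx = np.gradient(image.astype(float))
          return float(np.mean(gx ** 2 + gy ** 2))

      def estimate_permittivity(reconstruct, candidates):
          # Keep the candidate average permittivity whose reconstructed image is best focused.
          scores = [(focal_quality(reconstruct(eps)), eps) for eps in candidates]
          return max(scores)[1]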

  5. Memory colours and colour quality evaluation of conventional and solid-state lamps.

    PubMed

    Smet, Kevin A G; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Hanselaer, Peter

    2010-12-06

    A colour quality metric based on memory colours is presented. The basic idea is simple. The colour quality of a test source is evaluated as the degree of similarity between the colour appearance of a set of familiar objects and their memory colours. The closer the match, the better the colour quality. This similarity was quantified using a set of similarity distributions obtained by Smet et al. in a previous study. The metric was validated by calculating the Pearson and Spearman correlation coefficients between the metric predictions and the visual appreciation results obtained in a validation experiment conducted by the authors, as well as those obtained in two independent studies. The metric was found to correlate well with the visual appreciation of the lighting quality of the sources used in the three experiments. Its performance was also compared with that of the CIE colour rendering index and the NIST colour quality scale. For all three experiments, the metric was found to be significantly better at predicting the correct visual rank order of the light sources (p < 0.1).

  6. Quality of Information Approach to Improving Source Selection in Tactical Networks

    DTIC Science & Technology

    2017-02-01

    consider the performance of this process based on metrics relating to quality of information: accuracy, timeliness, completeness and reliability. These...that are indicators of that the network is meeting these quality requirements. We study effective data rate, social distance, link integrity and the...utility of information as metrics within a multi-genre network to determine the quality of information of its available sources. This paper proposes a

  7. Semantic Metrics for Analysis of Software

    NASA Technical Reports Server (NTRS)

    Etzkorn, Letha H.; Cox, Glenn W.; Farrington, Phil; Utley, Dawn R.; Ghalston, Sampson; Stein, Cara

    2005-01-01

    A recently conceived suite of object-oriented software metrics focuses on semantic aspects of software, in contradistinction to traditional software metrics, which focus on syntactic aspects of software. Semantic metrics represent a more human-oriented view of software than do syntactic metrics. The semantic metrics of a given computer program are calculated from the output of a knowledge-based analysis of the program, and are substantially more representative of software quality and more readily comprehensible from a human perspective than are the syntactic metrics.

  8. Assessing the quality of restored images in optical long-baseline interferometry

    NASA Astrophysics Data System (ADS)

    Gomes, Nuno; Garcia, Paulo J. V.; Thiébaut, Éric

    2017-03-01

    Assessing the quality of aperture synthesis maps is relevant for benchmarking image reconstruction algorithms, for the scientific exploitation of data from optical long-baseline interferometers, and for the design/upgrade of new/existing interferometric imaging facilities. Although metrics have been proposed in these contexts, no systematic study has been conducted on the selection of a robust metric for quality assessment. This article addresses the question: what is the best metric to assess the quality of a reconstructed image? It starts by considering several metrics and selecting a few based on general properties. Then, a variety of image reconstruction cases are considered. The observational scenarios are phase closure and phase referencing at the Very Large Telescope Interferometer (VLTI), for a combination of two, three, four and six telescopes. End-to-end image reconstruction is accomplished with the MIRA software, and several merit functions are put to test. It is found that convolution by an effective point spread function is required for proper image quality assessment. The effective angular resolution of the images is superior to naive expectation based on the maximum frequency sampled by the array. This is due to the prior information used in the aperture synthesis algorithm and to the nature of the objects considered. The ℓ1-norm is the most robust of all considered metrics, because being linear it is less sensitive to image smoothing by high regularization levels. For the cases considered, this metric allows the implementation of automatic quality assessment of reconstructed images, with a performance similar to human selection.
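    A minimal sketch of an ℓ1-based comparison under the convolution step described above, assuming a Gaussian stands in for the effective point spread function and that both images are flux-normalized before differencing; the real effective PSF depends on the array configuration and the regularization used.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def l1_image_metric(reconstructed, truth, psf_sigma=1.0):
          # Smooth the truth image by an assumed effective PSF, normalize total flux,
          # and take the l1 distance (lower is better).
          blurred = gaussian_filter(truth.astype(float), psf_sigma)
          blurred /= blurred.sum()
          rec = reconstructed.astype(float) / reconstructed.sum()
          return float(np.abs(rec - blurred).sum())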

  9. EVALUATION OF METRIC PRECISION FOR A RIPARIAN FOREST SURVEY

    EPA Science Inventory

    This paper evaluates the performance of a protocol to monitor riparian forests in western Oregon based on the quality of the data obtained from a recent field survey. Precision and accuracy are the criteria used to determine the quality of 19 field metrics. The field survey con...

  10. The compressed average image intensity metric for stereoscopic video quality assessment

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2016-09-01

    The following article presents the design, creation and testing of a new metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis, setting its core feature and functionality to serve as a versatile tool for effective 3DTV service quality assessment. Being an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under rigorous provider evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines over a selected set of stereoscopic video samples. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  11. Evaluating which plan quality metrics are appropriate for use in lung SBRT.

    PubMed

    Yaparpalvi, Ravindra; Garg, Madhur K; Shen, Jin; Bodner, William R; Mynampati, Dinesh K; Gafar, Aleiya; Kuo, Hsiang-Chi; Basavatia, Amar K; Ohri, Nitin; Hong, Linda X; Kalnicki, Shalom; Tome, Wolfgang A

    2018-02-01

    Several dose metrics in the categories of homogeneity, coverage, conformity and gradient have been proposed in the literature for evaluating treatment plan quality. In this study, we applied these metrics to characterize and identify the plan quality metrics that would merit plan quality assessment in lung stereotactic body radiation therapy (SBRT) dose distributions. Treatment plans of 90 lung SBRT patients, comprising 91 targets, treated in our institution were retrospectively reviewed. Dose calculations were performed using the anisotropic analytical algorithm (AAA) with heterogeneity correction. A literature review of published plan quality metrics in the categories of coverage, homogeneity, conformity and gradient was performed. For each patient, plan quality metric values were quantified and analysed using dose-volume histogram data. For the study, the Radiation Therapy Oncology Group (RTOG)-defined plan quality metric values were: coverage (0.90 ± 0.08); homogeneity (1.27 ± 0.07); conformity (1.03 ± 0.07) and gradient (4.40 ± 0.80). Geometric conformity strongly correlated with conformity index (p < 0.0001). Gradient measures strongly correlated with target volume (p < 0.0001). The conformity guidelines for prescribed dose advocated by the RTOG lung SBRT protocol were met in ≥94% of cases in all categories. The proportions of total lung volume receiving doses of 20 Gy and 5 Gy (V20 and V5) were mean 4.8% (±3.2) and 16.4% (±9.2), respectively. Based on our analyses, we recommend the following metrics as appropriate surrogates for establishing SBRT lung plan quality guidelines: coverage % (ICRU 62), conformity (CN or CIPaddick) and gradient (R50%). Furthermore, we strongly recommend that RTOG lung SBRT protocols adopt either CN or CIPaddick in place of the prescription isodose to target volume ratio for conformity index evaluation. Advances in knowledge: Our study metrics are valuable tools for establishing lung SBRT plan quality guidelines.
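    For reference, the sketch below computes the coverage, Paddick conformation number, and R50% gradient measure from dose-volume figures; the volume values are hypothetical, and exact protocol definitions should be checked against ICRU 62 and the RTOG lung SBRT protocols.

      def coverage(target_vol_rx, target_vol):
          # Fraction of the target volume receiving at least the prescription dose.
          return target_vol_rx / target_vol

      def paddick_cn(target_vol_rx, target_vol, piv):
          # Paddick conformation number: TV_PIV^2 / (TV x PIV).
          return target_vol_rx ** 2 / (target_vol * piv)

      def gradient_r50(half_rx_isodose_vol, target_vol):
          # R50%: volume enclosed by half the prescription isodose over the target volume.
          return half_rx_isodose_vol / target_vol

      # Hypothetical volumes in cm^3 read from a dose-volume histogram:
      tv, tv_piv, piv, piv_50 = 25.0, 23.5, 26.0, 108.0
      print(coverage(tv_piv, tv), paddick_cn(tv_piv, tv, piv), gradient_r50(piv_50, tv))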

  12. Spatial-temporal distortion metric for in-service quality monitoring of any digital video system

    NASA Astrophysics Data System (ADS)

    Wolf, Stephen; Pinson, Margaret H.

    1999-11-01

    Many organizations have focused on developing digital video quality metrics which produce results that accurately emulate subjective responses. However, to be widely applicable a metric must also work over a wide range of quality and be useful for in-service quality monitoring. The Institute for Telecommunication Sciences (ITS) has developed spatial-temporal distortion metrics that meet all of these requirements. These objective metrics are described in detail and have a number of interesting properties, including utilization of (1) spatial activity filters which emphasize long edges on the order of 10 arc min while simultaneously performing large amounts of noise suppression, (2) the angular direction of the spatial gradient, (3) spatial-temporal compression factors of at least 384:1 (spatial compression of at least 64:1 and temporal compression of at least 6:1), and (4) simple perceptibility thresholds and spatial-temporal masking functions. Results are presented that compare the objective metric values with mean opinion scores from a wide range of subjective databases spanning many different scenes, systems, bit-rates, and applications.
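    As a loose illustration only (not the ITS filters themselves), the sketch below extracts a gradient-based spatial activity feature with a crude perceptibility threshold and the angular direction of the spatial gradient, then compares the pooled features between an original and a processed frame.

      import numpy as np
      from scipy.ndimage import sobel

      def spatial_activity_features(frame, threshold=12.0):
          # Gradient magnitude and an orientation feature pooled over "active" pixels.
          f = frame.astype(float)
          gx, gy = sobel(f, axis=1), sobel(f, axis=0)
          magnitude = np.hypot(gx, gy)
          active = magnitude > threshold            # crude perceptibility threshold
          if not active.any():
              return 0.0, 0.0
          diagonal = np.abs(np.sin(2 * np.arctan2(gy, gx)[active]))  # 0 = H/V edge, 1 = diagonal
          return float(magnitude[active].mean()), float(diagonal.mean())

      def spatial_distortion(original_frame, processed_frame):
          # Distance between pooled spatial-activity features of the two frames.
          o = np.array(spatial_activity_features(original_frame))
          p = np.array(spatial_activity_features(processed_frame))
          return float(np.abs(o - p).sum())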

  13. Performance evaluation of no-reference image quality metrics for face biometric images

    NASA Astrophysics Data System (ADS)

    Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick

    2018-03-01

    The accuracy of face recognition systems is significantly affected by the quality of face sample images. Recently established standardization proposes several important aspects for the assessment of face sample quality. There are many existing no-reference image quality metrics (IQMs) that are able to assess natural image quality by taking into account image-based quality attributes similar to those introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality according to the system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them fail to assess face sample quality. Retraining an original IQM using a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodality biometric IQMs.

  14. A Validation of Object-Oriented Design Metrics as Quality Indicators

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio

    1997-01-01

    This paper presents the results of a study in which we empirically investigated a suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to earlier work in which the same suite of metrics was used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.

  15. Quality Measures for Dialysis: Time for a Balanced Scorecard.

    PubMed

    Kliger, Alan S

    2016-02-05

    Recent federal legislation establishes a merit-based incentive payment system for physicians, with a scorecard for each professional. The Centers for Medicare and Medicaid Services evaluate quality of care with clinical performance measures and have used these metrics for public reporting and payment to dialysis facilities. Similar metrics may be used for the future merit-based incentive payment system. In nephrology, most clinical performance measures measure processes and intermediate outcomes of care. These metrics were developed from population studies of best practice and do not identify opportunities for individualizing care on the basis of patient characteristics and individual goals of treatment. The In-Center Hemodialysis (ICH) Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey examines patients' perception of care and has entered the arena to evaluate quality of care. A balanced scorecard of quality performance should include three elements: population-based best clinical practice, patient perceptions, and individually crafted patient goals of care. Copyright © 2016 by the American Society of Nephrology.

  16. Algal Attributes: An Autecological Classification of Algal Taxa Collected by the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Porter, Stephen D.

    2008-01-01

    Algae are excellent indicators of water-quality conditions, notably nutrient and organic enrichment, and also are indicators of major ion, dissolved oxygen, and pH concentrations and stream microhabitat conditions. The autecology, or physiological optima and tolerance, of algal species for various water-quality contaminants and conditions is relatively well understood for certain groups of freshwater algae, notably diatoms. However, applications of autecological information for water-quality assessments have been limited because of challenges associated with compiling autecological literature from disparate sources, tracking name changes for a large number of algal species, and creating an autecological data base from which algal-indicator metrics can be calculated. A comprehensive summary of algal autecological attributes for North American streams and rivers does not exist. This report describes a large, digital data file containing 28,182 records for 5,939 algal taxa, generally species or variety, collected by the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. The data file includes 37 algal attributes classified by over 100 algal-indicator codes or metrics that can be calculated easily with readily available software. Algal attributes include qualitative classifications based on European and North American autecological literature, and semi-quantitative, weighted-average regression approaches for estimating optima using regional and national NAWQA data. Applications of algal metrics in water-quality assessments are discussed and national quartile distributions of metric scores are shown for selected indicator metrics.

  17. Initial Ada components evaluation

    NASA Technical Reports Server (NTRS)

    Moebes, Travis

    1989-01-01

    The SAIC has the responsibility for independent test and validation of the SSE. They have been using a mathematical functions library package implemented in Ada to test the SSE IV and V process. The library package consists of elementary mathematical functions and is both machine and accuracy independent. The SSE Ada components evaluation includes code complexity metrics based on Halstead's software science metrics and McCabe's measure of cyclomatic complexity. Halstead's metrics are based on the number of operators and operands on a logical unit of code and are compiled from the number of distinct operators, distinct operands, and total number of occurrences of operators and operands. These metrics give an indication of the physical size of a program in terms of operators and operands and are used diagnostically to point to potential problems. McCabe's Cyclomatic Complexity Metrics (CCM) are compiled from flow charts transformed to equivalent directed graphs. The CCM is a measure of the total number of linearly independent paths through the code's control structure. These metrics were computed for the Ada mathematical functions library using Software Automated Verification and Validation (SAVVAS), the SSE IV and V tool. A table with selected results was shown, indicating that most of these routines are of good quality. Thresholds for the Halstead measures indicate poor quality if the length metric exceeds 260 or difficulty is greater than 190. The McCabe CCM indicated a high quality of software products.
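    The Halstead quantities named above follow directly from operator and operand counts; the sketch below computes length, volume, and difficulty for a toy token list and applies the quality thresholds quoted in the abstract (length > 260 or difficulty > 190 indicating poor quality). The token lists are illustrative, not drawn from the Ada library.

      import math

      def halstead(operators, operands):
          # n1/n2: distinct operators/operands; N1/N2: total occurrences of each.
          n1, n2 = len(set(operators)), len(set(operands))
          N1, N2 = len(operators), len(operands)
          length = N1 + N2
          vocabulary = n1 + n2
          volume = length * math.log2(vocabulary) if vocabulary > 1 else 0.0
          difficulty = (n1 / 2.0) * (N2 / n2) if n2 else 0.0
          return length, volume, difficulty

      length, volume, difficulty = halstead(
          operators=[":=", "+", "*", ":=", "+"],
          operands=["x", "y", "2", "x", "z", "x"])
      poor_quality = length > 260 or difficulty > 190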

  18. Safety considerations in providing allergen immunotherapy in the office.

    PubMed

    Mattos, Jose L; Lee, Stella

    2016-06-01

    This review highlights the risks of allergy immunotherapy, methods to improve the quality and safety of allergy treatment, the current status of allergy quality metrics, and the future of quality measurement. In the current healthcare environment, the emphasis on outcomes measurement is increasing, and providers must be better equipped in the development, measurement, and reporting of safety and quality measures. Immunotherapy offers the only potential cure for allergic disease and asthma. Although well tolerated and effective, immunotherapy can be associated with serious consequences, including anaphylaxis and death. Many predisposing factors and errors that lead to serious systemic reactions are preventable, and the evaluation and implementation of quality measures are crucial to developing a safe immunotherapy practice. Although quality metrics for immunotherapy are in their infancy, they will become increasingly sophisticated, and providers will face increased pressure to deliver safe, high-quality, patient-centered, evidence-based, and efficient allergy care. The establishment of safety in the allergy office involves recognition of potential risk factors for anaphylaxis, the development and measurement of quality metrics, and changing systems-wide practices if needed. Quality improvement is a continuous process, and although national allergy-specific quality metrics do not yet exist, they are in development.

  19. A comparative study of multi-focus image fusion validation metrics

    NASA Astrophysics Data System (ADS)

    Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael

    2016-05-01

    Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels can create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires the evaluation of the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information theory-based, image feature-based and structural similarity-based metrics, have been developed to accomplish such comparisons. However, accurate assessment of visual quality is hard to scale and requires validation of these metrics for different types of applications. In order to do this, human perception based validation methods have been developed, particularly dealing with the use of receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal to noise ratio (PSNR).
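    For concreteness, the spatial frequency (SF) metric mentioned above can be computed as below; the fusion_gain helper, which compares the fused image against the sharpest input, is an illustrative assumption rather than the validation procedure used in the study.

      import numpy as np

      def spatial_frequency(image):
          # SF = sqrt(RF^2 + CF^2), from row and column first-difference energies.
          img = image.astype(float)
          rf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
          cf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
          return float(np.sqrt(rf ** 2 + cf ** 2))

      def fusion_gain(fused, sources):
          # Ratio of fused-image SF to the best single-focus input SF (> 1 is desirable).
          return spatial_frequency(fused) / max(spatial_frequency(s) for s in sources)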

  20. Essential metrics for assessing sex & gender integration in health research proposals involving human participants.

    PubMed

    Day, Suzanne; Mason, Robin; Tannenbaum, Cara; Rochon, Paula A

    2017-01-01

    Integrating sex and gender in health research is essential to produce the best possible evidence to inform health care. Comprehensive integration of sex and gender requires considering these variables from the very beginning of the research process, starting at the proposal stage. To promote excellence in sex and gender integration, we have developed a set of metrics to assess the quality of sex and gender integration in research proposals. These metrics are designed to assist both researchers in developing proposals and reviewers in making funding decisions. We developed this tool through an iterative three-stage method involving 1) review of existing sex and gender integration resources and initial metrics design, 2) expert review and feedback via anonymous online survey (Likert scale and open-ended questions), and 3) analysis of feedback data and collective revision of the metrics. We received feedback on the initial metrics draft from 20 reviewers with expertise in conducting sex- and/or gender-based health research. The majority of reviewers responded positively to questions regarding the utility, clarity and completeness of the metrics, and all reviewers provided responses to open-ended questions about suggestions for improvements. Coding and analysis of responses identified three domains for improvement: clarifying terminology, refining content, and broadening applicability. Based on this analysis we revised the metrics into the Essential Metrics for Assessing Sex and Gender Integration in Health Research Proposals Involving Human Participants, which outlines criteria for excellence within each proposal component and provides illustrative examples to support implementation. By enhancing the quality of sex and gender integration in proposals, the metrics will help to foster comprehensive, meaningful integration of sex and gender throughout each stage of the research process, resulting in better quality evidence to inform health care for all.

  1. Essential metrics for assessing sex & gender integration in health research proposals involving human participants

    PubMed Central

    Mason, Robin; Tannenbaum, Cara; Rochon, Paula A.

    2017-01-01

    Integrating sex and gender in health research is essential to produce the best possible evidence to inform health care. Comprehensive integration of sex and gender requires considering these variables from the very beginning of the research process, starting at the proposal stage. To promote excellence in sex and gender integration, we have developed a set of metrics to assess the quality of sex and gender integration in research proposals. These metrics are designed to assist both researchers in developing proposals and reviewers in making funding decisions. We developed this tool through an iterative three-stage method involving 1) review of existing sex and gender integration resources and initial metrics design, 2) expert review and feedback via anonymous online survey (Likert scale and open-ended questions), and 3) analysis of feedback data and collective revision of the metrics. We received feedback on the initial metrics draft from 20 reviewers with expertise in conducting sex- and/or gender-based health research. The majority of reviewers responded positively to questions regarding the utility, clarity and completeness of the metrics, and all reviewers provided responses to open-ended questions about suggestions for improvements. Coding and analysis of responses identified three domains for improvement: clarifying terminology, refining content, and broadening applicability. Based on this analysis we revised the metrics into the Essential Metrics for Assessing Sex and Gender Integration in Health Research Proposals Involving Human Participants, which outlines criteria for excellence within each proposal component and provides illustrative examples to support implementation. By enhancing the quality of sex and gender integration in proposals, the metrics will help to foster comprehensive, meaningful integration of sex and gender throughout each stage of the research process, resulting in better quality evidence to inform health care for all. PMID:28854192

  2. National evaluation of multidisciplinary quality metrics for head and neck cancer.

    PubMed

    Cramer, John D; Speedy, Sedona E; Ferris, Robert L; Rademaker, Alfred W; Patel, Urjeet A; Samant, Sandeep

    2017-11-15

    The National Quality Forum has endorsed quality-improvement measures for multiple cancer types that are being developed into actionable tools to improve cancer care. No nationally endorsed quality metrics currently exist for head and neck cancer. The authors identified patients with surgically treated, invasive, head and neck squamous cell carcinoma in the National Cancer Data Base from 2004 to 2014 and compared the rate of adherence to 5 different quality metrics and whether compliance with these quality metrics impacted overall survival. The metrics examined included negative surgical margins, neck dissection lymph node (LN) yield ≥ 18, appropriate adjuvant radiation, appropriate adjuvant chemoradiation, adjuvant therapy within 6 weeks, as well as overall quality. In total, 76,853 eligible patients were identified. There was substantial variability in patient-level adherence, which was 80% for negative surgical margins, 73.1% for neck dissection LN yield, 69% for adjuvant radiation, 42.6% for adjuvant chemoradiation, and 44.5% for adjuvant therapy within 6 weeks. Risk-adjusted Cox proportional-hazard models indicated that all metrics were associated with a reduced risk of death: negative margins (hazard ratio [HR] 0.73; 95% confidence interval [CI], 0.71-0.76), LN yield ≥ 18 (HR, 0.93; 95% CI, 0.89-0.96), adjuvant radiation (HR, 0.67; 95% CI, 0.64-0.70), adjuvant chemoradiation (HR, 0.84; 95% CI, 0.79-0.88), and adjuvant therapy ≤6 weeks (HR, 0.92; 95% CI, 0.89-0.96). Patients who received high-quality care had a 19% reduced adjusted hazard of mortality (HR, 0.81; 95% CI, 0.79-0.83). Five head and neck cancer quality metrics were identified that have substantial variability in adherence and meaningfully impact overall survival. These metrics are appropriate candidates for national adoption. Cancer 2017;123:4372-81. © 2017 American Cancer Society.

  3. Algal bioassessment metrics for wadeable streams and rivers of Maine, USA

    USGS Publications Warehouse

    Danielson, Thomas J.; Loftin, Cynthia S.; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth

    2011-01-01

    Many state water-quality agencies use biological assessment methods based on lotic fish and macroinvertebrate communities, but relatively few states have incorporated algal multimetric indices into monitoring programs. Algae are good indicators for monitoring water quality because they are sensitive to many environmental stressors. We evaluated benthic algal community attributes along a landuse gradient affecting wadeable streams and rivers in Maine, USA, to identify potential bioassessment metrics. We collected epilithic algal samples from 193 locations across the state. We computed weighted-average optima for common taxa for total P, total N, specific conductance, % impervious cover, and % developed watershed, which included all land use that is no longer forest or wetland. We assigned Maine stream tolerance values and categories (sensitive, intermediate, tolerant) to taxa based on their optima and responses to watershed disturbance. We evaluated performance of algal community metrics used in multimetric indices from other regions and novel metrics based on Maine data. Metrics specific to Maine data, such as the relative richness of species characterized as being sensitive in Maine, were more correlated with % developed watershed than most metrics used in other regions. Few community-structure attributes (e.g., species richness) were useful metrics in Maine. Performance of algal bioassessment models would be improved if metrics were evaluated with attributes of local data before inclusion in multimetric indices or statistical models. © 2011 by The North American Benthological Society.

  4. Pressure-specific and multiple pressure response of fish assemblages in European running waters

    PubMed Central

    Schinegger, Rafaela; Trautwein, Clemens; Schmutz, Stefan

    2013-01-01

    We classified homogenous river types across Europe and searched for fish metrics qualified to show responses to specific pressures (hydromorphological pressures or water quality pressures) vs. multiple pressures in these river types. We analysed fish taxa lists from 3105 sites in 16 ecoregions and 14 countries. Sites were pre-classified for 15 selected pressures to separate unimpacted from impacted sites. Hierarchical cluster analysis was used to split unimpacted sites into four homogenous river types based on species composition and geographical location. Classification trees were employed to predict associated river types for impacted sites with four environmental variables. We defined a set of 129 candidate fish metrics to select the best reacting metrics for each river type. The candidate metrics represented tolerances/intolerances of species associated with six metric types: habitat, migration, water quality sensitivity, reproduction, trophic level and biodiversity. The results showed that 17 uncorrelated metrics reacted to pressures in the four river types. Metrics responded specifically to water quality pressures and hydromorphological pressures in three river types and to multiple pressures in all river types. Four metrics associated with water quality sensitivity showed a significant reaction in up to three river types, whereas 13 metrics were specific to individual river types. Our results contribute to the better understanding of fish assemblage response to human pressures at a pan-European scale. The results are especially important for European river management and restoration, as it is necessary to uncover underlying processes and effects of human pressures on aquatic communities. PMID:24003262

  5. Floristic Quality Index for woodland ground flora restoration: Utility and effectiveness in a fire-managed landscape

    Treesearch

    Calvin J. Maginel; Benjamin O. Knapp; John M. Kabrick; Rose-Marie Muzika

    2016-01-01

    Monitoring is a critical component of ecological restoration and requires the use of metrics that are meaningful and interpretable. We analyzed the effectiveness of the Floristic Quality Index (FQI), a vegetative community metric based on species richness and the level of sensitivity to anthropogenic disturbance of individual species present (Coefficient of...

  6. A laser beam quality definition based on induced temperature rise.

    PubMed

    Miller, Harold C

    2012-12-17

    Laser beam quality metrics like M² can be used to describe the spot sizes and propagation behavior of a wide variety of non-ideal laser beams. However, for beams that have been diffracted by limiting apertures in the near-field, or those with unusual near-field profiles, the conventional metrics can lead to an inconsistent or incomplete description of far-field performance. This paper motivates an alternative laser beam quality definition that can be used with any beam. The approach uses a consideration of the intrinsic ability of a laser beam profile to heat a material. Comparisons are made with conventional beam quality metrics. An analysis on an asymmetric Gaussian beam is used to establish a connection with the invariant beam propagation ratio.

  7. The Albuquerque Seismological Laboratory Data Quality Analyzer

    NASA Astrophysics Data System (ADS)

    Ringler, A. T.; Hagerty, M.; Holland, J.; Gee, L. S.; Wilson, D.

    2013-12-01

    The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several efforts underway to improve data quality at its stations. The Data Quality Analyzer (DQA) is one such development. The DQA is designed to characterize station data quality in a quantitative and automated manner. Station quality is based on the evaluation of various metrics, such as timing quality, noise levels, sensor coherence, and so on. These metrics are aggregated into a measurable grade for each station. The DQA consists of a website, a metric calculator (Seedscan), and a PostgreSQL database. The website allows the user to make requests for various time periods, review specific networks and stations, adjust weighting of the station's grade, and plot metrics as a function of time. The website dynamically loads all station data from a PostgreSQL database. The database is central to the application; it acts as a hub where metric values and limited station descriptions are stored. Data is stored at the level of one sensor's channel per day. The database is populated by Seedscan. Seedscan reads and processes miniSEED data, to generate metric values. Seedscan, written in Java, compares hashes of metadata and data to detect changes and perform subsequent recalculations. This ensures that the metric values are up to date and accurate. Seedscan can be run in a scheduled task or on demand by way of a config file. It will compute metrics specified in its configuration file. While many metrics are currently in development, some are completed and being actively used. These include: availability, timing quality, gap count, deviation from the New Low Noise Model, deviation from a station's noise baseline, inter-sensor coherence, and data-synthetic fits. In all, 20 metrics are planned, but any number could be added. ASL is actively using the DQA on a daily basis for station diagnostics and evaluation. As Seedscan is scheduled to run every night, data quality analysts are able to then use the website to diagnose changes in noise levels or other anomalous data. This allows for errors to be corrected quickly and efficiently. The code is designed to be flexible for adding metrics and portable for use in other networks. We anticipate further development of the DQA by improving the existing web-interface, adding more metrics, adding an interface to facilitate the verification of historic station metadata and performance, and an interface to allow better monitoring of data quality goals.
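    A minimal sketch of the grade aggregation step, assuming each metric has already been scaled to 0-100 for a station; the metric names, scales, and weights here are illustrative and not the DQA's actual configuration.

      def station_grade(metric_scores, weights):
          # Weighted average of per-metric scores; weights can be adjusted by the user.
          total = sum(weights[name] for name in metric_scores)
          return sum(metric_scores[name] * weights[name] for name in metric_scores) / total

      scores = {"availability": 99.2, "timing_quality": 97.5,
                "gap_count": 92.0, "nlnm_deviation": 88.4}
      weights = {"availability": 2.0, "timing_quality": 1.0,
                 "gap_count": 1.0, "nlnm_deviation": 1.5}
      print(station_grade(scores, weights))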

  8. Adapting the ISO 20462 softcopy ruler method for online image quality studies

    NASA Astrophysics Data System (ADS)

    Burns, Peter D.; Phillips, Jonathan B.; Williams, Don

    2013-01-01

    In this paper we address the problem of image quality assessment with no-reference metrics, focusing on JPEG-corrupted images. In general, no-reference metrics do not perform equally well over the full range of distortions or across different image contents. The crosstalk between content and distortion signals influences human perception. We propose two strategies to improve the correlation between subjective and objective quality data. The first strategy is based on grouping the images according to their spatial complexity. The second is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson correlation coefficient.

  9. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  10. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE PAGES

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir; ...

    2017-08-19

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  11. Empirical Assessment of the Impact of Low-Cost Generic Programs on Adherence-Based Quality Measures

    PubMed Central

    Pauly, Nathan J.; Talbert, Jeffery C.; Brown, Joshua D.

    2017-01-01

    In the United States, federally-funded health plans are mandated to measure the quality of care. Adherence-based medication quality metrics depend on the completeness of administrative claims data for accurate measurement. Low-cost generic programs (LCGPs) cause medication fills to be missing from claims data, as these medications are not adjudicated through a patient's insurance. This study sought to assess the magnitude of the impact of LCGPs on these quality measures. Data from the 2012–2013 Medical Expenditure Panel Survey (MEPS) were used. Medication fills for select medication classes were classified as LCGP fills, and individuals were classified as never, sometimes, or always users of LCGPs. Individuals were classified based on insurance type (private, Medicare, Medicaid, dual-eligible). The proportion of days covered (PDC) was calculated for each medication class, and the proportion of users with PDC ≥ 0.80 was reported both as an observed metric (what would be calculated from claims data alone) and as a true metric that included the medication fills missing because of LCGPs. True measures of adherence were higher than the observed measures. The magnitude of the effect was highest for private insurance and for medication classes utilized more often through LCGPs. Thus, medication-based quality measures may be underestimated due to LCGPs. PMID:28970427
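    The sketch below shows one common way to compute the proportion of days covered and how an LCGP fill missing from claims can flip the PDC ≥ 0.80 flag; the dates and days-supply values are hypothetical.

      from datetime import date, timedelta

      def proportion_of_days_covered(fills, period_start, period_end):
          # fills: list of (fill_date, days_supply); PDC = covered days / period days.
          covered = set()
          for fill_date, days_supply in fills:
              for offset in range(days_supply):
                  day = fill_date + timedelta(days=offset)
                  if period_start <= day <= period_end:
                      covered.add(day)
          period_days = (period_end - period_start).days + 1
          return len(covered) / period_days

      claims_fills = [(date(2013, 1, 1), 30), (date(2013, 2, 5), 30)]
      lcgp_fills = [(date(2013, 3, 10), 30)]        # cash fill, absent from claims
      start, end = date(2013, 1, 1), date(2013, 3, 31)
      observed_pdc = proportion_of_days_covered(claims_fills, start, end)
      true_pdc = proportion_of_days_covered(claims_fills + lcgp_fills, start, end)
      print(observed_pdc >= 0.80, true_pdc >= 0.80)   # observed misses the threshold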

  12. Quality assessment of color images based on the measure of just noticeable color difference

    NASA Astrophysics Data System (ADS)

    Chou, Chun-Hsien; Hsu, Yun-Hsiang

    2014-01-01

    Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information of the reproduced images. An accurate objective image quality assessment (IQA) method is expected to give assessment results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric for assessing the quality of grayscale images to each of the three color channels of the color image, neglecting the correlation among the three channels. In this paper, a metric for assessing color image quality is proposed, in which a model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility thresholds of distortion inherent in each color pixel. With the estimated visibility thresholds of distortion, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according to a perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference enumerated by CIEDE2000 into an objective score of perceptual quality. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the quality of color images in terms of the correlation between objective scores and subjective evaluation.
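    A rough sketch of the pooling idea, using scikit-image's CIEDE2000 implementation and a caller-supplied per-pixel visibility-threshold map standing in for the VJNCD model (which this sketch does not implement); only differences exceeding the local threshold contribute to the score.

      import numpy as np
      from skimage.color import rgb2lab, deltaE_ciede2000

      def mean_perceptible_distortion(reference_rgb, distorted_rgb, jnd_map):
          # Per-pixel CIEDE2000 difference, suppressed below the local visibility threshold.
          delta_e = deltaE_ciede2000(rgb2lab(reference_rgb), rgb2lab(distorted_rgb))
          perceptible = np.maximum(delta_e - jnd_map, 0.0)
          return float(perceptible.mean())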

  13. Quality Metrics in Neonatal and Pediatric Critical Care Transport: A National Delphi Project.

    PubMed

    Schwartz, Hamilton P; Bigham, Michael T; Schoettker, Pamela J; Meyer, Keith; Trautman, Michael S; Insoft, Robert M

    2015-10-01

    The transport of neonatal and pediatric patients to tertiary care facilities for specialized care demands monitoring the quality of care delivered during transport and its impact on patient outcomes. In 2011, pediatric transport teams in Ohio met to identify quality indicators permitting comparisons among programs. However, no set of national consensus quality metrics exists for benchmarking transport teams. The aim of this project was to achieve national consensus on appropriate neonatal and pediatric transport quality metrics. Modified Delphi technique. The first round of consensus determination was via electronic mail survey, followed by rounds of consensus determination in-person at the American Academy of Pediatrics Section on Transport Medicine's 2012 Quality Metrics Summit. All attendees of the American Academy of Pediatrics Section on Transport Medicine Quality Metrics Summit, conducted on October 21-23, 2012, in New Orleans, LA, were eligible to participate. Candidate quality metrics were identified through literature review and those metrics currently tracked by participating programs. Participants were asked in a series of rounds to identify "very important" quality metrics for transport. It was determined a priori that consensus on a metric's importance was achieved when at least 70% of respondents were in agreement. This is consistent with other Delphi studies. Eighty-two candidate metrics were considered initially. Ultimately, 12 metrics achieved consensus as "very important" to transport. These include metrics related to airway management, team mobilization time, patient and crew injuries, and adverse patient care events. Definitions were assigned to the 12 metrics to facilitate uniform data tracking among programs. The authors succeeded in achieving consensus among a diverse group of national transport experts on 12 core neonatal and pediatric transport quality metrics. We propose that transport teams across the country use these metrics to benchmark and guide their quality improvement activities.

  14. Metrication report to the Congress. 1991 activities and 1992 plans

    NASA Technical Reports Server (NTRS)

    1991-01-01

    During 1991, NASA approved a revised metric use policy and developed a NASA Metric Transition Plan. This Plan targets the end of 1995 for completion of NASA's metric initiatives. This Plan also identifies future programs that NASA anticipates will use the metric system of measurement. Field installations began metric transition studies in 1991 and will complete them in 1992. Half of NASA's Space Shuttle payloads for 1991, and almost all such payloads for 1992, have some metric-based elements. In 1992, NASA will begin assessing requirements for space-quality piece parts fabricated to U.S. metric standards, leading to development and qualification of high priority parts.

  15. "Can you see me now?" An objective metric for predicting intelligibility of compressed American Sign Language video

    NASA Astrophysics Data System (ADS)

    Ciaramello, Francis M.; Hemami, Sheila S.

    2007-02-01

    For members of the Deaf Community in the United States, current communication tools include TTY/TDD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
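    A minimal sketch of the region-weighted pooling idea: distortion is measured separately over face, hand, and remaining pixels (masks assumed to come from skin-color segmentation) and combined with fixed weights, which are illustrative rather than the optimal weights derived in the paper.

      import numpy as np

      def asl_intelligibility_distortion(reference, distorted, face_mask, hand_mask,
                                         w_face=0.6, w_hand=0.3, w_rest=0.1):
          # Weighted regional mean squared error; lower values suggest higher intelligibility.
          err = (reference.astype(float) - distorted.astype(float)) ** 2
          rest_mask = ~(face_mask | hand_mask)
          return (w_face * err[face_mask].mean() +
                  w_hand * err[hand_mask].mean() +
                  w_rest * err[rest_mask].mean())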

  16. Application of process mining to assess the data quality of routinely collected time-based performance data sourced from electronic health records by validating process conformance.

    PubMed

    Perimal-Lewis, Lua; Teubner, David; Hakendorf, Paul; Horwood, Chris

    2016-12-01

    Effective and accurate use of routinely collected health data to produce Key Performance Indicator reporting is dependent on the underlying data quality. In this research, Process Mining methodology and tools were leveraged to assess the data quality of time-based Emergency Department data sourced from electronic health records. This research was done working closely with the domain experts to validate the process models. The hospital patient journey model was used to assess flow abnormalities which resulted from incorrect timestamp data used in time-based performance metrics. The research demonstrated process mining as a feasible methodology to assess data quality of time-based hospital performance metrics. The insight gained from this research enabled appropriate corrective actions to be put in place to address the data quality issues. © The Author(s) 2015.
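
    A simple timestamp-conformance check of the kind described here can be sketched with pandas. The event names, column names, and expected order below are hypothetical; the study used formal process-mining models validated with domain experts rather than this hard-coded sequence.

    ```python
    import pandas as pd

    # Hypothetical patient-journey steps; the real process model came from domain experts.
    EXPECTED_ORDER = ["arrival", "triage", "seen_by_doctor", "disposition", "departure"]

    def flag_timestamp_anomalies(events: pd.DataFrame) -> pd.DataFrame:
        """Return journeys whose event timestamps violate the expected process order.

        `events` has columns: patient_id, event, timestamp (all hypothetical names).
        """
        rank = {e: i for i, e in enumerate(EXPECTED_ORDER)}
        df = events.dropna(subset=["timestamp"]).copy()
        df["rank"] = df["event"].map(rank)
        df = df.sort_values(["patient_id", "timestamp"])
        # Within each journey, the process rank should never decrease over time.
        df["violation"] = df.groupby("patient_id")["rank"].diff() < 0
        bad_ids = df.loc[df["violation"], "patient_id"].unique()
        return df[df["patient_id"].isin(bad_ids)]
    ```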

  17. Validation of an image-based technique to assess the perceptual quality of clinical chest radiographs with an observer study

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Choudhury, Kingshuk R.; McAdams, H. Page; Foos, David H.; Samei, Ehsan

    2014-03-01

    We previously proposed a novel image-based quality assessment technique to assess the perceptual quality of clinical chest radiographs. In this paper, an observer study was designed and conducted to systematically validate this technique. Ten metrics were involved in the observer study, i.e., lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. For each metric, three tasks were successively presented to the observers. In each task, six ROI images were randomly presented in a row and observers were asked to rank the images based only on the designated quality attribute and to disregard the other qualities. A range slider above the images was used by observers to indicate the acceptable range for the corresponding perceptual attribute. Five board-certified radiologists from Duke participated in this observer study on a DICOM-calibrated diagnostic display workstation under low ambient lighting conditions. The observer data were analyzed in terms of the correlations between the observer ranking orders and the algorithmic ranking orders. Based on the collected acceptable ranges, quality consistency ranges were statistically derived. The observer study showed that, for each metric, the averaged ranking orders of the participating observers were strongly correlated with the algorithmic orders. For the lung grey level, the observer ranking orders accorded completely with the algorithmic ranking orders. The quality consistency ranges derived from this observer study were close to those derived from our previous study. The observer study indicates that the proposed image-based quality assessment technique provides a robust reflection of the perceptual image quality of clinical chest radiographs. The derived quality consistency ranges can be used to automatically predict the acceptability of a clinical chest radiograph.

  18. Objective evaluation of interior noise booming in a passenger car based on sound metrics and artificial neural networks.

    PubMed

    Lee, Hyun-Ho; Lee, Sang-Kwon

    2009-09-01

    Booming sound is one of the important sounds in a passenger car. The aim of this paper is to develop an objective evaluation method for interior booming sound. The method, called the booming index, is based on sound metrics and an ANN (artificial neural network). Previous work maintained that booming sound quality is related to loudness and sharpness, the sound metrics used in psychoacoustics, and the booming index was accordingly developed using the loudness and sharpness of a signal covering the whole frequency range between 20 Hz and 20 kHz. In the present paper, the booming sound quality was found to be effectively related to the loudness at frequencies below 200 Hz; thus the booming index is updated by using the loudness of the signal low-pass filtered at 200 Hz. The relationship between the booming index and the sound metrics is identified by an ANN. The updated booming index has been successfully applied to the objective evaluation of the booming sound quality of mass-produced passenger cars.
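
    A rough sketch of the updated pipeline follows, with two loud caveats: the RMS level of the low-pass-filtered signal is only a crude stand-in for psychoacoustic loudness, and the small scikit-learn network is an arbitrary choice rather than the ANN architecture used in the paper.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt
    from sklearn.neural_network import MLPRegressor

    def low_band_level(signal, fs, cutoff=200.0):
        """RMS level (dB) of the signal below ~200 Hz; a crude proxy for low-frequency loudness."""
        sos = butter(4, cutoff, btype="low", fs=fs, output="sos")
        low = sosfilt(sos, signal)
        return 20.0 * np.log10(np.sqrt(np.mean(low ** 2)) + 1e-12)

    def fit_booming_index(features, subjective_ratings):
        """Fit a small ANN mapping sound-metric features (one row per recording,
        e.g. [low-band level, sharpness]) to jury ratings of booming."""
        ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        return ann.fit(features, subjective_ratings)
    ```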

  19. qcML: An Exchange Format for Quality Control Metrics from Mass Spectrometry Experiments*

    PubMed Central

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W. P.; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A.; Kelstrup, Christian D.; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S.; Olsen, Jesper V.; Heck, Albert J. R.; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    2014-01-01

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. PMID:24760958

  20. qcML: an exchange format for quality control metrics from mass spectrometry experiments.

    PubMed

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W P; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A; Kelstrup, Christian D; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S; Olsen, Jesper V; Heck, Albert J R; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    2014-08-01

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.

  1. An Opportunistic Routing Mechanism Combined with Long-Term and Short-Term Metrics for WMN

    PubMed Central

    Piao, Xianglan; Qiu, Tie

    2014-01-01

    WMN (wireless mesh network) is a useful wireless multihop network with tremendous research value. The routing strategy determines the performance of the network and the quality of transmission. A good routing algorithm will use the whole bandwidth of the network and assure the quality of service of traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, to improve the routing performance, an opportunistic routing mechanism combined with long-term and short-term metrics for WMN based on OLSR (optimized link state routing) and ETX is proposed in this paper. This mechanism always chooses the highest-throughput links to improve the performance of routing over WMN and thereby reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism with the metric of ETX. PMID:25250379
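
    The long-term metric ETX has a standard definition, ETX = 1/(d_f · d_r), where d_f and d_r are the forward and reverse probe delivery ratios. The sketch below combines it with a short-term throughput measurement to rank candidate next hops; the combination rule and field names are illustrative assumptions, not the mechanism proposed in the paper.

    ```python
    def etx(forward_delivery_ratio: float, reverse_delivery_ratio: float) -> float:
        """Expected transmission count of a link: ETX = 1 / (d_f * d_r)."""
        return 1.0 / (forward_delivery_ratio * reverse_delivery_ratio)

    def choose_next_hop(candidates):
        """Pick the candidate next hop with the best combined long-/short-term score.

        `candidates` is a list of dicts with keys 'node', 'd_f', 'd_r' (long-term probe
        statistics) and 'throughput' (recent short-term measurement, e.g. in Mbit/s).
        The weighting below is illustrative, not the combination used in the paper.
        """
        def score(c):
            return c["throughput"] / etx(c["d_f"], c["d_r"])
        return max(candidates, key=score)["node"]
    ```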

  2. An opportunistic routing mechanism combined with long-term and short-term metrics for WMN.

    PubMed

    Sun, Weifeng; Wang, Haotian; Piao, Xianglan; Qiu, Tie

    2014-01-01

    WMN (wireless mesh network) is a useful wireless multihop network with tremendous research value. The routing strategy determines the performance of the network and the quality of transmission. A good routing algorithm will use the whole bandwidth of the network and assure the quality of service of traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, to improve the routing performance, an opportunistic routing mechanism combined with long-term and short-term metrics for WMN based on OLSR (optimized link state routing) and ETX is proposed in this paper. This mechanism always chooses the highest-throughput links to improve the performance of routing over WMN and thereby reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism with the metric of ETX.

  3. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    PubMed

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E

    2016-09-08

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be used as an independent standardized procedure for detector performance assessment. © 2016 The Authors.
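
    The ROI-based measurements can be illustrated with a short sketch on a flat-field image. The definitions below (global signal nonuniformity as the max-min spread of ROI means, minimum per-ROI SNR) are simplified stand-ins; the TG-150 draft and the validated MATLAB tools used in the study define these quantities more carefully.

    ```python
    import numpy as np

    def roi_uniformity_metrics(flat_field: np.ndarray, roi_size: int = 128):
        """Simplified ROI-based uniformity/SNR metrics for a flat-field exposure.

        Illustrative definitions only; the exact TG-150 formulas differ in detail
        (local vs. global nonuniformity, trimming, anomalous-pixel rules, etc.).
        """
        h, w = flat_field.shape
        means, snrs = [], []
        for r in range(0, h - roi_size + 1, roi_size):
            for c in range(0, w - roi_size + 1, roi_size):
                roi = flat_field[r:r + roi_size, c:c + roi_size].astype(float)
                means.append(roi.mean())
                snrs.append(roi.mean() / (roi.std() + 1e-12))
        means = np.asarray(means)
        signal_nonuniformity = (means.max() - means.min()) / means.mean()
        return {"signal_nonuniformity": float(signal_nonuniformity),
                "min_snr": float(min(snrs))}
    ```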

  4. Evaluation of cassette‐based digital radiography detectors using standardized image quality metrics: AAPM TG‐150 Draft Image Detector Tests

    PubMed Central

    Greene, Travis C.; Nishino, Thomas K.; Willis, Charles E.

    2016-01-01

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be used as an independent standardized procedure for detector performance assessment. PACS number(s): 87.57.-s, 87.57.C PMID:27685102

  5. Toward a perceptual image quality assessment of color quantized images

    NASA Astrophysics Data System (ADS)

    Frackiewicz, Mariusz; Palus, Henryk

    2018-04-01

    Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These types of metrics, e.g., DSCSI, MDSIs, MDSIm, and HPSI, achieve the highest correlation coefficients with MOS in tests on six publicly available image databases. Research was limited to images distorted by two types of compression: JPEG and JPEG 2000. Statistical analysis of the correlation coefficients based on the Friedman test and post-hoc procedures showed that the differences between the four new perceptual metrics are not statistically significant.
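
    The statistical comparison described here can be sketched with SciPy. In this simplified version each image is treated as a block and each metric as a treatment; the inputs (per-image metric scores and MOS values) are hypothetical, and the paper's actual analysis operates on correlation coefficients per database together with post-hoc procedures.

    ```python
    import numpy as np
    from scipy import stats

    def compare_metrics(scores: dict, mos: np.ndarray):
        """Correlate each metric with MOS and test whether the metrics differ.

        scores: dict mapping metric name -> array of predicted quality, one per image.
        mos: array of mean opinion scores for the same images (hypothetical inputs).
        """
        # Rank-based (Spearman) correlation of each metric with subjective opinion.
        correlations = {name: stats.spearmanr(vals, mos)[0]
                        for name, vals in scores.items()}
        # Friedman test across metrics, with images acting as repeated-measures blocks.
        stat, p_value = stats.friedmanchisquare(*scores.values())
        return correlations, p_value
    ```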

  6. CUQI: cardiac ultrasound video quality index

    PubMed Central

    Razaak, Manzoor; Martini, Maria G.

    2016-01-01

    Medical images and videos are now increasingly part of modern telecommunication applications, including telemedicine, favored by advancements in video compression and communication technologies. Medical video quality evaluation is essential for modern applications since compression and transmission processes often compromise the video quality. Several state-of-the-art video quality metrics used for quality evaluation assess the perceptual quality of the video. For a medical video, however, assessing quality in terms of "diagnostic" value rather than "perceptual" quality is more important. We present a diagnostic-quality-oriented video quality metric for quality evaluation of cardiac ultrasound videos. Cardiac ultrasound videos are characterized by rapid repetitive cardiac motions and distinct structural information characteristics that are explored by the proposed metric. The proposed metric, the cardiac ultrasound video quality index (CUQI), is a full-reference metric and uses the motion and edge information of the cardiac ultrasound video to evaluate the video quality. The metric was evaluated for its performance in approximating the quality of cardiac ultrasound videos by testing its correlation with the subjective scores of medical experts. The results of our tests showed that the metric has high correlation with medical expert opinions and in several cases outperforms the state-of-the-art video quality metrics considered in our tests. PMID:27014715

  7. Measurement of Chronic Pain and Opioid Use Evaluation in Community-Based Persons with Serious Illnesses

    PubMed Central

    Naidu, Ramana K.

    2018-01-01

    Abstract Background: Chronic pain associated with serious illnesses is having a major impact on population health in the United States. Accountability for high quality care for community-dwelling patients with serious illnesses requires selection of metrics that capture the burden of chronic pain whose treatment may be enhanced or complicated by opioid use. Objective: Our aim was to evaluate options for assessing pain in seriously ill community dwelling adults, to discuss the use/abuse of opioids in individuals with chronic pain, and to suggest pain and opioid use metrics that can be considered for screening and evaluation of patient responses and quality care. Design: Structured literature review. Measurements: Evaluation of pain and opioid use assessment metrics and measures for their potential usefulness in the community. Results: Several pain and opioid assessment instruments are available for consideration. Yet, no one pain instrument has been identified as “the best” to assess pain in seriously ill community-dwelling patients. Screening tools exist that are specific to the assessment of risk in opioid management. Opioid screening can assess risk based on substance use history, general risk taking, and reward-seeking behavior. Conclusions: Accountability for high quality care for community-dwelling patients requires selection of metrics that will capture the burden of chronic pain and beneficial use or misuse of opioids. Future research is warranted to identify, modify, or develop instruments that contain important metrics, demonstrate a balance between sensitivity and specificity, and address patient preferences and quality outcomes. PMID:29091525

  8. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  9. Comparing image quality of print-on-demand books and photobooks from web-based vendors

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan; Bajorski, Peter; Burns, Peter; Fredericks, Erin; Rosen, Mitchell

    2010-01-01

    Because of the emergence of e-commerce and developments in print engines designed for economical output of very short runs, there are increased business opportunities and consumer options for print-on-demand books and photobooks. The current state of these printing modes allows for direct uploading of book files via the web, printing on nonoffset printers, and distributing by standard parcel or mail delivery services. The goal of this research is to assess the image quality of print-on-demand books and photobooks produced by various Web-based vendors and to identify correlations between psychophysical results and objective metrics. Six vendors were identified for one-off (single-copy) print-on-demand books, and seven vendors were identified for photobooks. Participants rank ordered overall quality of a subset of individual pages from each book, where the pages included text, photographs, or a combination of the two. Observers also reported overall quality ratings and price estimates for the bound books. Objective metrics of color gamut, color accuracy, accuracy of International Color Consortium profile usage, eye-weighted root mean square L*, and cascaded modulation transfer acutance were obtained and compared to the observer responses. We introduce some new methods for normalizing data as well as for strengthening the statistical significance of the results. Our approach includes the use of latent mixed-effect models. We found statistically significant correlation with overall image quality and some of the spatial metrics, but correlations between psychophysical results and other objective metrics were weak or nonexistent. Strong correlation was found between psychophysical results of overall quality assessment and estimated price associated with quality. The photobook set of vendors reached higher image-quality ratings than the set of print-on-demand vendors. However, the photobook set had higher image-quality variability.

  10. Establishing Quantitative Software Metrics in Department of the Navy Programs

    DTIC Science & Technology

    2016-04-01

    In accomplishing this goal, a need exists for a formalized set of software quality metrics. This document establishes the validity of those necessary …

  11. Stability metrics for multi-source biomedical data based on simplicial projections from probability distribution distances.

    PubMed

    Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M

    2017-02-01

    Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variability when dealing with multi-modal, multi-type, multivariate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions, and defined in the context of data quality assessment: a global probabilistic deviation metric and a source probabilistic outlyingness metric. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the source PDFs. The metrics were evaluated and demonstrated correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
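
    A minimal sketch of the distance computations underlying these metrics is shown below. It reports only simple summaries of the pairwise Jensen-Shannon distance matrix (a mean pairwise distance and a per-source distance to the average distribution); the paper's bounded metrics are derived from a simplex projection of these distances, which is not reproduced here.

    ```python
    import numpy as np
    from scipy.spatial.distance import jensenshannon

    def source_stability(pdfs: np.ndarray):
        """Simple multi-source stability summaries from Jensen-Shannon distances.

        pdfs: array of shape (n_sources, n_bins); each row is a discrete PDF over a
        common support. Returns a crude global variability value and a per-source
        "outlyingness" value relative to the average distribution.
        """
        n = len(pdfs)
        d = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                d[i, j] = d[j, i] = jensenshannon(pdfs[i], pdfs[j])
        central = pdfs.mean(axis=0)
        central /= central.sum()
        outlyingness = np.array([jensenshannon(p, central) for p in pdfs])
        global_variability = d[np.triu_indices(n, k=1)].mean()
        return global_variability, outlyingness
    ```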

  12. Modulated evaluation metrics for drug-based ontologies.

    PubMed

    Amith, Muhammad; Tao, Cui

    2017-04-24

    Research on ontology evaluation is scarce. If biomedical ontological datasets and knowledge bases are to be widely used, there needs to be quality control and evaluation of the content and structure of the ontology. This paper introduces how to effectively apply a semiotic-inspired approach to ontology evaluation, specifically to drug-related ontologies hosted on the National Center for Biomedical Ontology BioPortal. Using the semiotic-based evaluation framework for drug-based ontologies, we adjusted the quality metrics based on the semiotic features of drug ontologies and then compared the quality scores before and after tailoring. The tailored scores showed a more precise measurement and a tighter distribution than the untailored scores. The results of this study indicate that a tailored semiotic evaluation produces a more meaningful and accurate assessment of drug-based ontologies, pointing to the possible usefulness of semiotics in ontology evaluation.

  13. Value-Based Assessment of Radiology Reporting Using Radiologist-Referring Physician Two-Way Feedback System-a Design Thinking-Based Approach.

    PubMed

    Shaikh, Faiq; Hendrata, Kenneth; Kolowitz, Brian; Awan, Omer; Shrestha, Rasu; Deible, Christopher

    2017-06-01

    In the era of value-based healthcare, many aspects of medical care are being measured and assessed to improve quality and reduce costs. Radiology adds enormously to health care costs and is under pressure to adopt a more efficient system that incorporates essential metrics to assess its value and impact on outcomes. Most current systems tie radiologists' incentives and evaluations to RVU-based productivity metrics and peer-review-based quality metrics. In a new potential model, a radiologist's performance will have to increasingly depend on a number of parameters that define "value," beginning with peer review metrics that include referrer satisfaction and feedback from radiologists to the referring physician that evaluates the potency and validity of clinical information provided for a given study. These new dimensions of value measurement will directly impact the cascade of further medical management. We share our continued experience with this project that had two components: RESP (Referrer Evaluation System Pilot) and FRACI (Feedback from Radiologist Addressing Confounding Issues), which were introduced to the clinical radiology workflow in order to capture referrer-based and radiologist-based feedback on radiology reporting. We also share our insight into the principles of design thinking as applied in its planning and execution.

  14. Specification-based software sizing: An empirical investigation of function metrics

    NASA Technical Reports Server (NTRS)

    Jeffery, Ross; Stathis, John

    1993-01-01

    For some time the software industry has espoused the need for improved specification-based software size metrics. This paper reports on a study of nineteen recently developed systems in a variety of application domains. The systems were developed by a single software services corporation using a variety of languages. The study investigated several metric characteristics. It shows that: earlier research into inter-item correlation within the overall function count is partially supported; a priori function counts, in themselves, do not explain the majority of the effort variation in software development in the organization studied; documentation quality is critical to accurate function identification; and rater error is substantial in manual function counting. The implications of these findings for organizations using function-based metrics are explored.

  15. No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.

    PubMed

    Li, Xuelong; Guo, Qun; Lu, Xiaoqiang

    2016-05-13

    It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often unknown in practical applications. A further deficiency is that the spatial and temporal information of videos is rarely considered simultaneously. In this paper, we propose a new NR-VQA metric based on spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted based on the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; 3) the proposed method is universal for multiple types of distortions and robust across different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
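
    The feature-extraction-plus-regression pipeline can be sketched as follows. The block size, the summary statistics, and the use of LinearSVR are illustrative assumptions; the paper's NVS features and SVR configuration are more elaborate.

    ```python
    import numpy as np
    from scipy.fft import dctn
    from sklearn.svm import LinearSVR

    def spatiotemporal_features(video: np.ndarray, block=(4, 4, 4)):
        """Toy 3D-DCT statistics as an NR-VQA feature vector.

        video: array of shape (frames, height, width), grayscale. The mean/std
        summaries of block 3D-DCT magnitudes below are illustrative only.
        """
        t, h, w = block
        coeffs = []
        for f in range(0, video.shape[0] - t + 1, t):
            for r in range(0, video.shape[1] - h + 1, h):
                for c in range(0, video.shape[2] - w + 1, w):
                    cube = video[f:f + t, r:r + h, c:c + w].astype(float)
                    coeffs.append(dctn(cube, norm="ortho").ravel())
        coeffs = np.abs(np.asarray(coeffs))
        return np.concatenate([coeffs.mean(axis=0), coeffs.std(axis=0)])

    def train_quality_model(feature_matrix, subjective_scores):
        """Fit a linear support vector regressor mapping features to quality scores."""
        return LinearSVR(C=1.0, max_iter=10000).fit(feature_matrix, subjective_scores)
    ```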

  16. Real-time video quality monitoring

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Narvekar, Niranjan; Wang, Beibei; Ding, Ran; Zou, Dekun; Cash, Glenn; Bhagavathy, Sitaram; Bloom, Jeffrey

    2011-12-01

    The ITU-T Recommendation G.1070 is a standardized opinion model for video telephony applications that uses video bitrate, frame rate, and packet-loss rate to measure video quality. However, this model was originally designed as an offline quality planning tool and cannot be directly used for quality monitoring, since the three input parameters are not readily available within a network or at the decoder; there is also considerable room for improving the performance of this quality metric. In this article, we present a real-time video quality monitoring solution based on this Recommendation. We first propose a scheme to efficiently estimate the three parameters from video bitstreams, so that the model can be used as a real-time video quality monitoring tool. Furthermore, an enhanced algorithm based on the G.1070 model that provides more accurate quality prediction is proposed. Finally, to use this metric in real-world applications, we present an emerging example application of real-time quality measurement to the management of transmitted videos, especially those delivered to mobile devices.
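
    To make the parametric idea concrete, here is an illustrative opinion-model sketch in the spirit of G.1070. The functional form and every coefficient are invented for this example; the Recommendation itself specifies provisional coefficients per codec, video format, and display size, and the enhanced algorithm in this article differs as well.

    ```python
    import math

    def parametric_video_quality(bitrate_kbps, frame_rate, packet_loss_pct,
                                 v1=1.0, v2=4.0, v3=0.003, v4=8.0):
        """Illustrative parametric opinion model (made-up coefficients, MOS-like 1-5 scale).

        Not the actual G.1070 equations: it only mimics the qualitative behaviour of
        bitrate saturation, frame-rate penalty, and packet-loss degradation.
        """
        # Base quality grows with bitrate and saturates; very low frame rates are penalized.
        base = v1 + v2 * (1.0 - math.exp(-v3 * bitrate_kbps)) * min(frame_rate / 30.0, 1.0)
        # Packet loss degrades the coding quality exponentially.
        quality = 1.0 + (base - 1.0) * math.exp(-packet_loss_pct / v4)
        return max(1.0, min(5.0, quality))
    ```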

  17. Efficacy of single and multi-metric fish-based indices in tracking anthropogenic pressures in estuaries: An 8-year case study.

    PubMed

    Martinho, Filipe; Nyitrai, Daniel; Crespo, Daniel; Pardal, Miguel A

    2015-12-15

    Facing a generalized increase in water degradation, several programmes have been implemented for protecting and enhancing the water quality and associated wildlife, which rely on ecological indicators to assess the degree of deviation from a pristine state. Here, single (species number, Shannon-Wiener H', Pielou J') and multi-metric (Estuarine Fish Assessment Index, EFAI) community-based ecological quality measures were evaluated in a temperate estuary over an 8-year period (2005-2012), and established their relationships with an anthropogenic pressure index (API). Single metric indices were highly variable and neither concordant amongst themselves nor with the EFAI. The EFAI was the only index significantly correlated with the API, indicating that higher ecological quality was associated with lower anthropogenic pressure. Pressure scenarios were related with specific fish community composition, as a result of distinct food web complexity and nursery functioning of the estuary. Results were discussed in the scope of the implementation of water protection programmes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Comparison of macroinvertebrate-derived stream quality metrics between snag and riffle habitats

    USGS Publications Warehouse

    Stepenuck, K.F.; Crunkilton, R.L.; Bozek, Michael A.; Wang, L.

    2008-01-01

    We compared benthic macroinvertebrate assemblage structure at snag and riffle habitats in 43 Wisconsin streams across a range of watershed urbanization using a variety of stream quality metrics. Discriminant analysis indicated that dominant taxa at riffles and snags differed; hydropsychid caddisflies (Hydropsyche betteni and Cheumatopsyche spp.) and elmid beetles (Optioservus spp. and Stenelmis spp.) typified riffles, whereas isopods (Asellus intermedius) and amphipods (Hyalella azteca and Gammarus pseudolimnaeus) predominated in snags. Analysis of covariance indicated that samples from snag and riffle habitats differed significantly in their response to the urbanization gradient for the Hilsenhoff biotic index (BI), Shannon's diversity index, and percent of filterers, shredders, and pollution-intolerant Ephemeroptera, Plecoptera, and Trichoptera (EPT) at each stream site (p ≤ 0.10). These differences suggest that although macroinvertebrate assemblages present in either habitat type are sensitive to detecting the effects of urbanization, metrics derived from different habitats should not be intermixed when assessing stream quality through biomonitoring. This can be a limitation for resource managers who wish to compare water quality among streams where the same habitat type is not available at all stream locations, or where a specific habitat type (i.e., a riffle) is required to determine a metric value (i.e., BI). To account for differences in stream quality at sites lacking riffle habitat, snag-derived metric values can be adjusted based on those obtained from riffles that have been exposed to the same level of urbanization. Comparison of nonlinear regression equations relating stream quality metric values from the two habitat types to percent watershed urbanization indicated that, relative to riffles, snag habitats had on average a percent EPT lower by 30.2 percentage points, a lower diversity index value, and a BI value greater by 0.29. © 2008 American Water Resources Association.

  19. Quantitative metrics for assessment of chemical image quality and spatial resolution

    DOE PAGES

    Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.

    2016-02-28

    Rationale: Currently objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.

  20. Quantitative metrics for assessment of chemical image quality and spatial resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.

    Rationale: Currently objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.

  1. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.

  2. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858

  3. High-quality cardiopulmonary resuscitation: current and future directions.

    PubMed

    Abella, Benjamin S

    2016-06-01

    Cardiopulmonary resuscitation (CPR) represents the cornerstone of cardiac arrest resuscitation care. Prompt delivery of high-quality CPR can dramatically improve survival outcomes; however, the definitions of optimal CPR have evolved over several decades. The present review will discuss the metrics of CPR delivery, and the evidence supporting the importance of CPR quality to improve clinical outcomes. The introduction of new technologies to quantify metrics of CPR delivery has yielded important insights into CPR quality. Investigations using CPR recording devices have allowed the assessment of specific CPR performance parameters and their relative importance regarding return of spontaneous circulation and survival to hospital discharge. Additional work has suggested new opportunities to measure physiologic markers during CPR and potentially tailor CPR delivery to patient requirements. Through recent laboratory and clinical investigations, a more evidence-based definition of high-quality CPR continues to emerge. Exciting opportunities now exist to study quantitative metrics of CPR and potentially guide resuscitation care in a goal-directed fashion. Concepts of high-quality CPR have also informed new approaches to training and quality improvement efforts for cardiac arrest care.

  4. Development of quality metrics for ambulatory care in pediatric patients with tetralogy of Fallot.

    PubMed

    Villafane, Juan; Edwards, Thomas C; Diab, Karim A; Satou, Gary M; Saarel, Elizabeth; Lai, Wyman W; Serwer, Gerald A; Karpawich, Peter P; Cross, Russell; Schiff, Russell; Chowdhury, Devyani; Hougen, Thomas J

    2017-12-01

    The objective of this study was to develop quality metrics (QMs) relating to the ambulatory care of children after complete repair of tetralogy of Fallot (TOF). A workgroup team (WT) of pediatric cardiologists with expertise in all aspects of ambulatory cardiac management was formed at the request of the American College of Cardiology (ACC) and the Adult Congenital and Pediatric Cardiology Council (ACPC), to review published guidelines and consensus data relating to the ambulatory care of repaired TOF patients under the age of 18 years. A set of quality metrics (QMs) was proposed by the WT. The metrics went through a two-step evaluation process. In the first step, the RAND-UCLA modified Delphi methodology was employed and the metrics were voted on feasibility and validity by an expert panel. In the second step, QMs were put through an "open comments" process where feedback was provided by the ACPC members. The final QMs were approved by the ACPC council. The TOF WT formulated 9 QMs of which only 6 were submitted to the expert panel; 3 QMs passed the modified RAND-UCLA and went through the "open comments" process. Based on the feedback through the open comment process, only 1 metric was finally approved by the ACPC council. The ACPC Council was able to develop QM for ambulatory care of children with repaired TOF. These patients should have documented genetic testing for 22q11.2 deletion. However, lack of evidence in the literature made it a challenge to formulate other evidence-based QMs. © 2017 Wiley Periodicals, Inc.

  5. A probability metric for identifying high-performing facilities: an application for pay-for-performance programs.

    PubMed

    Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan

    2014-12-01

    Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (eg, quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. To illustrate an alternative continuous-valued metric for profiling facilities--the probability a facility is in a top quantile--and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (eg, a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (eg, being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
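
    Given posterior draws of each facility's composite score, the probability metric is straightforward to compute. The sketch below assumes the MCMC replications are available as a draws-by-facilities array with higher scores meaning better quality; the array layout and the quintile default are illustrative, not the exact WinBUGS workflow described in the abstract.

    ```python
    import numpy as np

    def prob_top_quantile(posterior_samples: np.ndarray, quantile: float = 0.2):
        """Probability that each facility is in the top `quantile` of the composite score.

        posterior_samples: array of shape (n_draws, n_facilities) of MCMC replications
        of the composite quality measure (higher = better). Returns one probability
        per facility.
        """
        n_draws, n_fac = posterior_samples.shape
        k = max(1, int(round(quantile * n_fac)))   # size of the top group, e.g. top quintile
        # Rank facilities within each draw and count how often each lands in the top k.
        order = np.argsort(-posterior_samples, axis=1)
        in_top = np.zeros((n_draws, n_fac), dtype=bool)
        np.put_along_axis(in_top, order[:, :k], True, axis=1)
        return in_top.mean(axis=0)
    ```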

  6. A method for the use of landscape metrics in freshwater research and management

    USGS Publications Warehouse

    Kearns, F.R.; Kelly, N.M.; Carter, J.L.; Resh, V.H.

    2005-01-01

    Freshwater research and management efforts could be greatly enhanced by a better understanding of the relationship between landscape-scale factors and water quality indicators. This is particularly true in urban areas, where land transformation impacts stream systems at a variety of scales. Despite advances in landscape quantification methods, several studies attempting to elucidate the relationship between land use/land cover (LULC) and water quality have resulted in mixed conclusions. However, these studies have largely relied on compositional landscape metrics. For urban and urbanizing watersheds in particular, the use of metrics that capture spatial pattern may further aid in distinguishing the effects of various urban growth patterns, as well as exploring the interplay between environmental and socioeconomic variables. However, to be truly useful for freshwater applications, pattern metrics must be optimized based on characteristic watershed properties and common water quality point sampling methods. Using a freely available LULC data set for the Santa Clara Basin, California, USA, we quantified landscape composition and configuration for subwatershed areas upstream of individual sampling sites, reducing the number of metrics based on: (1) sensitivity to changes in extent and (2) redundancy, as determined by a multivariate factor analysis. The first two factors, interpreted as (1) patch density and distribution and (2) patch shape and landscape subdivision, explained approximately 85% of the variation in the data set, and are highly reflective of the heterogeneous urban development pattern found in the study area. Although offering slightly less explanatory power, compositional metrics can provide important contextual information. © Springer 2005.

  7. A proteomics performance standard to support measurement quality in proteomics.

    PubMed

    Beasley-Green, Ashley; Bunk, David; Rudnick, Paul; Kilpatrick, Lisa; Phinney, Karen

    2012-04-01

    The emergence of MS-based proteomic platforms as a prominent technology utilized in biochemical and biomedical research has increased the need for high-quality MS measurements. To address this need, National Institute of Standards and Technology (NIST) reference material (RM) 8323 yeast protein extract is introduced as a proteomics quality control material for benchmarking the preanalytical and analytical performance of proteomics-based experimental workflows. RM 8323 yeast protein extract is based upon the well-characterized eukaryote Saccharomyces cerevisiae and can be utilized in the design and optimization of proteomics-based methodologies from sample preparation to data analysis. To demonstrate its utility as a proteomics quality control material, we coupled LC-MS/MS measurements of RM 8323 with the NIST MS Quality Control (MSQC) performance metrics to quantitatively assess the LC-MS/MS instrumentation parameters that influence measurement accuracy, repeatability, and reproducibility. Due to the complexity of the yeast proteome, we also demonstrate how NIST RM 8323, along with the NIST MSQC performance metrics, can be used in the evaluation and optimization of proteomics-based sample preparation methods. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Developing Quality Physical Education through Student Assessments

    ERIC Educational Resources Information Center

    Fisette, Jennifer L.; Placek, Judith H.; Avery, Marybell; Dyson, Ben; Fox, Connie; Franck, Marian; Graber, Kim; Rink, Judith; Zhu, Weimo

    2009-01-01

    The National Association of Sport and Physical Education (NASPE) is committed to providing teachers with the support and guiding principles for implementing valid assessments. Its goal is for physical educators to utilize PE Metrics to measure student learning based on the national standards. The first PE Metrics text provides teachers with…

  9. Quality evaluation of motion-compensated edge artifacts in compressed video.

    PubMed

    Leontaris, Athanasios; Cosman, Pamela C; Reibman, Amy R

    2007-04-01

    Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.

  10. Evaluation of image deblurring methods via a classification metric

    NASA Astrophysics Data System (ADS)

    Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo

    2012-09-01

    The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.

  11. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Tingting; Ruan, Dan

    2016-02-01

    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.
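
    A minimal sketch of surrogate-based fusion-set selection follows, using global normalized cross-correlation as the surrogate similarity and a fixed fusion-set size k. Both choices are assumptions for illustration; the paper's contribution is precisely the inference model relating surrogates to the oracle geometric metric and the probabilistic guidance on choosing k.

    ```python
    import numpy as np

    def select_fusion_set(target: np.ndarray, atlases, k: int = 5):
        """Rank atlases by a surrogate image-similarity metric and keep the top k.

        target: the image to segment; atlases: list of registered atlas images of the
        same shape. Returns the indices of the k most similar atlases and their scores.
        """
        def ncc(a, b):
            # Global normalized cross-correlation as an example surrogate metric.
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float((a * b).mean())

        similarities = [ncc(target, atlas) for atlas in atlases]
        order = np.argsort(similarities)[::-1]
        return list(order[:k]), [similarities[i] for i in order[:k]]
    ```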

  12. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale

    PubMed Central

    Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Overview Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. Cluster Quality Metrics We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Network Clustering Algorithms Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters. PMID:27391786
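
    The stand-alone and information-recovery metrics named here are available in common libraries, which makes this kind of comparison easy to sketch. The snippet below uses NetworkX for modularity and conductance and scikit-learn for adjusted Rand and NMI; summarizing conductance by its worst cluster is a simplification for illustration, not the exact aggregation used in the study.

    ```python
    import networkx as nx
    from networkx.algorithms.community import modularity
    from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

    def evaluate_clustering(graph: nx.Graph, communities, true_labels, pred_labels):
        """Stand-alone vs. information-recovery metrics for one clustering of a graph.

        `communities` is a list of node sets (the clustering); `true_labels` and
        `pred_labels` are per-node label sequences in the same node order
        (ground truth vs. algorithm output).
        """
        q = modularity(graph, communities)
        conductances = [nx.conductance(graph, c) for c in communities
                        if 0 < len(c) < graph.number_of_nodes()]
        return {
            "modularity": q,
            "worst_conductance": max(conductances) if conductances else float("nan"),
            "adjusted_rand": adjusted_rand_score(true_labels, pred_labels),
            "nmi": normalized_mutual_info_score(true_labels, pred_labels),
        }
    ```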

  13. Development of a multimetric index for assessing the biological condition of the Ohio River

    USGS Publications Warehouse

    Emery, E.B.; Simon, T.P.; McCormick, F.H.; Angermeier, P.L.; Deshon, J.E.; Yoder, C.O.; Sanders, R.E.; Pearson, W.D.; Hickman, G.D.; Reash, R.J.; Thomas, J.A.

    2003-01-01

    The use of fish communities to assess environmental quality is common for streams, but a standard methodology for large rivers is as yet largely undeveloped. We developed an index to assess the condition of fish assemblages along 1,580 km of the Ohio River. Representative samples of fish assemblages were collected from 709 Ohio River reaches, including 318 "least-impacted" sites, from 1991 to 2001 by means of standardized nighttime boat-electrofishing techniques. We evaluated 55 candidate metrics based on attributes of fish assemblage structure and function to derive a multimetric index of river health. We examined the spatial (by river kilometer) and temporal variability of these metrics and assessed their responsiveness to anthropogenic disturbances, namely, effluents, turbidity, and highly embedded substrates. The resulting Ohio River Fish Index (ORFIn) comprises 13 metrics selected because they responded predictably to measures of human disturbance or reflected desirable features of the Ohio River. We retained two metrics (the number of intolerant species and the number of sucker species [family Catostomidae]) from Karr's original index of biotic integrity. Six metrics were modified from indices developed for the upper Ohio River (the number of native species; number of great-river species; number of centrarchid species; the number of deformities, eroded fins and barbels, lesions, and tumors; percent individuals as simple lithophils; and percent individuals as tolerant species). We also incorporated three trophic metrics (the percent of individuals as detritivores, invertivores, and piscivores), one metric based on catch per unit effort, and one metric based on the percent of individuals as nonindigenous fish species. The ORFIn declined significantly where anthropogenic effects on substrate and water quality were prevalent and was significantly lower in the first 500 m below point source discharges than at least-impacted sites nearby. Although additional research on the temporal stability of the metrics and index will likely enhance the reliability of the ORFIn, its incorporation into Ohio River assessments still represents an improvement over current physicochemical protocols.

  14. SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richardson, S; Mehta, V

    Purpose: The “SMART” (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25Gy over 5 days followed by surgical resection within 7 days. Early clinical results suggest that this approach is very promising, but also logistically challenging due to the multidisciplinary involvement. Due to the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses was monitored for safety with quality metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics or PQMs™ were created to calculate an objective scoring function for each plan. This allows for an objective assessment of the quality of the plan and a benchmark for plan improvement for subsequent patients. The priorities of various components were incorporated based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan quality metrics utilized in scoring included doses to OAR and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), and grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patients experienced any pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, these will be easy to investigate and incorporate into the metrics. This will improve the safe delivery of large doses for these patients.

  15. Environmental Quality and Aquatic Invertebrate Metrics Relationships at Patagonian Wetlands Subjected to Livestock Grazing Pressures.

    PubMed

    Epele, Luis Beltrán; Miserendino, María Laura

    2015-01-01

    Livestock grazing can compromise the biotic integrity and health of wetlands, especially in remote areas like Patagonia, which provide habitat for several endemic terrestrial and aquatic species. Understanding the effects of these land use practices on invertebrate communities can help prevent the deterioration of wetlands and provide insights for restoration. In this contribution, we assessed the responses of 36 metrics based on the structural and functional attributes of invertebrates (130 taxa) at 30 Patagonian wetlands that were subject to different levels of livestock grazing intensity. These levels were categorized as low, medium and high based on eight features (livestock stock densities plus seven wetland measurements). Significant changes in environmental features were detected across the gradient of wetlands, mainly related to pH, conductivity, and nutrient values. Regardless of rainfall gradient, symptoms of eutrophication were remarkable at some highly disturbed sites. Seven invertebrate metrics consistently and accurately responded to livestock grazing on wetlands. All of them were negatively related to increased levels of grazing disturbance, with the number of insect families appearing as the most robust measure. A multivariate approach (RDA) revealed that invertebrate metrics were significantly affected by environmental variables related to water quality: in particular, pH, conductivity, dissolved oxygen, nutrient concentrations, and the richness and coverage of aquatic plants. Our results suggest that the seven aforementioned metrics could be used to assess ecological quality in the arid and semi-arid wetlands of Patagonia, helping to ensure the creation of protected areas and their associated ecological services.

  16. Backward Registration Based Aspect Ratio Similarity (ARS) for Image Retargeting Quality Assessment.

    PubMed

    Zhang, Yabin; Fang, Yuming; Lin, Weisi; Zhang, Xinfeng; Li, Leida

    2016-06-28

    During the past few years, various kinds of content-aware image retargeting operators have been proposed for image resizing. However, the lack of effective objective retargeting quality assessment metrics limits the further development of image retargeting techniques. Different from traditional Image Quality Assessment (IQA) metrics, the quality degradation during image retargeting is caused by artificial retargeting modifications, and the difficulty for Image Retargeting Quality Assessment (IRQA) lies in the alteration of the image resolution and content, which makes it impossible to evaluate the quality degradation directly as traditional IQA does. In this paper, we interpret image retargeting in a unified framework of resampling grid generation and forward resampling. We show that geometric change estimation is an efficient way to clarify the relationship between the images. We formulate the geometric change estimation as a Backward Registration problem with Markov Random Field (MRF) and provide an effective solution. The geometric change aims to provide evidence about how the original image is resized into the target image. Under the guidance of the geometric change, we develop a novel Aspect Ratio Similarity metric (ARS) to evaluate the visual quality of retargeted images by exploiting the local block changes with a visual importance pooling strategy. Experimental results on the publicly available MIT RetargetMe and CUHK datasets demonstrate that the proposed ARS can predict more accurate visual quality of retargeted images compared with state-of-the-art IRQA metrics.

  17. Establishing Qualitative Software Metrics in Department of the Navy Programs

    DTIC Science & Technology

    2015-10-29

    dedicated to provide the highest quality software to its users. In doing, there is a need for a formalized set of Software Quality Metrics. The goal...of this paper is to establish the validity of those necessary Quality metrics. In our approach we collected the data of over a dozen programs...provide the necessary variable data for our formulas and tested the formulas for validity. Keywords: metrics; software; quality. I. PURPOSE Space

  18. HealthTrust: a social network approach for retrieving online health videos.

    PubMed

    Fernandez-Luque, Luis; Karlsen, Randi; Melton, Genevieve B

    2012-01-31

    Social media are becoming mainstream in the health domain. Despite the large volume of accurate and trustworthy health information available on social media platforms, finding good-quality health information can be difficult. Misleading health information can often be popular (eg, antivaccination videos) and therefore highly rated by general search engines. We believe that community wisdom about the quality of health information can be harnessed to help create tools for retrieving good-quality social media content. Our objective was to explore approaches for extracting metrics about authoritativeness in online health communities and to examine how these metrics positively correlate with the quality of the content. We designed a metric, called HealthTrust, that estimates the trustworthiness of social media content (eg, blog posts or videos) in a health community. The HealthTrust metric calculates reputation in an online health community based on link analysis. We used the metric to retrieve YouTube videos and channels about diabetes. In two different experiments, health consumers provided 427 ratings of 17 videos and professionals gave 162 ratings of 23 videos. In addition, two professionals reviewed 30 diabetes channels. HealthTrust may be used for retrieving online videos on diabetes, since it performed better than YouTube Search in most cases. Overall, of 20 potential channels, HealthTrust's filtering allowed only 3 bad channels (15%) versus 8 (40%) on the YouTube list. Misleading and graphic videos (eg, featuring amputations) were more commonly found by YouTube Search than by searches based on HealthTrust. However, some videos from trusted sources had low HealthTrust scores, mostly from general health content providers, which were therefore not highly connected in the diabetes community. When comparing video ratings from our reviewers, we found that HealthTrust achieved a positive and statistically significant correlation with professionals (Pearson r₁₀ = .65, P = .02) and a trend toward significance with health consumers (r₇ = .65, P = .06) with videos on hemoglobin A1c, but it did not perform as well with diabetic foot videos. The trust-based metric HealthTrust showed promising results when used to retrieve diabetes content from YouTube. Our research indicates that social network analysis may be used to identify trustworthy social media in health communities.

  19. Application of Sigma Metrics Analysis for the Assessment and Modification of Quality Control Program in the Clinical Chemistry Laboratory of a Tertiary Care Hospital.

    PubMed

    Iqbal, Sahar; Mustansar, Tazeen

    2017-03-01

    Sigma is a metric that quantifies the performance of a process as a rate of defects per million opportunities. In clinical laboratories, sigma metric analysis is used to assess the performance of the laboratory process system. Sigma metric analysis is also used as a quality management strategy for a laboratory process, to improve quality by addressing errors after they are identified. The aim of this study is to evaluate the errors in quality control of the analytical phase of the laboratory system by sigma metric. For this purpose, sigma metric analysis was done for analytes using internal and external quality control as quality indicators. Results of sigma metric analysis were used to identify gaps and the need for modification in the strategy of the laboratory quality control procedure. Sigma metrics were calculated for the quality control program of ten clinical chemistry analytes, including glucose, chloride, cholesterol, triglyceride, HDL, albumin, direct bilirubin, total bilirubin, protein, and creatinine, at two control levels. To calculate the sigma metric, imprecision and bias were calculated from internal and external quality control data, respectively. The minimum acceptable performance was considered as 3 sigma. Westgard sigma rules were applied to customize the quality control procedure. Sigma level was found acceptable (≥3) for glucose (L2), cholesterol, triglyceride, HDL, direct bilirubin and creatinine at both levels of control. For the rest of the analytes, the sigma metric was <3. The lowest value for sigma was found for chloride (1.1) at L2. The highest value of sigma was found for creatinine (10.1) at L3. HDL was found with the highest sigma values at both control levels (8.8 and 8.0 at L2 and L3, respectively). We conclude that analytes with a sigma value <3 require strict monitoring and modification of the quality control procedure. In this study, application of sigma rules provided a practical solution for an improved and focused design of the QC procedure.
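
    For reference, the sigma metric in such studies is commonly computed as sigma = (TEa − |bias|) / CV, with the allowable total error (TEa) taken from quality specifications, bias from EQAS results, and CV from internal QC. The sketch below applies that conventional formula; the TEa targets, bias, and CV values are made-up illustrations, not the study's data.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (allowable total error - |bias|) / imprecision, all expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative values only (assumed, not taken from the study).
analytes = {
    # name: (TEa %, bias %, CV %)
    "glucose":    (10.0, 2.1, 2.3),
    "chloride":   (5.0,  1.8, 2.9),
    "creatinine": (15.0, 1.2, 1.4),
}
for name, (tea, bias, cv) in analytes.items():
    s = sigma_metric(tea, bias, cv)
    verdict = "acceptable (>=3 sigma)" if s >= 3 else "needs stricter QC (<3 sigma)"
    print(f"{name:10s} sigma = {s:4.1f} -> {verdict}")
```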

  20. Guiding Principles and Checklist for Population-Based Quality Metrics

    PubMed Central

    Brunelli, Steven M.; Maddux, Franklin W.; Parker, Thomas F.; Johnson, Douglas; Nissenson, Allen R.; Collins, Allan; Lacson, Eduardo

    2014-01-01

    The Centers for Medicare and Medicaid Services oversees the ESRD Quality Incentive Program to ensure that the highest quality of health care is provided by outpatient dialysis facilities that treat patients with ESRD. To that end, the Centers for Medicare and Medicaid Services uses clinical performance measures to evaluate quality of care under a pay-for-performance or value-based purchasing model. Now more than ever, the ESRD therapeutic area serves as the vanguard of health care delivery. By translating medical evidence into clinical performance measures, the ESRD Prospective Payment System became the first disease-specific sector using the pay-for-performance model. A major challenge for the creation and implementation of clinical performance measures is the adjustments that are necessary to transition from taking care of individual patients to managing the care of patient populations. The National Quality Forum and others have developed effective and appropriate population-based clinical performance measures (quality metrics) that can be aggregated at the physician, hospital, dialysis facility, nursing home, or surgery center level. Clinical performance measures considered for endorsement by the National Quality Forum are evaluated using five key criteria: evidence, performance gap, and priority (impact); reliability; validity; feasibility; and usability and use. We have developed a checklist of special considerations for clinical performance measure development according to these National Quality Forum criteria. Although the checklist is focused on ESRD, it could also have broad application to chronic disease states, where health care delivery organizations seek to enhance quality, safety, and efficiency of their services. Clinical performance measures are likely to become the norm for tracking performance for health care insurers. Thus, it is critical that the methodologies used to develop such metrics serve the payer and the provider and, most importantly, reflect what represents the best care to improve patient outcomes. PMID:24558050

  1. Predicting the Overall Spatial Quality of Automotive Audio Systems

    NASA Astrophysics Data System (ADS)

    Koya, Daisuke

    The spatial quality of automotive audio systems is often compromised due to their non-ideal listening environments. Automotive audio systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with similar reliability to formal listening tests but take less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of the spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, those that were proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that are interaural cross-correlation (IACC) based, relate to localisation of the frontal audio scene, and account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. The resulting model predicts the overall spatial quality of 2- and 5-channel automotive audio systems with a cross-validation performance of R² = 0.85 and root-mean-square error (RMSE) = 11.03%.

  2. Attitudes and Opinions of Canadian Nephrologists Toward Continuous Quality Improvement Options.

    PubMed

    Iskander, Carina; McQuillan, Rory; Nesrallah, Gihad; Rabbat, Christian; Mendelssohn, David C

    2017-01-01

    A shift to holding individual physicians accountable for patient outcomes, rather than facilities, is intuitively attractive to policy makers and to the public. We were interested in nephrologists' attitudes to, and awareness of, quality metrics and how nephrologists would view a potential switch from the current model of facility-based quality measurement and reporting to publicly available reports at the individual physician level. The study was conducted using a web-based survey instrument (Online Appendix 1). The survey was initially pilot tested on a group of 8 nephrologists from across Canada. The survey was then finalized and e-mailed to 330 nephrologists through the Canadian Society of Nephrology (CSN) e-mail distribution list. The 127 respondents were 80% university based, and 33% were medical/dialysis directors. The response rate was 43%. Results demonstrate that 89% of Canadian nephrologists are engaged in efforts to improve the quality of patient care. A minority of those surveyed (29%) had training in quality improvement. They feel accountable for this and would welcome the inclusion of patient-centered metrics of care quality. Support for public reporting as an effective strategy on an individual nephrologist level was 30%. Support for public reporting of individual nephrologist performance was low. The care of nephrology patients will be best served by the continued development of a critical mass of physicians trained in patient safety and quality improvement, by focusing on patient-centered metrics of care delivery, and by validating that all proposed new methods are shown to improve patient care and outcomes.

  3. Reduced reference image quality assessment via sub-image similarity based redundancy measurement

    NASA Astrophysics Data System (ADS)

    Mou, Xuanqin; Xue, Wufeng; Zhang, Lei

    2012-03-01

    The reduced reference (RR) image quality assessment (IQA) has been attracting much attention from researchers for its fidelity to human perception and flexibility in practice. A promising RR metric should be able to predict the perceptual quality of an image accurately while using as few features as possible. In this paper, a novel RR metric is presented, whose novelty lies in two aspects. Firstly, it measures the image redundancy by calculating the so-called Sub-image Similarity (SIS), and the image quality is measured by comparing the SIS between the reference image and the test image. Secondly, the SIS is computed by the ratios of NSE (Non-shift Edge) between pairs of sub-images. Experiments on two IQA databases (i.e., the LIVE and CSIQ databases) show that by using only 6 features, the proposed metric can work very well with high correlations between the subjective and objective scores. In particular, it works consistently well across all the distortion types.

  4. Calculation and use of an environment's characteristic software metric set

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.

    1985-01-01

    Since both cost/quality and production environments differ, this study presents an approach for customizing a characteristic set of software metrics to an environment. The approach is applied in the Software Engineering Laboratory (SEL), a NASA Goddard production environment, to 49 candidate process and product metrics of 652 modules from six projects (51,000 to 112,000 lines each). For this particular environment, the method yielded the characteristic metric set (source lines, fault correction effort per executable statement, design effort, code effort, number of I/O parameters, number of versions). The uses examined for a characteristic metric set include forecasting the effort for development, modification, and fault correction of modules based on historical data.

  5. Empirical Evaluation of Hunk Metrics as Bug Predictors

    NASA Astrophysics Data System (ADS)

    Ferzund, Javed; Ahsan, Syed Nadeem; Wotawa, Franz

    Reducing the number of bugs is a crucial issue during software development and maintenance. Software process and product metrics are good indicators of software complexity. These metrics have been used to build bug predictor models to help developers maintain the quality of software. In this paper we empirically evaluate the use of hunk metrics as predictors of bugs. We present a technique for bug prediction that works at the smallest units of code change, called hunks. We build bug prediction models using random forests, an efficient machine learning classifier. Hunk metrics are used to train the classifier, and each hunk metric is evaluated for its bug prediction capabilities. Our classifier can classify individual hunks as buggy or bug-free with 86% accuracy, 83% buggy hunk precision, and 77% buggy hunk recall. We find that history-based and change-level hunk metrics are better predictors of bugs than code-level hunk metrics.
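
    A hedged sketch of the general approach, not the authors' pipeline: a random forest trained on a table of per-hunk metrics and evaluated with accuracy, buggy-hunk precision, and buggy-hunk recall. The feature names, the labelling rule, and the synthetic data are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Synthetic stand-in for a hunk-metric table: each row is one hunk, columns are
# illustrative metrics (lines added, lines deleted, past faults in the file,
# author change count). Names and data are assumptions, not the paper's dataset.
rng = np.random.default_rng(7)
X = rng.poisson(lam=[8, 4, 2, 5], size=(2000, 4)).astype(float)
# Toy labelling rule: hunks touching fault-prone files are more likely to be buggy.
y = (X[:, 2] + 0.3 * X[:, 0] + rng.normal(0, 2, 2000) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)
clf = RandomForestClassifier(n_estimators=200, random_state=7)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy :", round(accuracy_score(y_test, pred), 3))
print("precision:", round(precision_score(y_test, pred), 3))   # buggy-hunk precision
print("recall   :", round(recall_score(y_test, pred), 3))      # buggy-hunk recall
print("feature importances:", clf.feature_importances_.round(3))
```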

  6. [Clinical trial data management and quality metrics system].

    PubMed

    Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan

    2015-11-01

    A data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g., study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, and traceability. Some frequently used general quality metrics are also introduced. This paper provides as much detail as possible for each metric, including its definition, purpose, evaluation, referenced benchmark, and recommended targets for real-world practice. It is important that sponsors and data management service providers establish a robust integrated clinical trial data quality management system to ensure sustainable high quality of clinical trial deliverables. It will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers by using objective metrics from real clinical trials. We hope this will be a significant input to accelerate the improvement of clinical trial data quality in the industry.
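
    Two of the ALCOA+-style metrics mentioned above, completeness and timeliness, are straightforward to compute from a case report form extract. The sketch below uses pandas with invented column names and a 7-day entry-lag threshold; both are assumptions for illustration rather than values from the paper.

```python
import pandas as pd

# Toy CRF extract; column names and thresholds are illustrative assumptions.
crf = pd.DataFrame({
    "subject_id": [101, 102, 103, 104, 105],
    "visit_date": pd.to_datetime(["2024-01-05", "2024-01-06", None, "2024-01-09", "2024-01-12"]),
    "entry_date": pd.to_datetime(["2024-01-07", "2024-01-20", "2024-01-10", "2024-01-11", "2024-01-13"]),
    "lab_result": [5.2, None, 4.8, 6.1, None],
})

# Completeness: share of non-missing values per field.
completeness = crf.notna().mean()

# Timeliness: share of records entered within 7 days of the visit.
entry_lag = (crf["entry_date"] - crf["visit_date"]).dt.days
timeliness = (entry_lag <= 7).mean()

print("completeness per field:\n", completeness.round(2))
print("entered within 7 days:", round(float(timeliness), 2))
```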

  7. Analysis of Open Education Service Quality with the Descriptive-Quantitative Approach

    ERIC Educational Resources Information Center

    Priyogi, Bilih; Santoso, Harry B.; Berliyanto; Hasibuan, Zainal A.

    2017-01-01

    The concept of Open Education (OE) is based on the philosophy of e-Learning, which aims to provide a learning environment anywhere, anytime, and for anyone. One of the main issues in the development of OE services is the availability of a quality assurance mechanism. This study proposes a metric for measuring the quality of OE service. Based on…

  8. Development and Implementation of a Design Metric for Systems Containing Long-Term Fluid Loops

    NASA Technical Reports Server (NTRS)

    Steele, John W.

    2016-01-01

    John Steele, a chemist and technical fellow from United Technologies Corporation, provided a water quality module to assist engineers and scientists with a metric tool to evaluate risks associated with the design of space systems with fluid loops. This design metric is a methodical, quantitative, lessons-learned based means to evaluate the robustness of a long-term fluid loop system design. The tool was developed by engineers from a cross-section of disciplines with decades of experience in problem resolution.

  9. Better big data.

    PubMed

    Al Kazzi, Elie S; Hutfless, Susan

    2015-01-01

    By 2018, Medicare payments will be tied to quality of care. The Centers for Medicare and Medicaid Services currently use quality-based metrics for some reimbursements through their different programs. Existing and future quality metrics will rely on risk adjustment to avoid unfairly punishing those who see the sickest, highest-risk patients. Despite the limitations of the data used for risk adjustment, there are potential solutions to improve the accuracy of these codes, such as calibrating data by merging databases and compiling information collected for multiple reporting programs. In addition, healthcare staff should be informed about the importance of risk adjustment for quality of care assessment and reimbursement. As the number of encounters tied to value-based reimbursements increases in inpatient and outpatient care, coupled with accurate data collection and utilization, the methods used for risk adjustment could be expanded to better account for differences in the care delivered in diverse settings.

  10. Results, Knowledge, and Attitudes Regarding an Incentive Compensation Plan in a Hospital-Based, Academic, Employed Physician Multispecialty Group.

    PubMed

    Dolan, Robert W; Nesto, Richard; Ellender, Stacey; Luccessi, Christopher

    Hospitals and healthcare systems are introducing incentive metrics into compensation plans that align with value-based payment methodologies. These incentive measures should be considered a practical application of the transition from volume to value and will likely replace traditional productivity-based compensation in the future. During the transition, there will be provider resistance and implementation challenges. This article examines a large multispecialty group's experience with a newly implemented incentive compensation plan, including the structure of the plan, formulas for calculation of the payments, the mix of quality and productivity metrics, and metric threshold achievement. Three rounds of surveys with comments were collected to measure knowledge and attitudes regarding the plan. Lessons learned and specific recommendations for success are described. Participants' knowledge and attitudes regarding the plan are important considerations and affect morale and engagement. Significant provider dissatisfaction with the plan was found. Careful metric selection, design, and management are critical activities that will facilitate provider acceptance and support. Improvements in data collection and reporting will be needed to produce reliable metrics that can supplant traditional volume-based productivity measures.

  11. Sigma metrics as a tool for evaluating the performance of internal quality control in a clinical chemistry laboratory.

    PubMed

    Kumar, B Vinodh; Mohan, Thuthi

    2018-01-01

    Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a sigma scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for the appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study were the IQC coefficient of variation percentage and the External Quality Assurance Scheme (EQAS) bias percentage for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQC, the same four analytes as in level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes below the 6 sigma level, the quality goal index (QGI) was <0.8, indicating that imprecision was the area requiring improvement, except for cholesterol, whose QGI of >1.2 indicated inaccuracy. This study shows that the sigma metric is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
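
    The quality goal index referred to above is conventionally computed as QGI = bias / (1.5 × CV), with QGI < 0.8 usually read as an imprecision problem and QGI > 1.2 as an inaccuracy problem; that formula and interpretation, and the bias/CV pairs below, are stated as common practice rather than taken from the study.

```python
def quality_goal_index(bias_pct, cv_pct):
    """QGI = bias / (1.5 * CV); conventional formula (assumed, not quoted from the study)."""
    return bias_pct / (1.5 * cv_pct)

def problem_area(qgi):
    if qgi < 0.8:
        return "imprecision"
    if qgi > 1.2:
        return "inaccuracy"
    return "imprecision and inaccuracy"

# Illustrative bias/CV pairs (assumed values, not the study's data).
for analyte, bias, cv in [("urea", 2.5, 4.8), ("cholesterol", 4.2, 2.1)]:
    qgi = quality_goal_index(bias, cv)
    print(f"{analyte:12s} QGI = {qgi:.2f} -> improve {problem_area(qgi)}")
```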

  12. Workshop summary: 'Integrating air quality and climate mitigation - is there a need for new metrics to support decision making?'

    NASA Astrophysics Data System (ADS)

    von Schneidemesser, E.; Schmale, J.; Van Aardenne, J.

    2013-12-01

    Air pollution and climate change are often treated at the national and international level as separate problems under different regulatory or thematic frameworks and different policy departments. With air pollution and climate change being strongly linked with regard to their causes, effects and mitigation options, the integration of policies that steer air pollutant and greenhouse gas emission reductions might result in cost-efficient, more effective and thus more sustainable tackling of the two problems. To support informed decision making and to work towards an integrated air quality and climate change mitigation policy requires the identification, quantification and communication of present-day and potential future co-benefits and trade-offs. The identification of co-benefits and trade-offs requires the application of appropriate metrics that are well rooted in science, easy to understand and reflect the needs of policy, industry and the public for informed decision making. For the purpose of this workshop, metrics were loosely defined as a quantified measure of effect or impact used to inform decision-making and to evaluate mitigation measures. The workshop, held on October 9 and 10 and co-organized by the European Environment Agency and the Institute for Advanced Sustainability Studies, brought together representatives from science, policy, NGOs, and industry to discuss whether currently available metrics are 'fit for purpose' or whether there is a need to develop alternative metrics or reassess the way current metrics are used and communicated. Based on the workshop outcome, the presentation will (a) summarize the informational needs and current application of metrics by the end-users, who, depending on their field and area of operation, might require health, policy, and/or economically relevant parameters at different scales, (b) provide an overview of the state of the science of currently used and newly developed metrics, and the scientific validity of these metrics, and (c) identify gaps in the current information base, whether from the scientific development of metrics or their application by different users.

  13. More quality measures versus measuring what matters: a call for balance and parsimony

    PubMed Central

    Nelson, Eugene C; Pryor, David B; James, Brent; Swensen, Stephen J; Kaplan, Gary S; Weissberg, Jed I; Bisognano, Maureen; Yates, Gary R; Hunt, Gordon C

    2012-01-01

    External groups requiring measures now include public and private payers, regulators, accreditors and others that certify performance levels for consumers, patients and payers. Although benefits have accrued from the growth in quality measurement, the recent explosion in the number of measures threatens to shift resources from improving quality to cover a plethora of quality-performance metrics that may have a limited impact on the things that patients and payers want and need (ie, better outcomes, better care, and lower per capita costs). Here we propose a policy that quality measurement should be: balanced to meet the need of end users to judge quality and cost performance and the need of providers to continuously improve the quality, outcomes and costs of their services; and parsimonious to measure quality, outcomes and costs with appropriate metrics that are selected based on end-user needs. PMID:22893696

  14. More quality measures versus measuring what matters: a call for balance and parsimony.

    PubMed

    Meyer, Gregg S; Nelson, Eugene C; Pryor, David B; James, Brent; Swensen, Stephen J; Kaplan, Gary S; Weissberg, Jed I; Bisognano, Maureen; Yates, Gary R; Hunt, Gordon C

    2012-11-01

    External groups requiring measures now include public and private payers, regulators, accreditors and others that certify performance levels for consumers, patients and payers. Although benefits have accrued from the growth in quality measurement, the recent explosion in the number of measures threatens to shift resources from improving quality to cover a plethora of quality-performance metrics that may have a limited impact on the things that patients and payers want and need (ie, better outcomes, better care, and lower per capita costs). Here we propose a policy that quality measurement should be: balanced to meet the need of end users to judge quality and cost performance and the need of providers to continuously improve the quality, outcomes and costs of their services; and parsimonious to measure quality, outcomes and costs with appropriate metrics that are selected based on end-user needs.

  15. Evaluating software development characteristics: Assessment of software measures in the Software Engineering Laboratory. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Basili, V. R.

    1981-01-01

    Work on metrics is discussed. Factors that affect software quality are reviewed. Metrics are discussed in terms of criteria achievement, reliability, and fault tolerance. Subjective and objective metrics are distinguished. Product/process and cost/quality metrics are characterized and discussed.

  16. Pharmacy Dashboard: An Innovative Process for Pharmacy Workload and Productivity.

    PubMed

    Kinney, Ashley; Bui, Quyen; Hodding, Jane; Le, Jennifer

    2017-03-01

    Background: Innovative approaches to enhance pharmacy production, including LEAN systems and dashboards, continue to evolve in a cost- and safety-conscious health care environment. Furthermore, implementing and evaluating the effectiveness of these novel methods continues to be challenging for pharmacies. Objective: To describe a comprehensive, real-time pharmacy dashboard that incorporated LEAN methodologies and evaluate its utilization in an inpatient Central Intravenous Additives Services (CIVAS) pharmacy. Methods: Long Beach Memorial Hospital (462 adult beds) and Miller Children's and Women's Hospital of Long Beach (combined 324 beds) are tertiary not-for-profit, community-based hospitals that are served by one CIVAS pharmacy. Metrics to evaluate the effectiveness of CIVAS were developed and implemented on a dashboard in real-time from March 2013 to March 2014. Results: The metrics that were designed and implemented to evaluate the effectiveness of CIVAS were quality and value, financial resilience, and the department's people and culture. Using a dashboard that integrated these metrics, the accuracy of manufacturing defect-free products was ≥99.9%, indicating excellent quality and value of CIVAS. The metric for financial resilience demonstrated a cost savings of $78,000 annually within pharmacy by eliminating the outsourcing of products. People and value metrics on the dashboard focused on standard work, with an overall 94.6% compliance to the workflow. Conclusion: A unique dashboard that incorporated metrics to monitor 3 important areas was successfully implemented to improve the effectiveness of CIVAS pharmacy. These metrics helped pharmacy to monitor progress in real-time, allowing attainment of production goals and fostering continuous quality improvement through LEAN work.

  17. Pharmacy Dashboard: An Innovative Process for Pharmacy Workload and Productivity

    PubMed Central

    Bui, Quyen; Hodding, Jane; Le, Jennifer

    2017-01-01

    Background: Innovative approaches to enhance pharmacy production, including LEAN systems and dashboards, continue to evolve in a cost- and safety-conscious health care environment. Furthermore, implementing and evaluating the effectiveness of these novel methods continues to be challenging for pharmacies. Objective: To describe a comprehensive, real-time pharmacy dashboard that incorporated LEAN methodologies and evaluate its utilization in an inpatient Central Intravenous Additives Services (CIVAS) pharmacy. Methods: Long Beach Memorial Hospital (462 adult beds) and Miller Children's and Women's Hospital of Long Beach (combined 324 beds) are tertiary not-for-profit, community-based hospitals that are served by one CIVAS pharmacy. Metrics to evaluate the effectiveness of CIVAS were developed and implemented on a dashboard in real-time from March 2013 to March 2014. Results: The metrics that were designed and implemented to evaluate the effectiveness of CIVAS were quality and value, financial resilience, and the department's people and culture. Using a dashboard that integrated these metrics, the accuracy of manufacturing defect-free products was ≥99.9%, indicating excellent quality and value of CIVAS. The metric for financial resilience demonstrated a cost savings of $78,000 annually within pharmacy by eliminating the outsourcing of products. People and value metrics on the dashboard focused on standard work, with an overall 94.6% compliance to the workflow. Conclusion: A unique dashboard that incorporated metrics to monitor 3 important areas was successfully implemented to improve the effectiveness of CIVAS pharmacy. These metrics helped pharmacy to monitor progress in real-time, allowing attainment of production goals and fostering continuous quality improvement through LEAN work. PMID:28439134

  18. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    PubMed

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms-Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.

  19. Environmental Quality and Aquatic Invertebrate Metrics Relationships at Patagonian Wetlands Subjected to Livestock Grazing Pressures

    PubMed Central

    2015-01-01

    Livestock grazing can compromise the biotic integrity and health of wetlands, especially in remote areas like Patagonia, which provide habitat for several endemic terrestrial and aquatic species. Understanding the effects of these land use practices on invertebrate communities can help prevent the deterioration of wetlands and provide insights for restoration. In this contribution, we assessed the responses of 36 metrics based on the structural and functional attributes of invertebrates (130 taxa) at 30 Patagonian wetlands that were subject to different levels of livestock grazing intensity. These levels were categorized as low, medium and high based on eight features (livestock stock densities plus seven wetland measurements). Significant changes in environmental features were detected across the gradient of wetlands, mainly related to pH, conductivity, and nutrient values. Regardless of rainfall gradient, symptoms of eutrophication were remarkable at some highly disturbed sites. Seven invertebrate metrics consistently and accurately responded to livestock grazing on wetlands. All of them were negatively related to increased levels of grazing disturbance, with the number of insect families appearing as the most robust measure. A multivariate approach (RDA) revealed that invertebrate metrics were significantly affected by environmental variables related to water quality: in particular, pH, conductivity, dissolved oxygen, nutrient concentrations, and the richness and coverage of aquatic plants. Our results suggest that the seven aforementioned metrics could be used to assess ecological quality in the arid and semi-arid wetlands of Patagonia, helping to ensure the creation of protected areas and their associated ecological services. PMID:26448652

  20. A Linear Algebra Measure of Cluster Quality.

    ERIC Educational Resources Information Center

    Mather, Laura A.

    2000-01-01

    Discussion of models for information retrieval focuses on an application of linear algebra to text clustering, namely, a metric for measuring cluster quality based on the theory that cluster quality is proportional to the number of terms that are disjoint across the clusters. Explains term-document matrices and clustering algorithms. (Author/LRW)
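
    One way to operationalize the premise that cluster quality grows with the number of terms that are disjoint across clusters is to score the fraction of distinct terms confined to a single cluster. The sketch below is an illustrative reading of that premise under that assumption, not the metric defined in the article.

```python
def disjointness(clusters):
    """Fraction of distinct terms that occur in exactly one cluster.

    `clusters` is a list of term sets, one per document cluster. The premise
    (quality grows with terms disjoint across clusters) follows the abstract,
    but this particular normalisation is an assumption.
    """
    all_terms = set().union(*clusters)
    exclusive = [t for t in all_terms if sum(t in c for c in clusters) == 1]
    return len(exclusive) / len(all_terms)

good = [{"quark", "gluon", "hadron"}, {"sonnet", "metre", "rhyme"}]
poor = [{"metric", "quality", "cluster"}, {"metric", "quality", "graph"}]
print(disjointness(good), disjointness(poor))  # 1.0 vs 0.5
```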

  1. Homogeneity and EPR metrics for assessment of regular grids used in CW EPR powder simulations.

    PubMed

    Crăciun, Cora

    2014-08-01

    CW EPR powder spectra may be approximated numerically using a spherical grid and a Voronoi tessellation-based cubature. For a given spin system, the quality of simulated EPR spectra depends on the grid type, size, and orientation in the molecular frame. In previous work, the grids used in CW EPR powder simulations have been compared mainly from a geometric perspective. However, some grids with a similar homogeneity degree generate simulated spectra of different quality. This paper evaluates the grids from an EPR perspective, by defining two metrics depending on the spin system characteristics and the grid Voronoi tessellation. The first metric determines if the grid points are EPR-centred in their Voronoi cells, based on the resonance magnetic field variations inside these cells. The second metric verifies if the adjacent Voronoi cells of the tessellation are EPR-overlapping, by computing the common range of their resonance magnetic field intervals. Besides a series of well-known regular grids, the paper investigates a modified ZCW grid and a Fibonacci spherical code, which are new in the context of EPR simulations. For the investigated grids, the EPR metrics bring more information than the homogeneity quantities and are better related to the grids' EPR behaviour, for different spin system symmetries. The metrics' efficiency and limits are finally verified for grids generated from the initial ones, by using the original or magnetic field-constraint variants of the Spherical Centroidal Voronoi Tessellation method. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Evaluating Quality Metrics and Cost After Discharge: A Population-based Cohort Study of Value in Health Care Following Elective Major Vascular Surgery.

    PubMed

    de Mestral, Charles; Salata, Konrad; Hussain, Mohamad A; Kayssi, Ahmed; Al-Omran, Mohammed; Roche-Nagle, Graham

    2018-04-18

    Early readmission to hospital after surgery is an omnipresent quality metric across surgical fields. We sought to understand the relative importance of hospital readmission among all health services received after hospital discharge. The aim of this study was to characterize 30-day postdischarge cost and risk of an emergency department (ED) visit, readmission, or death after hospitalization for elective major vascular surgery. This is a population-based retrospective cohort study of patients who underwent elective major vascular surgery - carotid endarterectomy, EVAR, open AAA repair, bypass for lower extremity peripheral arterial disease - in Ontario, Canada, between 2004 and 2015. The outcomes of interest included quality metrics - ED visit, readmission, death - and cost to the Ministry of Health, within 30 days of discharge. Costs after discharge included those attributable to hospital readmission, ED visits, rehab, physician billing, outpatient nursing and allied health care, medications, interventions, and tests. Multivariable regression models characterized the association of pre-discharge characteristics with the above-mentioned postdischarge quality metrics and cost. A total of 30,752 patients were identified. Within 30 days of discharge, 2588 (8.4%) patients were readmitted to hospital and 13 patients died (0.04%). Another 4145 (13.5%) patients visited an ED without requiring admission. Across all patients, over half of 30-day postdischarge costs were attributable to outpatient care. Patients at an increased risk of an ED visit, readmission, or death within 30 days of discharge differed from those patients with relatively higher 30-day costs. Events occurring outside the hospital setting should be integral to the evaluation of quality of care and cost after hospitalization for major vascular surgery.

  3. Creating "Intelligent" Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, Noel; Taylor, Patrick

    2014-05-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.
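
    The weighting idea can be illustrated independently of any particular metric: given each model's projection and its error on a process-based metric, an unequal-weight ensemble mean can down-weight models with poor process fidelity. The inverse-error weighting scheme and all numbers below are assumptions for illustration, not the study's method or data.

```python
import numpy as np

# Toy metric-based ensemble weighting. Each "model" provides a projection and a
# mismatch score from a process-based metric (e.g. how well it reproduces an
# observed OLR-surface temperature relationship); values are invented.
projections = np.array([2.8, 3.4, 4.1, 2.2, 3.0])        # e.g. warming in K per model
metric_error = np.array([0.15, 0.40, 0.25, 0.60, 0.10])  # process-metric mismatch

equal_weight = projections.mean()

weights = 1.0 / metric_error        # better process fidelity -> larger weight
weights /= weights.sum()
weighted = np.sum(weights * projections)

print(f"equal-weighted mean : {equal_weight:.2f}")
print(f"metric-weighted mean: {weighted:.2f}")
```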

  4. Metrics for Radiologists in the Era of Value-based Health Care Delivery.

    PubMed

    Sarwar, Ammar; Boland, Giles; Monks, Annamarie; Kruskal, Jonathan B

    2015-01-01

    Accelerated by the Patient Protection and Affordable Care Act of 2010, health care delivery in the United States is poised to move from a model that rewards the volume of services provided to one that rewards the value provided by such services. Radiology department operations are currently managed by an array of metrics that assess various departmental missions, but many of these metrics do not measure value. Regulators and other stakeholders also influence what metrics are used to assess medical imaging. Metrics such as the Physician Quality Reporting System are increasingly being linked to financial penalties. In addition, metrics assessing radiology's contribution to cost or outcomes are currently lacking. In fact, radiology is widely viewed as a contributor to health care costs without an adequate understanding of its contribution to downstream cost savings or improvement in patient outcomes. The new value-based system of health care delivery and reimbursement will measure a provider's contribution to reducing costs and improving patient outcomes with the intention of making reimbursement commensurate with adherence to these metrics. The authors describe existing metrics and their application to the practice of radiology, discuss the so-called value equation, and suggest possible metrics that will be useful for demonstrating the value of radiologists' services to their patients. (©)RSNA, 2015.

  5. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline are compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible. PMID:23637895

  6. Simultaneous analysis and quality assurance for diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Asman, Andrew J; Esparza, Michael L; Burns, Scott S; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E; Landman, Bennett A

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline are compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible.

  7. A New Look at Data Usage by Using Metadata Attributes as Indicators of Data Quality

    NASA Technical Reports Server (NTRS)

    Won, Young-In; Wanchoo, Lalit; Behnke, Jeanne

    2016-01-01

    This study reviews the key metrics (users, distributed volume, and files) in multiple ways to gain an understanding of the significance of the metadata. Characterizing the usability of data by key metadata elements, such as discipline and study area, will assist in understanding how user needs have evolved over time. The data usage pattern based on product level provides insight into the level of data quality. In addition, the data metrics by various services, such as the Open-source Project for a Network Data Access Protocol (OPeNDAP) and subsets, address how these services have extended the usage of data. Overall, this study presents the usage of data and metadata by metrics analyses, which may assist data centers in better supporting the needs of the users.

  8. Defining quality metrics and improving safety and outcome in allergy care.

    PubMed

    Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J

    2014-04-01

    The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompasses the measurement of factors associated with good patient-centered care. These factors have yet to be defined in the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability between 6 sites. Tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3), and dosing errors (n = 2). There were 7 episodes of anaphylaxis of which 2 were secondary to dosing errors for a rate of 0.01% or 1 in every 10,000 injection visits/year. Site visits showed that 86% of key safety measures were followed. Analysis of nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely to use quality metrics (p = 0.021) and perform systems reviews and audits in comparison to private practices (p = 0.005). Quality metrics in allergy delivery can help improve safety and quality care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.

  9. Hyperspectral face recognition using improved inter-channel alignment based on qualitative prediction models.

    PubMed

    Cho, Woon; Jang, Jinbeum; Koschan, Andreas; Abidi, Mongi A; Paik, Joonki

    2016-11-28

    A fundamental limitation of hyperspectral imaging is the inter-band misalignment correlated with subject motion during data acquisition. One way of resolving this problem is to assess the alignment quality of hyperspectral image cubes derived from the state-of-the-art alignment methods. In this paper, we present an automatic selection framework for the optimal alignment method to improve the performance of face recognition. Specifically, we develop two qualitative prediction models based on: 1) a principal curvature map for evaluating the similarity index between sequential target bands and a reference band in the hyperspectral image cube as a full-reference metric; and 2) the cumulative probability of target colors in the HSV color space for evaluating the alignment index of a single sRGB image rendered using all of the bands of the hyperspectral image cube as a no-reference metric. We verify the efficacy of the proposed metrics on a new large-scale database, demonstrating a higher prediction accuracy in determining improved alignment compared to two full-reference and five no-reference image quality metrics. We also validate the ability of the proposed framework to improve hyperspectral face recognition.
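
    As a simplified stand-in for the paper's full-reference principal-curvature similarity, the sketch below scores inter-band alignment of a hyperspectral cube with plain normalized cross-correlation against a reference band; lower scores would suggest more motion-induced misalignment. The function name and the use of cross-correlation are assumptions for illustration, not the published metric.

```python
import numpy as np

def band_alignment_score(cube, reference_index=0):
    """cube : (n_bands, H, W) hyperspectral image cube.

    Returns the mean normalized cross-correlation between each band and the
    reference band. This is a simplified surrogate for a full-reference
    alignment-quality metric, not the published principal-curvature index.
    """
    ref = cube[reference_index].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    scores = []
    for band in cube:
        b = band.astype(float)
        b = (b - b.mean()) / (b.std() + 1e-9)
        scores.append(float((ref * b).mean()))     # correlation of standardized bands
    return float(np.mean(scores))
```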

  10. The use of bibliometrics to measure research quality in UK higher education institutions.

    PubMed

    Adams, Jonathan

    2009-01-01

    Research assessment in the UK has evolved over a quarter of a century from a loosely structured, peer-review based process to one with a well understood data portfolio and assessment methodology. After 2008, the assessment process will shift again, to the use of indicators based largely on publication and citation data. These indicators will in part follow the format introduced in 2008, with a profiling of assessment outcomes at national and international levels. However, the shift from peer assessment to a quantitative methodology raises critical issues about which metrics are appropriate and informative and how such metrics should be managed to produce weighting factors for funding formulae. The link between publication metrics and other perceptions of research quality needs to be thoroughly tested and reviewed, and may be variable between disciplines. Many of the indicators that drop out of publication data are poorly linked to quality and should not be used at all. There are also issues about which publications are the correct base for assessment, which staff should be included in a review, how subjects should be structured and how the citation data should be normalised to account for discipline-dependent variables. Finally, it is vital to consider the effect that any assessment process will have on the behaviour of those to be assessed.

  11. Fish community-based measures of estuarine ecological quality and pressure-impact relationships

    NASA Astrophysics Data System (ADS)

    Fonseca, Vanessa F.; Vasconcelos, Rita P.; Gamito, Rita; Pasquaud, Stéphanie; Gonçalves, Catarina I.; Costa, José L.; Costa, Maria J.; Cabral, Henrique N.

    2013-12-01

    Community-based responses of fish fauna to anthropogenic pressures have been extensively used to assess the ecological quality of estuarine ecosystems. Several methodologies have been developed recently combining metrics reflecting community structure and function. A fish community facing significant environmental disturbances will be characterized by a simplified structure, with lower diversity and complexity. However, estuaries are naturally dynamic ecosystems exposed to numerous human pressures, making it difficult to distinguish between natural and anthropogenic-induced changes to the biological community. In the present work, the variability of several fish metrics was assessed in relation to different pressures in estuarine sites. The response of a multimetric index (Estuarine Fish Assessment Index) was also analysed. Overall, fish metrics and the multimetric index signalled anthropogenic stress, particularly environmental chemical pollution. The fish assemblage associated with this type of pressure was characterized by lower species diversity, lower number of functional guilds, lower abundance of marine migrants and of piscivorous individuals, and higher abundance of estuarine resident species. A decreased ecological quality status, based on the EFAI, was also determined for sites associated with this pressure group. Ultimately, the definition of each pressure group favoured a stressor-specific analysis, evidencing pressure patterns and accounting for multiple factors in a highly dynamic environment.

  12. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems.

    PubMed

    Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A

    2015-01-01

    Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metric(s), such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of disks of different sizes and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
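
    A channelized Hotelling observer reduces each image to a handful of channel responses and computes the detectability index from the class-mean difference and the pooled channel covariance. The sketch below is a generic CHO with caller-supplied channel templates; the specific channel set, disk sizes, and data handling used in the study are not reproduced here.

```python
import numpy as np

def cho_detectability(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer detectability index (DI).

    signal_imgs, noise_imgs : (n_images, n_pixels) flattened image arrays
    channels                : (n_pixels, n_channels) channel templates
                              (e.g., Gabor or Laguerre-Gauss; assumed supplied)
    """
    v_sig = signal_imgs @ channels              # channel responses, signal present
    v_noi = noise_imgs @ channels               # channel responses, signal absent
    dv = v_sig.mean(axis=0) - v_noi.mean(axis=0)
    # pooled intra-class covariance of the channel responses
    S = 0.5 * (np.cov(v_sig, rowvar=False) + np.cov(v_noi, rowvar=False))
    # DI = sqrt(dv^T S^{-1} dv)
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```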

  13. The use of vision-based image quality metrics to predict low-light performance of camera phones

    NASA Astrophysics Data System (ADS)

    Hultgren, B.; Hertel, D.

    2010-01-01

    Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.

  14. Image sharpness assessment based on wavelet energy of edge area

    NASA Astrophysics Data System (ADS)

    Li, Jin; Zhang, Hong; Zhang, Lei; Yang, Yifan; He, Lei; Sun, Mingui

    2018-04-01

    Image quality assessment is needed in multiple image processing areas, and blur is one of the key causes of image deterioration. Although effective full-reference image quality assessment metrics have been proposed in the past few years, no-reference methods remain an area of active research. To address this problem, this paper proposes a no-reference sharpness assessment method based on the wavelet transform that focuses on the edge areas of an image. Based on two simple characteristics of the human visual system, weights are introduced to calculate the weighted log-energy of each wavelet subband. The final score is given by the ratio of high-frequency energy to the total energy. The algorithm is tested on multiple databases. Compared with several state-of-the-art metrics, the proposed algorithm achieves better performance with lower runtime.
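
    A minimal sketch of the energy-ratio idea using PyWavelets: the score is the share of detail-subband log-energy in the total log-energy. The published method additionally restricts the computation to edge regions and applies vision-based subband weights, which are omitted here.

```python
import numpy as np
import pywt

def wavelet_sharpness(image, wavelet="db2", levels=3):
    """No-reference sharpness score: high-frequency log-energy / total log-energy.

    image : 2-D grayscale array. Higher scores indicate sharper images.
    Simplified sketch; the published method further restricts computation
    to edge areas and applies human-vision-based subband weights.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    approx = coeffs[0]                                   # low-frequency approximation
    low_energy = np.log1p(np.sum(approx.astype(float) ** 2))
    high_energy = sum(
        np.log1p(np.sum(np.asarray(d, dtype=float) ** 2))
        for level in coeffs[1:] for d in level           # detail subbands per level
    )
    return high_energy / (high_energy + low_energy)
```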

  15. Physician-Pharmacist collaboration in a pay for performance healthcare environment.

    PubMed

    Farley, T M; Izakovic, M

    2015-01-01

    Healthcare is becoming more complex and costly in both European (Slovak) and American models. Healthcare in the United States (U.S.) is undergoing a particularly dramatic change. Physician and hospital reimbursement are becoming less procedure-focused and increasingly outcome-focused. Efforts at Mercy Hospital have shown promise in terms of collaborative team-based care improving performance on glucose control outcome metrics, linked to reimbursement. Our performance on the Centers for Medicare and Medicaid Services (CMS) post-operative glucose control metric for cardiac surgery patients increased from a 63.6% pass rate to a 95.1% pass rate after implementing interventions involving physician-pharmacist team-based care. Having a multidisciplinary team that is able to adapt quickly to changing expectations in the healthcare environment has aided our institution. As healthcare becomes increasingly saturated with technology, data, and quality metrics, collaborative efforts resulting in increased quality and physician efficiency are desirable. Multidisciplinary collaboration (including physician-pharmacist collaboration) appears to be a viable route to improved performance in an outcome-based healthcare system (Fig. 2, Ref. 12).

  16. Development of quality metrics for ambulatory pediatric cardiology: Infection prevention.

    PubMed

    Johnson, Jonathan N; Barrett, Cindy S; Franklin, Wayne H; Graham, Eric M; Halnon, Nancy J; Hattendorf, Brandy A; Krawczeski, Catherine D; McGovern, James J; O'Connor, Matthew J; Schultz, Amy H; Vinocur, Jeffrey M; Chowdhury, Devyani; Anderson, Jeffrey B

    2017-12-01

    In 2012, the American College of Cardiology's (ACC) Adult Congenital and Pediatric Cardiology Council established a program to develop quality metrics to guide ambulatory practices for pediatric cardiology. The council chose five areas on which to focus their efforts: chest pain, Kawasaki Disease, tetralogy of Fallot, transposition of the great arteries after arterial switch, and infection prevention. Here, we sought to describe the process, evaluation, and results of the Infection Prevention Committee's metric design process. The infection prevention metrics team consisted of 12 members from 11 institutions in North America. The group agreed to work on specific infection prevention topics including antibiotic prophylaxis for endocarditis, rheumatic fever, and asplenia/hyposplenism; influenza vaccination and respiratory syncytial virus prophylaxis (palivizumab); preoperative methods to reduce intraoperative infections; vaccinations after cardiopulmonary bypass; hand hygiene; and testing to identify splenic function in patients with heterotaxy. An extensive literature review was performed. When available, previously published guidelines were used fully in determining metrics. The committee chose eight metrics to submit to the ACC Quality Metric Expert Panel for review. Ultimately, metrics regarding hand hygiene and influenza vaccination recommendation for patients did not pass the RAND analysis. Both endocarditis prophylaxis metrics and the RSV/palivizumab metric passed the RAND analysis but fell out during the open comment period. Three metrics passed all analyses, including those for antibiotic prophylaxis in patients with heterotaxy/asplenia, for influenza vaccination compliance in healthcare personnel, and for adherence to recommended regimens of secondary prevention of rheumatic fever. The lack of convincing data to guide quality improvement initiatives in pediatric cardiology is widespread, particularly in infection prevention. Despite this, three metrics were able to be developed for use in the ACC's quality efforts for ambulatory practice. © 2017 Wiley Periodicals, Inc.

  17. HealthTrust: A Social Network Approach for Retrieving Online Health Videos

    PubMed Central

    Karlsen, Randi; Melton, Genevieve B

    2012-01-01

    Background Social media are becoming mainstream in the health domain. Despite the large volume of accurate and trustworthy health information available on social media platforms, finding good-quality health information can be difficult. Misleading health information can often be popular (eg, antivaccination videos) and therefore highly rated by general search engines. We believe that community wisdom about the quality of health information can be harnessed to help create tools for retrieving good-quality social media content. Objectives To explore approaches for extracting metrics about authoritativeness in online health communities and how these metrics positively correlate with the quality of the content. Methods We designed a metric, called HealthTrust, that estimates the trustworthiness of social media content (eg, blog posts or videos) in a health community. The HealthTrust metric calculates reputation in an online health community based on link analysis. We used the metric to retrieve YouTube videos and channels about diabetes. In two different experiments, health consumers provided 427 ratings of 17 videos and professionals gave 162 ratings of 23 videos. In addition, two professionals reviewed 30 diabetes channels. Results HealthTrust may be used for retrieving online videos on diabetes, since it performed better than YouTube Search in most cases. Overall, of 20 potential channels, HealthTrust’s filtering allowed only 3 bad channels (15%) versus 8 (40%) on the YouTube list. Misleading and graphic videos (eg, featuring amputations) were more commonly found by YouTube Search than by searches based on HealthTrust. However, some videos from trusted sources had low HealthTrust scores, mostly from general health content providers, and therefore not highly connected in the diabetes community. When comparing video ratings from our reviewers, we found that HealthTrust achieved a positive and statistically significant correlation with professionals (Pearson r(10) = .65, P = .02) and a trend toward significance with health consumers (r(7) = .65, P = .06) with videos on hemoglobin A1c, but it did not perform as well with diabetic foot videos. Conclusions The trust-based metric HealthTrust showed promising results when used to retrieve diabetes content from YouTube. Our research indicates that social network analysis may be used to identify trustworthy social media in health communities. PMID:22356723
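
    The abstract describes HealthTrust as a link-analysis reputation score but does not give the exact formulation; purely to illustrate how link analysis can rank community channels, the sketch below applies PageRank to a small hypothetical graph of channel links using networkx.

```python
import networkx as nx

# Hypothetical subscription/related-channel links within a diabetes community.
edges = [
    ("diabetes_edu", "endocrine_clinic"),
    ("patient_blog", "diabetes_edu"),
    ("patient_blog", "endocrine_clinic"),
    ("spam_channel", "spam_channel_2"),
]
G = nx.DiGraph(edges)

# PageRank-style reputation: channels linked to by well-connected community
# members score higher. This stands in for, and is not, the HealthTrust formula.
scores = nx.pagerank(G, alpha=0.85)
for channel, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{channel}: {score:.3f}")
```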

  18. Colonoscopy Quality: Metrics and Implementation

    PubMed Central

    Calderwood, Audrey H.; Jacobson, Brian C.

    2013-01-01

    Synopsis Colonoscopy is an excellent area for quality improvement because it is high volume, has significant associated risk and expense, and there is evidence that variability in its performance affects outcomes. The best endpoint for validation of quality metrics in colonoscopy is colorectal cancer incidence and mortality, but because of feasibility issues, a more readily accessible metric is the adenoma detection rate (ADR). Fourteen quality metrics were proposed by the joint American Society of Gastrointestinal Endoscopy/American College of Gastroenterology Task Force on “Quality Indicators for Colonoscopy” in 2006, which are described in further detail below. Use of electronic health records and quality-oriented registries will facilitate quality measurement and reporting. Unlike traditional clinical research, implementation of quality improvement initiatives involves rapid assessments and changes on an iterative basis, and can be done at the individual, group, or facility level. PMID:23931862
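
    The adenoma detection rate mentioned above has a straightforward operational definition (screening colonoscopies with at least one adenoma found, divided by all screening colonoscopies); a tiny sketch with made-up exam records:

```python
def adenoma_detection_rate(exams):
    """exams: iterable of dicts with boolean 'screening' and 'adenoma_found' keys."""
    screening = [e for e in exams if e["screening"]]
    if not screening:
        return None
    return sum(e["adenoma_found"] for e in screening) / len(screening)

# Illustrative records only
exams = [
    {"screening": True, "adenoma_found": True},
    {"screening": True, "adenoma_found": False},
    {"screening": False, "adenoma_found": True},   # diagnostic exam, excluded
    {"screening": True, "adenoma_found": True},
]
print(f"ADR = {adenoma_detection_rate(exams):.0%}")   # 67%
```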

  19. Piloted Simulation Study of Rudder Pedal Force/Feel Characteristics

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    2007-01-01

    A piloted, fixed-base simulation was conducted in 2006 to determine optimum rudder pedal force/feel characteristics for transport aircraft. As part of this research, an evaluation of four metrics for assessing rudder pedal characteristics previously presented in the literature was conducted. This evaluation was based upon the numerical handling qualities ratings assigned to a variety of pedal force/feel systems used in the simulation study. It is shown that, with the inclusion of a fifth metric, most of the rudder pedal force/feel system designs that were rated poorly by the evaluation pilots could be identified. It is suggested that these metrics form the basis of a certification requirement for transport aircraft.

  20. Relating landscape characteristics to non-point source pollution in mine waste-located watersheds using geospatial techniques.

    PubMed

    Xiao, Huaguo; Ji, Wei

    2007-01-01

    Landscape characteristics of a watershed are important variables that influence surface water quality. Understanding the relationship between these variables and surface water quality is critical in predicting pollution potential and developing watershed management practices to eliminate or reduce pollution risk. To understand the impacts of landscape characteristics on water quality in mine waste-located watersheds, we conducted a case study in the Tri-State Mining District, which is located at the junction of three states (Missouri, Kansas, and Oklahoma). Severe heavy metal pollution exists in that area resulting from historical mining activities. We characterized land use/land cover over the last three decades by classifying historical multi-temporal Landsat imagery. Landscape metrics such as proportion, edge density, and contagion were calculated based on the classified imagery. In-stream water quality data over three decades were collected, including lead, zinc, iron, cadmium, aluminum, and conductivity, which were used as key water quality indicators. Statistical analyses were performed to quantify the relationship between landscape metrics and surface water quality. Results showed that landscape characteristics in mine waste-located watersheds could account for as much as 77% of the variation of water quality indicators. A single landscape metric alone, such as the proportion of mine waste area, could be used to predict surface water quality, but its predictive power is limited, usually accounting for less than 60% of the variance of water quality indicators.
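
    A sketch of the kind of analysis described: derive a simple landscape metric (here, the proportion of pixels classified as mine waste) and regress an in-stream water quality indicator on it to obtain the variance explained. The class code, metric choice, and numbers below are illustrative placeholders, not the study's data.

```python
import numpy as np

def mine_waste_proportion(classified_raster, waste_class=3):
    """Proportion metric: fraction of watershed pixels labelled as mine waste."""
    return float(np.mean(classified_raster == waste_class))

# Hypothetical per-watershed values; in practice `proportion` would come from
# mine_waste_proportion() applied to each watershed's classified raster.
proportion = np.array([0.02, 0.10, 0.18, 0.25, 0.33, 0.41])
zinc_mg_l  = np.array([0.05, 0.21, 0.35, 0.48, 0.70, 0.85])   # water quality indicator

slope, intercept = np.polyfit(proportion, zinc_mg_l, 1)
pred = slope * proportion + intercept
r_squared = 1 - np.sum((zinc_mg_l - pred) ** 2) / np.sum((zinc_mg_l - zinc_mg_l.mean()) ** 2)
print(f"R^2 = {r_squared:.2f}")   # share of water-quality variance explained by the metric
```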

  1. Impact of landscape disturbance on the quality of terrestrial sediment carbon in temperate streams

    NASA Astrophysics Data System (ADS)

    Fox, James F.; Ford, William I.

    2016-09-01

    Recent studies have shown that fluvial networks are supersaturated with respect to carbon dioxide and that this excess carbon dioxide is at least partially the result of turnover of sediment organic carbon ranging in age from years to millennia. Currently, there is a need for more highly resolved studies at stream and river scales that enable estimates of terrestrial carbon turnover within fluvial networks. Our objective was to develop a new isotope-based metric to estimate the quality of sediment organic carbon delivered to temperate streams and to use the new metric to estimate carbon quality across landscape disturbance gradients. Carbon quality is defined to be consistent with in-stream turnover, and our metric is used to measure the labile or recalcitrant nature of the terrestrially derived carbon within streams. Our hypothesis was that intensively disturbed landscapes would tend to produce low-quality carbon because deep, recalcitrant soil carbon would be eroded and transported to the fluvial system, while moderately disturbed or undisturbed landscapes would tend to produce higher-quality carbon from well-developed surface soils and litter. The hypothesis was tested by applying the new carbon quality metric to 15 temperate streams with a wide range of landscape disturbance levels. We find that our hypothesis, premised on an indirect relationship between the extent of landscape disturbance and the quality of sediment carbon in streams, holds true for moderate and high disturbances but not for undisturbed forests. We explain the results based on the connectivity, or disconnectivity, between terrestrial carbon sources and pathways for sediment transport. While pathways are typically unlimited for disturbed landscapes, the undisturbed forests have disconnectivity between labile carbon of the forest floor and the stream corridor. Only in the case when trees fell into the stream corridor due to severe ice storms did the quality of sediment carbon increase in the streams. We argue that as scientists continue to estimate the in-stream turnover of terrestrially derived carbon in fluvial carbon budgets, the assumption of pathway connectivity between carbon sources and the stream should be justified.

  2. MO-A-16A-01: QA Procedures and Metrics: In Search of QA Usability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sathiaseelan, V; Thomadsen, B

    Radiation therapy has undergone considerable changes in the past two decades with a surge of new technology and treatment delivery methods. The complexity of radiation therapy treatments has increased and there has been increased awareness and publicity about the associated risks. In response, there has been a proliferation of guidelines for medical physicists to adopt to ensure that treatments are delivered safely. Task Group recommendations are copious, and clinical physicists' hours are longer, stretched to various degrees between site planning and management, IT support, physics QA, and treatment planning responsibilities. Radiation oncology has many quality control practices in place to ensure the delivery of high-quality, safe treatments. Incident reporting systems have been developed to collect statistics about near-miss events at many radiation oncology centers. However, tools are lacking to assess the impact of these various control measures. A recent effort to address this shortcoming is the work of Ford et al (2012), who published a methodology for quality control quantification to measure the effectiveness of safety barriers. Over 4000 near-miss incidents reported from 2 academic radiation oncology clinics were analyzed using quality control quantification, and a profile of the most effective quality control measures (metrics) was identified. There is a critical need to identify a QA metric to help busy clinical physicists focus their limited time and resources most effectively in order to minimize or eliminate errors in the radiation treatment delivery process. In this symposium the usefulness of workflows and QA metrics to assure safe and high-quality patient care will be explored. Two presentations will be given: (1) Quality Metrics and Risk Management with High Risk Radiation Oncology Procedures; and (2) Strategies and Metrics for Quality Management in the TG-100 Era. Learning Objectives: Provide an overview of, and the need for, QA usability metrics, including different cultures/practices affecting the effectiveness of methods and metrics. Show examples of quality assurance workflows, such as statistical process control, that monitor the treatment planning and delivery process to identify errors. Learn to identify and prioritize risks and QA procedures in radiation oncology. Try to answer the question: can a quality assurance program aided by quality assurance metrics help minimize errors and ensure safe treatment delivery, and should such metrics be institution specific?

  3. Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, N. C.; Taylor, P. C.

    2014-12-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables (e.g., outgoing longwave radiation and surface temperature). Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. The weighted ensemble for this metric indicates lower simulated precipitation (up to 0.7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models which simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics for advising better methods of ensemble averaging and to create better climate predictions.
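
    A minimal sketch of metric-weighted ensemble averaging, under the assumption that each model already has a non-negative skill score from some evaluation metric; the weights are normalized and the weighted mean is compared with the equal-weighted mean. The skill values below are placeholders.

```python
import numpy as np

def ensemble_means(projections, skill):
    """projections : (n_models, ...) array of model projections (fields or scalars)
       skill       : (n_models,) non-negative performance scores, e.g. agreement of
                     a simulated distribution with observations (metric is study-specific)."""
    weights = np.asarray(skill, dtype=float)
    weights = weights / weights.sum()                       # normalize to sum to 1
    equal = projections.mean(axis=0)                        # equal-weighted ensemble mean
    weighted = np.tensordot(weights, projections, axes=1)   # skill-weighted ensemble mean
    return equal, weighted

# Toy example: 5 models projecting tropical precipitation change (mm/day)
proj = np.array([2.1, 2.5, 1.8, 3.0, 2.2])
skill = np.array([0.9, 0.4, 0.8, 0.2, 0.7])
eq, wt = ensemble_means(proj, skill)
print(f"equal-weighted {eq:.2f}, skill-weighted {wt:.2f}")
```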

  4. Questionable validity of the catheter-associated urinary tract infection metric used for value-based purchasing.

    PubMed

    Calderon, Lindsay E; Kavanagh, Kevin T; Rice, Mara K

    2015-10-01

    Catheter-associated urinary tract infections (CAUTIs) occur in 290,000 US hospital patients annually, with an estimated cost of $290 million. Two different measurement systems are being used to track the US health care system's performance in lowering the rate of CAUTIs. Since 2010, the Agency for Healthcare Research and Quality (AHRQ) metric has shown a 28.2% decrease in CAUTI, whereas the Centers for Disease Control and Prevention metric has shown a 3%-6% increase in CAUTI since 2009. Differences in data acquisition and the definition of the denominator may explain this discrepancy. The AHRQ metric analyzes chart-audited data and reflects both catheter use and care. The Centers for Disease Control and Prevention metric analyzes self-reported data and primarily reflects catheter care. Because analysis of the AHRQ metric showed a progressive change in performance over time and the scientific literature supports the importance of catheter use in the prevention of CAUTI, it is suggested that risk-adjusted catheter-use data be incorporated into metrics that are used for determining facility performance and for value-based purchasing initiatives. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  5. Robust and transferable quantification of NMR spectral quality using IROC analysis

    NASA Astrophysics Data System (ADS)

    Zambrello, Matthew A.; Maciejewski, Mark W.; Schuyler, Adam D.; Weatherby, Gerard; Hoch, Jeffrey C.

    2017-12-01

    Non-Fourier methods are increasingly utilized in NMR spectroscopy because of their ability to handle nonuniformly-sampled data. However, non-Fourier methods present unique challenges due to their nonlinearity, which can produce nonrandom noise and render conventional metrics for spectral quality such as signal-to-noise ratio unreliable. The lack of robust and transferable metrics (i.e. applicable to methods exhibiting different nonlinearities) has hampered comparison of non-Fourier methods and nonuniform sampling schemes, preventing the identification of best practices. We describe a novel method, in situ receiver operating characteristic analysis (IROC), for characterizing spectral quality based on the Receiver Operating Characteristic curve. IROC utilizes synthetic signals added to empirical data as "ground truth", and provides several robust scalar-valued metrics for spectral quality. This approach avoids problems posed by nonlinear spectral estimates, and provides a versatile quantitative means of characterizing many aspects of spectral quality. We demonstrate applications to parameter optimization in Fourier and non-Fourier spectral estimation, critical comparison of different methods for spectrum analysis, and optimization of nonuniform sampling schemes. The approach will accelerate the discovery of optimal approaches to nonuniform sampling experiment design and non-Fourier spectrum analysis for multidimensional NMR.
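
    The core of the IROC idea is that synthetic signals injected at known positions serve as ground truth, so detection can be scored with a receiver operating characteristic regardless of how nonlinear the spectral estimate is. The sketch below scores a 1-D stand-in spectrum this way; the peak positions, amplitudes, and the use of raw intensity as the detection statistic are illustrative assumptions, not the published procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

n_points = 1024
spectrum = rng.normal(scale=1.0, size=n_points)          # stand-in for an empirical spectrum

# Inject synthetic "ground truth" peaks at known positions
true_positions = rng.choice(n_points, size=20, replace=False)
spectrum[true_positions] += 5.0

# Score every point by its intensity and compare against the known labels
labels = np.zeros(n_points, dtype=int)
labels[true_positions] = 1
auc = roc_auc_score(labels, spectrum)
print(f"ROC AUC (spectral quality surrogate) = {auc:.3f}")
```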

  6. Guiding principles and checklist for population-based quality metrics.

    PubMed

    Krishnan, Mahesh; Brunelli, Steven M; Maddux, Franklin W; Parker, Thomas F; Johnson, Douglas; Nissenson, Allen R; Collins, Allan; Lacson, Eduardo

    2014-06-06

    The Centers for Medicare and Medicaid Services oversees the ESRD Quality Incentive Program to ensure that the highest quality of health care is provided by outpatient dialysis facilities that treat patients with ESRD. To that end, the Centers for Medicare and Medicaid Services uses clinical performance measures to evaluate quality of care under a pay-for-performance or value-based purchasing model. Now more than ever, the ESRD therapeutic area serves as the vanguard of health care delivery. By translating medical evidence into clinical performance measures, the ESRD Prospective Payment System became the first disease-specific sector using the pay-for-performance model. A major challenge for the creation and implementation of clinical performance measures is the adjustments that are necessary to transition from taking care of individual patients to managing the care of patient populations. The National Quality Forum and others have developed effective and appropriate population-based clinical performance measures (quality metrics) that can be aggregated at the physician, hospital, dialysis facility, nursing home, or surgery center level. Clinical performance measures considered for endorsement by the National Quality Forum are evaluated using five key criteria: evidence, performance gap, and priority (impact); reliability; validity; feasibility; and usability and use. We have developed a checklist of special considerations for clinical performance measure development according to these National Quality Forum criteria. Although the checklist is focused on ESRD, it could also have broad application to chronic disease states, where health care delivery organizations seek to enhance quality, safety, and efficiency of their services. Clinical performance measures are likely to become the norm for tracking performance for health care insurers. Thus, it is critical that the methodologies used to develop such metrics serve the payer and the provider and, most importantly, reflect what represents the best care to improve patient outcomes. Copyright © 2014 by the American Society of Nephrology.

  7. Spatial Patterns in Water Quality Changes during Dredging in Tropical Environments

    PubMed Central

    Fisher, Rebecca; Stark, Clair; Ridd, Peter; Jones, Ross

    2015-01-01

    Dredging poses a potential risk to tropical ecosystems, especially in turbidity-sensitive environments such as coral reefs, filter feeding communities and seagrasses. There is little detailed observational time-series data on the spatial effects of dredging on turbidity and light, and defining likely footprints is a fundamental task for impact prediction, the EIA process, and for designing monitoring projects when dredging is underway. It is also important for public perception of risks associated with dredging. Using an extensive collection of in situ water quality data (73 sites) from three recent large-scale capital dredging programs in Australia, which included extensive pre-dredging baseline data, we describe relationships with distance from dredging for a range of water quality metrics. Using a criterion that defines a zone of potential impact as where the water quality value exceeds the 80th percentile of the baseline value for turbidity-based metrics, or falls below the 20th percentile for the light-based metrics, effects were observed predominantly up to three km from dredging, but in one instance up to nearly 20 km. This upper (~20 km) limit was unusual and caused by a local oceanographic feature of consistent unidirectional flow during the project. Water quality loggers were located along the principal axis of this flow (from 200 m to 30 km) and provided the opportunity to develop a matrix of exposure based on running means calculated across multiple time periods (from hours to one month) and distance from the dredging, and summarized across a broad range of percentile values. This information can be used to more formally develop water quality thresholds for benthic organisms, such as corals, filter-feeders (e.g. sponges) and seagrasses in future laboratory- and field-based studies using environmentally realistic and relevant exposure scenarios, that may be used to further refine distance-based analyses of impact, potentially further reducing the size of the dredging footprint. PMID:26630575

  8. Spatial Patterns in Water Quality Changes during Dredging in Tropical Environments.

    PubMed

    Fisher, Rebecca; Stark, Clair; Ridd, Peter; Jones, Ross

    2015-01-01

    Dredging poses a potential risk to tropical ecosystems, especially in turbidity-sensitive environments such as coral reefs, filter feeding communities and seagrasses. There is little detailed observational time-series data on the spatial effects of dredging on turbidity and light, and defining likely footprints is a fundamental task for impact prediction, the EIA process, and for designing monitoring projects when dredging is underway. It is also important for public perception of risks associated with dredging. Using an extensive collection of in situ water quality data (73 sites) from three recent large-scale capital dredging programs in Australia, which included extensive pre-dredging baseline data, we describe relationships with distance from dredging for a range of water quality metrics. Using a criterion that defines a zone of potential impact as where the water quality value exceeds the 80th percentile of the baseline value for turbidity-based metrics, or falls below the 20th percentile for the light-based metrics, effects were observed predominantly up to three km from dredging, but in one instance up to nearly 20 km. This upper (~20 km) limit was unusual and caused by a local oceanographic feature of consistent unidirectional flow during the project. Water quality loggers were located along the principal axis of this flow (from 200 m to 30 km) and provided the opportunity to develop a matrix of exposure based on running means calculated across multiple time periods (from hours to one month) and distance from the dredging, and summarized across a broad range of percentile values. This information can be used to more formally develop water quality thresholds for benthic organisms, such as corals, filter-feeders (e.g. sponges) and seagrasses in future laboratory- and field-based studies using environmentally realistic and relevant exposure scenarios, that may be used to further refine distance-based analyses of impact, potentially further reducing the size of the dredging footprint.
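
    A sketch of the exposure-matrix construction described above: compute running means of turbidity over several window lengths and, for each window, report how often the running mean exceeds the 80th percentile of the baseline period. The window lengths and toy data are placeholders, not values from the study.

```python
import numpy as np
import pandas as pd

def exposure_matrix(turbidity, baseline, windows=("6H", "1D", "7D", "30D")):
    """turbidity : pandas Series of NTU values indexed by timestamp (during dredging)
       baseline  : pandas Series from the pre-dredging baseline period

    Returns, per running-mean window, the fraction of time the running mean
    exceeds the 80th percentile of the baseline (the impact criterion above)."""
    threshold = baseline.quantile(0.80)
    rows = {}
    for w in windows:
        running = turbidity.rolling(w).mean()
        rows[w] = float((running > threshold).mean())
    return pd.Series(rows, name="fraction_above_p80")

# Toy data: 60 days of hourly turbidity, elevated relative to a quieter baseline
idx = pd.date_range("2014-01-01", periods=24 * 60, freq="H")
rng = np.random.default_rng(2)
baseline = pd.Series(rng.gamma(2.0, 1.0, size=len(idx)), index=idx)
dredging = pd.Series(rng.gamma(3.0, 1.5, size=len(idx)), index=idx)
print(exposure_matrix(dredging, baseline))
```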

  9. Evaluating the Good Ontology Design Guideline (GoodOD) with the Ontology Quality Requirements and Evaluation Method and Metrics (OQuaRE)

    PubMed Central

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    Objective To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. Background In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. Methods In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Results Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. Conclusion The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies. PMID:25148262

  10. Evaluating the Good Ontology Design Guideline (GoodOD) with the ontology quality requirements and evaluation method and metrics (OQuaRE).

    PubMed

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies.

  11. Evaluation of image quality metrics for the prediction of subjective best focus.

    PubMed

    Kilintari, Marina; Pallikaris, Aristophanis; Tsiklis, Nikolaos; Ginis, Harilaos S

    2010-03-01

    Seven existing and three new image quality metrics were evaluated in terms of their effectiveness in predicting subjective cycloplegic refraction. Monochromatic wavefront aberrations (WA) were measured in 70 eyes using a Shack-Hartmann based device (Complete Ophthalmic Analysis System; Wavefront Sciences). Subjective cycloplegic spherocylindrical correction was obtained using a standard manifest refraction procedure. The dioptric amount required to optimize each metric was calculated and compared with the subjective refraction result. Metrics included monochromatic and polychromatic variants, as well as variants taking into consideration the Stiles and Crawford effect (SCE). WA measurements were performed using infrared light and converted to visible before all calculations. The mean difference between subjective cycloplegic and WA-derived spherical refraction ranged from 0.17 to 0.36 diopters (D), while paraxial curvature resulted in a difference of 0.68 D. Monochromatic metrics exhibited smaller mean differences between subjective cycloplegic and objective refraction. Consideration of the SCE reduced the standard deviation (SD) of the difference between subjective and objective refraction. All metrics exhibited similar performance in terms of accuracy and precision. We hypothesize that errors pertaining to the conversion between infrared and visible wavelengths rather than calculation method may be the limiting factor in determining objective best focus from near infrared WA measurements.

  12. Geographic techniques and recent applications of remote sensing to landscape-water quality studies

    USGS Publications Warehouse

    Griffith, J.A.

    2002-01-01

    This article overviews recent advances in studies of landscape-water quality relationships using remote sensing techniques. With the increasing feasibility of using remotely-sensed data, landscape-water quality studies can now be more easily performed on regional, multi-state scales. The traditional method of relating land use and land cover to water quality has been extended to include landscape pattern and other landscape information derived from satellite data. Three items are focused on in this article: 1) the increasing recognition of the importance of larger-scale studies of regional water quality that require a landscape perspective; 2) the increasing importance of remotely sensed data, such as the imagery-derived normalized difference vegetation index (NDVI) and vegetation phenological metrics derived from time-series NDVI data; and 3) landscape pattern. In some studies, using landscape pattern metrics explained some of the variation in water quality not explained by land use/cover. However, in some other studies, the NDVI metrics were even more highly correlated to certain water quality parameters than either landscape pattern metrics or land use/cover proportions. Although studies relating landscape pattern metrics to water quality have had mixed results, this recent body of work applying these landscape measures and satellite-derived metrics to water quality analysis has demonstrated their potential usefulness in monitoring watershed conditions across large regions.
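
    NDVI, which the studies above derive from satellite imagery, is a simple band ratio, (NIR - Red) / (NIR + Red); a minimal sketch with a toy phenological summary (seasonal maximum) computed from a short time series:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

# Toy per-date band values for one watershed pixel across a growing season
nir_series = [0.30, 0.45, 0.60, 0.55, 0.35]
red_series = [0.20, 0.15, 0.10, 0.12, 0.18]
series = ndvi(nir_series, red_series)
print("NDVI time series:", np.round(series, 2))
print("Phenological metric (seasonal max NDVI):", round(float(series.max()), 2))
```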

  13. Landscape pattern metrics and regional assessment

    USGS Publications Warehouse

    O'Neill, R. V.; Riitters, K.H.; Wickham, J.D.; Jones, K.B.

    1999-01-01

    The combination of remote imagery data, geographic information systems software, and landscape ecology theory provides a unique basis for monitoring and assessing large-scale ecological systems. The unique feature of the work has been the need to develop and interpret quantitative measures of spatial pattern, the landscape indices. This article reviews what is known about the statistical properties of these pattern metrics and suggests some additional metrics based on island biogeography, percolation theory, hierarchy theory, and economic geography. Assessment applications of this approach have required interpreting the pattern metrics in terms of specific environmental endpoints, such as wildlife and water quality, and research into how to represent synergistic effects of many overlapping sources of stress.

  14. National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?

    PubMed

    Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N

    2017-12-01

    To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and the association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved (15.8% in 2004 vs 80.7% in 2012; trend test, P < 0.001), with 45.9% of hospitals performing well on all 3 measures concurrently in the most recent study year. Overall, 5-year survival was 75.0%, 72.3%, 72.5%, and 69.5% for those treated at hospitals with high performance on 3, 2, 1, and 0 metrics, respectively (log-rank, P < 0.001). Care at hospitals with high metric performance was associated with lower risk of death in a dose-response fashion [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. Quality improvement efforts should shift focus from individual measures to defining composite measures encompassing the overall multimodal care pathway and capturing successful transitions from one care modality to another.

  15. Performance Metrics for Liquid Chromatography-Tandem Mass Spectrometry Systems in Proteomics Analyses*

    PubMed Central

    Rudnick, Paul A.; Clauser, Karl R.; Kilpatrick, Lisa E.; Tchekhovskoi, Dmitrii V.; Neta, Pedatsur; Blonder, Nikša; Billheimer, Dean D.; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Ham, Amy-Joan L.; Jaffe, Jacob D.; Kinsinger, Christopher R.; Mesri, Mehdi; Neubert, Thomas A.; Schilling, Birgit; Tabb, David L.; Tegeler, Tony J.; Vega-Montoto, Lorenzo; Variyath, Asokan Mulayath; Wang, Mu; Wang, Pei; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Paulovich, Amanda G.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Tempst, Paul; Liebler, Daniel C.; Stein, Stephen E.

    2010-01-01

    A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications. PMID:19837981

  16. Experimental evaluation of ontology-based HIV/AIDS frequently asked question retrieval system.

    PubMed

    Ayalew, Yirsaw; Moeng, Barbara; Mosweunyane, Gontlafetse

    2018-05-01

    This study presents the results of experimental evaluations of an ontology-based frequently asked question retrieval system in the domain of HIV and AIDS. The main purpose of the system is to provide answers to questions on HIV/AIDS using ontology. To evaluate the effectiveness of the frequently asked question retrieval system, we conducted two experiments. The first experiment focused on the evaluation of the quality of the ontology we developed using the OQuaRE evaluation framework which is based on software quality metrics and metrics designed for ontology quality evaluation. The second experiment focused on evaluating the effectiveness of the ontology in retrieving relevant answers. For this we used an open-source information retrieval platform, Terrier, with retrieval models BM25 and PL2. For the measurement of performance, we used the measures mean average precision, mean reciprocal rank, and precision at 5. The results suggest that frequently asked question retrieval with ontology is more effective than frequently asked question retrieval without ontology in the domain of HIV/AIDS.
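
    The retrieval measures named above have standard definitions; the sketch below computes precision at 5, reciprocal rank, and average precision for a single query given a ranked list of FAQ identifiers and a relevance set (both hypothetical), as they would be averaged over queries for MAP and MRR.

```python
def precision_at_k(ranked, relevant, k=5):
    """Fraction of the top-k retrieved items that are relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def reciprocal_rank(ranked, relevant):
    """1 / rank of the first relevant item, or 0 if none retrieved."""
    for i, d in enumerate(ranked, start=1):
        if d in relevant:
            return 1.0 / i
    return 0.0

def average_precision(ranked, relevant):
    """Mean of precision values at the ranks where relevant items appear."""
    hits, total = 0, 0.0
    for i, d in enumerate(ranked, start=1):
        if d in relevant:
            hits += 1
            total += hits / i
    return total / max(len(relevant), 1)

# One toy query: FAQ ids returned by the system vs. the relevant set
ranked = ["faq12", "faq7", "faq3", "faq44", "faq9"]
relevant = {"faq7", "faq9", "faq50"}
print(precision_at_k(ranked, relevant), reciprocal_rank(ranked, relevant),
      round(average_precision(ranked, relevant), 3))
```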

  17. Moss and vascular plant indices in Ohio wetlands have similar environmental predictors

    USGS Publications Warehouse

    Stapanian, Martin A.; Schumacher, William; Gara, Brian; Adams, Jean V.; Viau, Nick

    2016-01-01

    Mosses and vascular plants have been shown to be reliable indicators of wetland habitat delineation and environmental quality. Knowledge of the best ecological predictors of the quality of wetland moss and vascular plant communities may determine if similar management practices would simultaneously enhance both populations. We used Akaike's Information Criterion to identify models predicting a moss quality assessment index (MQAI) and a vascular plant index of biological integrity based on floristic quality (VIBI-FQ) from 27 emergent and 13 forested wetlands in Ohio, USA. The set of predictors included the six metrics from a wetlands disturbance index (ORAM) and two landscape development intensity indices (LDIs). The best single predictor of MQAI and one of the predictors of VIBI-FQ was an ORAM metric that assesses habitat alteration and disturbance within the wetland, such as mowing, grazing, and agricultural practices. However, the best single predictor of VIBI-FQ was an ORAM metric that assessed wetland vascular plant communities, interspersion, and microtopography. LDIs better predicted MQAI than VIBI-FQ, suggesting that mosses may either respond more rapidly to, or recover more slowly from, anthropogenic disturbance in the surrounding landscape than vascular plants. These results supported previous predictive studies on amphibian indices and metrics and a separate vegetation index, indicating that similar wetland management practices may result in qualitatively the same ecological response for three vastly different wetland biological communities (amphibians, vascular plants, and mosses).

  18. Software metrics: The key to quality software on the NCC project

    NASA Technical Reports Server (NTRS)

    Burns, Patricia J.

    1993-01-01

    Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.

  19. Degraded visual environment image/video quality metrics

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  20. An index of biological integrity for northern Mid-Atlantic Slope drainages

    USGS Publications Warehouse

    Daniels, R.A.; Riva-Murray, K.; Halliwell, D.B.; Vana-Miller, D. L.; Bilger, Michael D.

    2002-01-01

    An index of biological integrity (IBI) was developed for streams in the Hudson, Delaware, and Susquehanna River drainages in the northeastern United States based on fish assemblage data from the Mohawk River drainage of New York. The original IBI, developed for streams in the U.S. Midwest, was modified to reflect the assemblage composition and structure present in Mid-Atlantic Slope drainages. We replaced several of the Midwestern IBI metrics and criteria scores because fishes common to the Midwest are absent from or poorly represented in the Northeast and because stream fish assemblages in the Northeast are less rich than those in the Midwest. For all replacement metrics we followed the ecology-based rationale used in the development of each of the metrics of the Midwestern IBI so that the basic theoretical underpinnings of the IBI remained unchanged. The validity of this modified IBI is demonstrated by examining the quality of streams in the Hudson, Delaware, and lower Susquehanna River basins. The relationships between the IBI and other indicators of environmental quality are examined using data on assemblages of fish and benthic macroinvertebrates and on chemical and physical stream characteristics obtained during 1993-2000 by the U.S. Geological Survey's National Water Quality Assessment Program in these three river basins. A principal components analysis (PCA) of chemical and physical variables from 27 sites resulted in an environmental quality gradient as the primary PCA axis (eigenvalue, 0.41 ). Principal components analysis site scores were significantly correlated with such benthic macroinvertebrate metrics as the percentage of Ephemeroptera, Plecoptera, and Trichoptera taxa (Spearman R = -0.66, P < 0.001). Index of biological integrity scores for sites in these three river basins were significantly correlated with this environmental quality gradient (Spearman R = -0.78, P = 0.0001). The northern Mid-Atlantic Slope IBI appears to be sensitive to environmental degradation in all three of the river basins addressed in this study. Adjustment of metric scoring criteria may be warranted, depending on composition of fish species in streams in the study area and on the relative effort used in the collection of fish assemblage data.

  1. Measuring economic complexity of countries and products: which metric to use?

    NASA Astrophysics Data System (ADS)

    Mariani, Manuel Sebastian; Vidmer, Alexandre; Medo, Matsúš; Zhang, Yi-Cheng

    2015-11-01

    Evaluating the economies of countries and their relations with products in the global market is a central problem in economics, with far-reaching implications for our theoretical understanding of international trade as well as for practical applications such as policy making and financial investment planning. The recent Economic Complexity approach aims to quantify the competitiveness of countries and the quality of the exported products based on the empirical observation that the most competitive countries have diversified exports, whereas developing countries export only a few low-quality products - typically those exported by many other countries. Two different metrics, Fitness-Complexity and the Method of Reflections, have been proposed to measure country and product scores in the Economic Complexity framework. We use international trade data and a recent ranking evaluation measure to quantitatively compare the ability of the two metrics to rank countries and products according to their importance in the network. The results show that the Fitness-Complexity metric outperforms the Method of Reflections in both the ranking of products and the ranking of countries. We also investigate a generalization of the Fitness-Complexity metric and show that it can produce improved rankings provided that the input data are reliable.
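
    The abstract does not restate the Fitness-Complexity map, so the following is a minimal sketch of the iteration as it is commonly described in the economic-complexity literature; the binary country-product matrix M, the iteration count, and the mean-normalization step are assumptions rather than details taken from the paper.

      import numpy as np

      def fitness_complexity(M, n_iter=200):
          # M: binary country-product matrix (rows = countries, columns = products),
          # assumed to have at least one nonzero entry per row and per column
          F = np.ones(M.shape[0])   # country fitness
          Q = np.ones(M.shape[1])   # product complexity
          for _ in range(n_iter):
              F_new = M @ Q                    # diversified exporters of complex products score high
              Q_new = 1.0 / (M.T @ (1.0 / F))  # products exported by low-fitness countries score low
              F = F_new / F_new.mean()         # normalize each step to keep the map well behaved
              Q = Q_new / Q_new.mean()
          return F, Q

    Rankings are then obtained by sorting countries by F and products by Q.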

  2. Applying Sigma Metrics to Reduce Outliers.

    PubMed

    Litten, Joseph

    2017-03-01

    Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Quality Measures in Stroke

    PubMed Central

    Poisson, Sharon N.; Josephson, S. Andrew

    2011-01-01

    Stroke is a major public health burden, and accounts for many hospitalizations each year. Due to gaps in practice and recommended guidelines, there has been a recent push toward implementing quality measures to be used for improving patient care, comparing institutions, as well as for rewarding or penalizing physicians through pay-for-performance. This article reviews the major organizations involved in implementing quality metrics for stroke, and the 10 major metrics currently being tracked. We also discuss possible future metrics and the implications of public reporting and using metrics for pay-for-performance. PMID:23983840

  4. Sigma metrics as a tool for evaluating the performance of internal quality control in a clinical chemistry laboratory

    PubMed Central

    Kumar, B. Vinodh; Mohan, Thuthi

    2018-01-01

    OBJECTIVE: Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a sigma scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for the appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. MATERIALS AND METHODS: This is a retrospective study, and the data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study were the IQC coefficient of variation percentage (CV%) and the External Quality Assurance Scheme (EQAS) bias% for 16 biochemical parameters. RESULTS: For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQC, the same four analytes as level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes below the 6 sigma level, the quality goal index (QGI) was <0.8, indicating that the area requiring improvement was imprecision, except for cholesterol, whose QGI of >1.2 indicated inaccuracy. CONCLUSION: This study shows that the sigma metric is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes. PMID:29692587
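
    The abstract does not repeat the underlying formulas; the sketch below uses the standard definitions of the sigma metric and quality goal index that the thresholds quoted above (QGI < 0.8 for imprecision, > 1.2 for inaccuracy) are normally paired with. The example numbers are hypothetical.

      def sigma_metric(tea_pct, bias_pct, cv_pct):
          # sigma = (total allowable error - |bias|) / CV, all expressed in percent
          return (tea_pct - abs(bias_pct)) / cv_pct

      def quality_goal_index(bias_pct, cv_pct):
          # QGI = bias / (1.5 * CV); <0.8 points to imprecision, >1.2 to inaccuracy
          return bias_pct / (1.5 * cv_pct)

      # hypothetical analyte with TEa = 10%, bias = 2%, CV = 1.3%
      print(sigma_metric(10, 2, 1.3))        # ~6.2 -> "ideal" (>=6 sigma) performance
      print(quality_goal_index(2, 1.3))      # ~1.03 -> neither clearly imprecision nor inaccuracy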

  5. Using Qualitative and Quantitative Methods to Choose a Habitat Quality Metric for Air Pollution Policy Evaluation

    PubMed Central

    Ford, Adriana E. S.; Smart, Simon M.; Henrys, Peter A.; Ashmore, Mike R.

    2016-01-01

    Atmospheric nitrogen (N) deposition has had detrimental effects on species composition in a range of sensitive habitats, although N deposition can also increase agricultural productivity and carbon storage, and favours a few species considered of importance for conservation. Conservation targets are multiple, and increasingly incorporate services derived from nature as well as concepts of intrinsic value. Priorities vary. How then should changes in a set of species caused by drivers such as N deposition be assessed? We used a novel combination of qualitative semi-structured interviews and quantitative ranking to elucidate the views of conservation professionals specialising in grasslands, heathlands and mires. Although conservation management goals are varied, terrestrial habitat quality is mainly assessed by these specialists on the basis of plant species, since these are readily observed. The presence and abundance of plant species that are scarce, or have important functional roles, emerged as important criteria for judging overall habitat quality. However, species defined as ‘positive indicator-species’ (not particularly scarce, but distinctive for the habitat) were considered particularly important. Scarce species are by definition not always found, and the presence of functionally important species is not a sufficient indicator of site quality. Habitat quality as assessed by the key informants was rank-correlated with the number of positive indicator-species present at a site for seven of the nine habitat classes assessed. Other metrics such as species-richness or a metric of scarcity were inconsistently or not correlated with the specialists’ assessments. We recommend that metrics of habitat quality used to assess N pollution impacts are based on the occurrence of, or habitat-suitability for, distinctive species. Metrics of this type are likely to be widely applicable for assessing habitat change in response to different drivers. The novel combined qualitative and quantitative approach taken to elucidate the priorities of conservation professionals could be usefully applied in other contexts. PMID:27557277

  6. Using Geometry-Based Metrics as Part of Fitness-for-Purpose Evaluations of 3D City Models

    NASA Astrophysics Data System (ADS)

    Wong, K.; Ellul, C.

    2016-10-01

    Three-dimensional geospatial information is being increasingly used in a range of tasks beyond visualisation. 3D datasets, however, are often being produced without exact specifications and at mixed levels of geometric complexity. This leads to variations within the models' geometric and semantic complexity as well as the degree of deviation from the corresponding real-world objects. Existing descriptors and measures of 3D data such as CityGML's level of detail are perhaps only partially sufficient in communicating data quality and fitness-for-purpose. This study investigates whether alternative, automated, geometry-based metrics describing the variation of complexity within 3D datasets could provide additional relevant information as part of a process of fitness-for-purpose evaluation. The metrics include: mean vertex/edge/face counts per building; vertex/face ratio; minimum 2D footprint area; and minimum feature length. Each metric was tested on six 3D city models from international locations. The results show that geometry-based metrics can provide additional information on 3D city models as part of fitness-for-purpose evaluations. While the metrics cannot be used in isolation, they may complement existing data descriptors if backed up with local knowledge, where possible.
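
    As a rough illustration of how such descriptors can be computed, the sketch below derives per-building counts and the vertex/face ratio from generic mesh data; the input structure and the edge estimate for closed triangulated surfaces are assumptions, not details taken from the paper.

      from statistics import mean

      def geometry_metrics(buildings):
          # buildings: list of dicts, each with 'vertices' (list of (x, y, z) points) and
          # 'faces' (list of vertex-index tuples) describing one building mesh
          v_counts = [len(b["vertices"]) for b in buildings]
          f_counts = [len(b["faces"]) for b in buildings]
          e_counts = [3 * f // 2 for f in f_counts]   # assumes closed triangulated surfaces
          return {
              "mean_vertices_per_building": mean(v_counts),
              "mean_edges_per_building": mean(e_counts),
              "mean_faces_per_building": mean(f_counts),
              "vertex_face_ratio": sum(v_counts) / sum(f_counts),
          }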

  7. Software Quality Metrics Enhancements. Volume 1

    DTIC Science & Technology

    1980-04-01

    Normalization functions (the mathematical relationships which relate metrics to ratings of the various quality factors) were developed for factors which were not validated previously. Each normalization function provides a mathematical relationship between the metrics and the quality factors. Validation of these normalization functions was performed on limited samples; further research is needed before a high degree of confidence can be placed on the mathematical relationships established to date.

  8. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems

    PubMed Central

    Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.

    2015-01-01

    Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metric(s), such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disks and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086
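
    For readers unfamiliar with the model, a minimal sketch of the channelized Hotelling detectability calculation follows; the channel set (e.g., Gabor or Laguerre-Gauss channels), any internal-noise term, and the uncertainty estimation used by the authors are not specified in the abstract, so this shows only the core linear-discriminant step.

      import numpy as np

      def cho_detectability(v_signal, v_background):
          # v_signal, v_background: (n_images, n_channels) channelized data for
          # signal-present and signal-absent (background) images; n_images should
          # exceed n_channels so the covariance estimate is invertible
          dv = v_signal.mean(axis=0) - v_background.mean(axis=0)
          S = 0.5 * (np.cov(v_signal, rowvar=False) + np.cov(v_background, rowvar=False))
          # detectability index d' = sqrt(dv^T S^-1 dv)
          return float(np.sqrt(dv @ np.linalg.solve(S, dv)))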

  9. ChiLin: a comprehensive ChIP-seq and DNase-seq quality control and analysis pipeline.

    PubMed

    Qin, Qian; Mei, Shenglin; Wu, Qiu; Sun, Hanfei; Li, Lewyn; Taing, Len; Chen, Sujun; Li, Fugen; Liu, Tao; Zang, Chongzhi; Xu, Han; Chen, Yiwen; Meyer, Clifford A; Zhang, Yong; Brown, Myles; Long, Henry W; Liu, X Shirley

    2016-10-03

    Transcription factor binding, histone modification, and chromatin accessibility studies are important approaches to understanding the biology of gene regulation. ChIP-seq and DNase-seq have become the standard techniques for studying protein-DNA interactions and chromatin accessibility respectively, and comprehensive quality control (QC) and analysis tools are critical to extracting the most value from these assay types. Although many analysis and QC tools have been reported, few combine ChIP-seq and DNase-seq data analysis and quality control in a unified framework with a comprehensive and unbiased reference of data quality metrics. ChiLin is a computational pipeline that automates the quality control and data analyses of ChIP-seq and DNase-seq data. It is developed using a flexible and modular software framework that can be easily extended and modified. ChiLin is ideal for batch processing of many datasets and is well suited for large collaborative projects involving ChIP-seq and DNase-seq from different designs. ChiLin generates comprehensive quality control reports that include comparisons with historical data derived from over 23,677 public ChIP-seq and DNase-seq samples (11,265 datasets) from eight literature-based classified categories. To the best of our knowledge, this atlas represents the most comprehensive ChIP-seq and DNase-seq related quality metric resource currently available. These historical metrics provide useful heuristic quality references for experiment across all commonly used assay types. Using representative datasets, we demonstrate the versatility of the pipeline by applying it to different assay types of ChIP-seq data. The pipeline software is available open source at https://github.com/cfce/chilin . ChiLin is a scalable and powerful tool to process large batches of ChIP-seq and DNase-seq datasets. The analysis output and quality metrics have been structured into user-friendly directories and reports. We have successfully compiled 23,677 profiles into a comprehensive quality atlas with fine classification for users.

  10. An Extension of BLANC to System Mentions.

    PubMed

    Luo, Xiaoqiang; Pradhan, Sameer; Recasens, Marta; Hovy, Eduard

    2014-06-01

    BLANC is a link-based coreference evaluation metric for measuring the quality of coreference systems on gold mentions. This paper extends the original BLANC ("BLANC-gold" henceforth) to system mentions, removing the gold mention assumption. The proposed BLANC falls back seamlessly to the original one if system mentions are identical to gold mentions, and it is shown to strongly correlate with existing metrics on the 2011 and 2012 CoNLL data.
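
    A compact sketch of the original, gold-mention BLANC (the quantity being extended here) is shown below; it assumes both clusterings cover the same mention set, which is exactly the assumption the extended metric removes.

      from itertools import combinations

      def _links(clusters):
          # split all mention pairs into coreference and non-coreference links
          mentions = [m for c in clusters for m in c]
          coref = {frozenset(p) for c in clusters for p in combinations(c, 2)}
          noncoref = {frozenset(p) for p in combinations(mentions, 2)} - coref
          return coref, noncoref

      def _f1(gold, sys):
          tp = len(gold & sys)
          p = tp / len(sys) if sys else 0.0
          r = tp / len(gold) if gold else 0.0
          return 2 * p * r / (p + r) if p + r else 0.0

      def blanc_gold(gold_clusters, sys_clusters):
          # BLANC-gold = average of link-based F1 over coreference and non-coreference links
          cg, ng = _links(gold_clusters)
          cs, ns = _links(sys_clusters)
          return 0.5 * (_f1(cg, cs) + _f1(ng, ns))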

  11. Macroinvertebrate and diatom metrics as indicators of water-quality conditions in connected depression wetlands in the Mississippi Alluvial Plain

    USGS Publications Warehouse

    Justus, Billy; Burge, David; Cobb, Jennifer; Marsico, Travis; Bouldin, Jennifer

    2016-01-01

    Methods for assessing wetland conditions must be established so wetlands can be monitored and ecological services can be protected. We evaluated biological indices compiled from macroinvertebrate and diatom metrics developed primarily for streams to assess their ability to indicate water quality in connected depression wetlands. We collected water-quality and biological samples at 24 connected depressions dominated by water tupelo (Nyssa aquatica) or bald cypress (Taxodium distichum) (water depths = 0.5–1.0 m). Water quality of the least-disturbed connected depressions was characteristic of swamps in the southeastern USA, which tend to have low specific conductance, nutrient concentrations, and pH. We compared 162 macroinvertebrate metrics and 123 diatom metrics with a water-quality disturbance gradient. For most metrics, we evaluated richness, % richness, abundance, and % relative abundance values. Three of the 4 macroinvertebrate metrics that were most beneficial for identifying disturbance in connected depressions decreased along the disturbance gradient even though they normally increase relative to stream disturbance. The negative relationship to disturbance of some taxa (e.g., dipterans, mollusks, and crustaceans) that are considered tolerant in streams suggests that the tolerance scale for some macroinvertebrates can differ markedly between streams and wetlands. Three of the 4 metrics chosen for the diatom index reflected published tolerances or fit the usual perception of metric response to disturbance. Both biological indices may be useful in connected depressions elsewhere in the Mississippi Alluvial Plain Ecoregion and could have application in other wetland types. Given the paradoxical relationship of some macroinvertebrate metrics to dissolved O2 (DO), we suggest that the diatom metrics may be easier to interpret and defend for wetlands with low DO concentrations in least-disturbed conditions.

  12. Working toward quality in obstetric anesthesia: a business approach.

    PubMed

    Lynde, Grant C

    2017-06-01

    Physicians are increasingly required to demonstrate that they provide quality care. How does one define quality? A significant body of literature in industries outside of health care provides guidance on how to define appropriate metrics, create teams to troubleshoot problem areas, and sustain those improvements. The modern quality movement in the United States began in response to revolutionary gains in both quality and productivity in Japanese manufacturing in the 1980's. Applying these lessons to the healthcare setting has been slow. Hospitals are only now introducing tools such as failure mode and effect analysis, Lean and Six Sigma into their quality divisions and are seeing significant cost reductions and outcomes improvements. The review will discuss the process for creating an effective quality program for an obstetric anesthesia division. Sustainable improvements in delivered care need to be based on an evaluation of service line needs, defining appropriate metrics, understanding current process flows, changing and measuring those processes, and developing mechanisms to ensure the new processes are maintained.

  13. Feasibility of and Rationale for the Collection of Orthopaedic Trauma Surgery Quality of Care Metrics.

    PubMed

    Miller, Anna N; Kozar, Rosemary; Wolinsky, Philip

    2017-06-01

    Reproducible metrics are needed to evaluate the delivery of orthopaedic trauma care, national care norms, and outliers. The American College of Surgeons (ACS) is uniquely positioned to collect and evaluate the data needed to evaluate orthopaedic trauma care via the Committee on Trauma and the Trauma Quality Improvement Project. We evaluated the first quality metrics the ACS has collected for orthopaedic trauma surgery to determine whether these metrics can be appropriately collected with accuracy and completeness. The metrics include the time to administration of the first dose of antibiotics for open fractures, the time to surgical irrigation and débridement of open tibial fractures, and the percentage of patients who undergo stabilization of femoral fractures at trauma centers nationwide. These metrics were analyzed to evaluate variances in the delivery of orthopaedic care across the country. The data showed wide variances for all metrics, and many centers had incomplete ability to collect the orthopaedic trauma care metrics. There was a large variability in the results of the metrics collected among different trauma center levels, as well as among centers of a particular level. The ACS has successfully begun tracking orthopaedic trauma care performance measures, which will help inform reevaluation of the goals and continued work on data collection and improvement of patient care. Future areas of research may link these performance measures with patient outcomes, such as long-term tracking, to assess nonunion and function. This information can provide insight into center performance and its effect on patient outcomes. The ACS was able to successfully collect and evaluate the data for three metrics used to assess the quality of orthopaedic trauma care. However, additional research is needed to determine whether these metrics are suitable for evaluating orthopaedic trauma care and to establish cutoff values for each metric.

  14. Hybrid Air Quality Modeling Approach for Use in the Near-Road Exposures to Urban Air Pollutants Study (NEXUS)

    EPA Science Inventory

    The paper presents a hybrid air quality modeling approach and its application in NEXUS in order to provide spatial and temporally varying exposure estimates and identification of the mobile source contribution to the total pollutant exposure. Model-based exposure metrics, associa...

  15. Focus measure method based on the modulus of the gradient of the color planes for digital microscopy

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel

    2018-02-01

    The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information to a grayscale image. This digital technique is used in two applications: (a) focus measurements during autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics that are typically handled in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. An advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a nonreference image quality metric. The proposed fusion method reveals a high-quality image independently of faulty illumination during the image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
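
    The MGC definition is not spelled out in the abstract; the sketch below shows one common form, where the per-channel gradient energies are summed before taking the square root, and a focus score is then taken as the total gradient energy of the map. Both choices are assumptions.

      import numpy as np

      def mgc(image):
          # image: (H, W, C) float array; combine gradient energy across the color planes
          g = np.zeros(image.shape[:2])
          for c in range(image.shape[2]):
              dy, dx = np.gradient(image[:, :, c])
              g += dx**2 + dy**2
          return np.sqrt(g)

      def focus_score(image):
          # simple focus measure on the MGC map: total gradient energy
          return float(mgc(image).sum())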

  16. Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media

    NASA Astrophysics Data System (ADS)

    Park, Ju-Won; Kim, JongWon

    2004-10-01

    As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.

  17. Can segmentation evaluation metric be used as an indicator of land cover classification accuracy?

    NASA Astrophysics Data System (ADS)

    Švab Lenarčič, Andreja; Đurić, Nataša; Čotar, Klemen; Ritlop, Klemen; Oštir, Krištof

    2016-10-01

    It is a broadly established belief that the segmentation result significantly affects subsequent image classification accuracy. However, the actual correlation between the two has never been evaluated. Such an evaluation would be of considerable importance for any attempts to automate the object-based classification process, as it would reduce the amount of user intervention required to fine-tune the segmentation parameters. We conducted an assessment of segmentation and classification by analyzing 100 different segmentation parameter combinations, 3 classifiers, 5 land cover classes, 20 segmentation evaluation metrics, and 7 classification accuracy measures. The reliability of segmentation evaluation metrics as indicators of land cover classification accuracy was defined by the linear correlation between the two. All unsupervised metrics that are not based on the number of segments have a very strong correlation with all classification measures and are therefore reliable as indicators of land cover classification accuracy. On the other hand, the correlation for supervised metrics depends on so many factors that it cannot be trusted as a reliable classification quality indicator. Algorithms for land cover classification studied in this paper are widely used; therefore, the presented results are applicable to a wider area.

  18. The Impact of Quality Assurance Assessment on Diffusion Tensor Imaging Outcomes in a Large-Scale Population-Based Cohort

    PubMed Central

    Roalf, David R.; Quarmley, Megan; Elliott, Mark A.; Satterthwaite, Theodore D.; Vandekar, Simon N.; Ruparel, Kosha; Gennatas, Efstathios D.; Calkins, Monica E.; Moore, Tyler M.; Hopson, Ryan; Prabhakaran, Karthik; Jackson, Chad T.; Verma, Ragini; Hakonarson, Hakon; Gur, Ruben C.; Gur, Raquel E.

    2015-01-01

    Background Diffusion tensor imaging (DTI) is applied in investigation of brain biomarkers for neurodevelopmental and neurodegenerative disorders. However, the quality of DTI measurements, like other neuroimaging techniques, is susceptible to several confounding factors (e.g. motion, eddy currents), which have only recently come under scrutiny. These confounds are especially relevant in adolescent samples where data quality may be compromised in ways that confound interpretation of maturation parameters. The current study aims to leverage DTI data from the Philadelphia Neurodevelopmental Cohort (PNC), a sample of 1,601 youths ages of 8–21 who underwent neuroimaging, to: 1) establish quality assurance (QA) metrics for the automatic identification of poor DTI image quality; 2) examine the performance of these QA measures in an external validation sample; 3) document the influence of data quality on developmental patterns of typical DTI metrics. Methods All diffusion-weighted images were acquired on the same scanner. Visual QA was performed on all subjects completing DTI; images were manually categorized as Poor, Good, or Excellent. Four image quality metrics were automatically computed and used to predict manual QA status: Mean voxel intensity outlier count (MEANVOX), Maximum voxel intensity outlier count (MAXVOX), mean relative motion (MOTION) and temporal signal-to-noise ratio (TSNR). Classification accuracy for each metric was calculated as the area under the receiver-operating characteristic curve (AUC). A threshold was generated for each measure that best differentiated visual QA status and applied in a validation sample. The effects of data quality on sensitivity to expected age effects in this developmental sample were then investigated using the traditional MRI diffusion metrics: fractional anisotropy (FA) and mean diffusivity (MD). Finally, our method of QA is compared to DTIPrep. Results TSNR (AUC=0.94) best differentiated Poor data from Good and Excellent data. MAXVOX (AUC=0.88) best differentiated Good from Excellent DTI data. At the optimal threshold, 88% of Poor data and 91% Good/Excellent data were correctly identified. Use of these thresholds on a validation dataset (n=374) indicated high accuracy. In the validation sample 83% of Poor data and 94% of Excellent data was identified using thresholds derived from the training sample. Both FA and MD were affected by the inclusion of poor data in an analysis of age, sex and race in a matched comparison sample. In addition, we show that the inclusion of poor data results in significant attenuation of the correlation between diffusion metrics (FA and MD) and age during a critical neurodevelopmental period. We find higher correspondence between our QA method and DTIPrep for Poor data, but we find our method to be more robust for apparently high-quality images. Conclusion Automated QA of DTI can facilitate large-scale, high-throughput quality assurance by reliably identifying both scanner and subject induced imaging artifacts. The results present a practical example of the confounding effects of artifacts on DTI analysis in a large population-based sample, and suggest that estimates of data quality should not only be reported but also accounted for in data analysis, especially in studies of development. PMID:26520775
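
    Of the four automated metrics, TSNR has a simple closed form; the sketch below computes a per-subject TSNR summary, assuming a 4-D diffusion series and a brain mask. Whether the PNC pipeline restricts this to b=0 volumes or applies detrending first is not stated in the abstract.

      import numpy as np

      def temporal_snr(series, mask):
          # series: 4-D array (x, y, z, time); mask: boolean brain mask of shape (x, y, z)
          mean_t = series.mean(axis=-1)
          std_t = series.std(axis=-1)
          tsnr = np.divide(mean_t, std_t, out=np.zeros_like(mean_t), where=std_t > 0)
          return float(tsnr[mask].mean())   # summary value compared against the QA threshold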

  19. Stochastic HKMDHE: A multi-objective contrast enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Maity, Srideep; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2018-02-01

    This contribution proposes a novel extension of the existing `Hyper Kurtosis based Modified Duo-Histogram Equalization' (HKMDHE) algorithm, for multi-objective contrast enhancement of biomedical images. A novel modified objective function has been formulated by joint optimization of the individual histogram equalization objectives. The optimal adequacy of the proposed methodology with respect to image quality metrics such as brightness preserving abilities, peak signal-to-noise ratio (PSNR), Structural Similarity Index (SSIM) and universal image quality metric has been experimentally validated. The performance analysis of the proposed Stochastic HKMDHE with existing histogram equalization methodologies like Global Histogram Equalization (GHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE) has been given for comparative evaluation.
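
    Of the quality metrics listed, PSNR is straightforward to reproduce; a minimal version is given below (SSIM and the universal image quality index are usually taken from an existing implementation such as skimage.metrics.structural_similarity). The 8-bit peak value is an assumption.

      import numpy as np

      def psnr(reference, enhanced, peak=255.0):
          # peak signal-to-noise ratio in dB between a reference and an enhanced image
          mse = np.mean((reference.astype(float) - enhanced.astype(float)) ** 2)
          return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)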

  20. Validating and improving CT ventilation imaging by correlating with ventilation 4D-PET/CT using 68Ga-labeled nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kipritidis, John, E-mail: john.kipritidis@sydney.edu.au; Keall, Paul J.; Siva, Shankar

    Purpose: CT ventilation imaging is a novel functional lung imaging modality based on deformable image registration. The authors present the first validation study of CT ventilation using positron emission tomography with 68Ga-labeled nanoparticles (PET-Galligas). The authors quantify this agreement for different CT ventilation metrics and PET reconstruction parameters. Methods: PET-Galligas ventilation scans were acquired for 12 lung cancer patients using a four-dimensional (4D) PET/CT scanner. CT ventilation images were then produced by applying B-spline deformable image registration between the respiratory correlated phases of the 4D-CT. The authors test four ventilation metrics, two existing and two modified. The two existing metrics model mechanical ventilation (alveolar air-flow) based on Hounsfield unit (HU) change (V_HU) or Jacobian determinant of deformation (V_Jac). The two modified metrics incorporate a voxel-wise tissue-density scaling (ρV_HU and ρV_Jac) and were hypothesized to better model the physiological ventilation. In order to assess the impact of PET image quality, comparisons were performed using both standard and respiratory-gated PET images, with the former exhibiting better signal. Different median filtering kernels (σ_m = 0 or 3 mm) were also applied to all images. As in previous studies, similarity metrics included the Spearman correlation coefficient r within the segmented lung volumes, and the Dice coefficient d_20 for the (0-20)th functional percentile volumes. Results: The best agreement between CT and PET ventilation was obtained comparing standard PET images to the density-scaled HU metric (ρV_HU) with σ_m = 3 mm. This leads to correlation values in the ranges 0.22 ≤ r ≤ 0.76 and 0.38 ≤ d_20 ≤ 0.68, with mean r = 0.42 ± 0.16 and mean d_20 = 0.52 ± 0.09 averaged over the 12 patients. Compared to Jacobian-based metrics, HU-based metrics lead to statistically significant improvements in mean r and mean d_20 (p < 0.05), with density-scaled metrics also showing higher mean r than the unscaled versions (p < 0.02). Mean r and mean d_20 were also sensitive to image quality, with statistically significant improvements using standard (as opposed to gated) PET images and with application of median filtering. Conclusions: The use of modified CT ventilation metrics, in conjunction with PET-Galligas and careful application of image filtering, has resulted in improved correlation compared to earlier studies using nuclear medicine ventilation. However, CT ventilation and PET-Galligas do not always provide the same functional information. The authors have demonstrated that the agreement can improve for CT ventilation metrics incorporating a tissue density scaling, and also with increasing PET image quality. CT ventilation imaging has clear potential for imaging regional air volume change in the lung, and further development is warranted.
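
    A rough sketch of the two agreement measures (Spearman r over lung voxels and the Dice coefficient of the lowest-20th-percentile functional volumes) is given below; interpreting the (0-20)th functional percentile as the voxels with ventilation values below the 20th percentile is an assumption here, not a detail from the paper.

      import numpy as np
      from scipy.stats import spearmanr

      def ventilation_agreement(ct_vent, pet_vent, lung_mask, percentile=20):
          # ct_vent, pet_vent: 3-D ventilation maps; lung_mask: boolean lung segmentation
          ct = ct_vent[lung_mask]
          pet = pet_vent[lung_mask]
          r, _ = spearmanr(ct, pet)                        # voxel-wise rank correlation
          ct_low = ct <= np.percentile(ct, percentile)     # low-function region on CT ventilation
          pet_low = pet <= np.percentile(pet, percentile)  # low-function region on PET-Galligas
          dice = 2.0 * np.sum(ct_low & pet_low) / (ct_low.sum() + pet_low.sum())
          return float(r), float(dice)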

  1. FQC Dashboard: integrates FastQC results into a web-based, interactive, and extensible FASTQ quality control tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Joseph; Pirrung, Meg; McCue, Lee Ann

    FQC is software that facilitates large-scale quality control of FASTQ files by carrying out a QC protocol, parsing results, and aggregating quality metrics within and across experiments into an interactive dashboard. The dashboard utilizes human-readable configuration files to manipulate the pages and tabs, and is extensible with CSV data.

  2. Driving photomask supplier quality through automation

    NASA Astrophysics Data System (ADS)

    Russell, Drew; Espenscheid, Andrew

    2007-10-01

    In 2005, Freescale Semiconductor's newly centralized mask data prep organization (MSO) initiated a project to develop an automated global quality validation system for photomasks delivered to Freescale Semiconductor fabs. The system handles Certificate of Conformance (CofC) quality metric collection, validation, reporting and an alert system for all photomasks shipped to Freescale fabs from all qualified global suppliers. The completed system automatically collects 30+ quality metrics for each photomask shipped. Other quality metrics are generated from the collected data and quality metric conformance is automatically validated to specifications or control limits with failure alerts emailed to fab photomask and mask data prep engineering. A quality data warehouse stores the data for future analysis, which is performed quarterly. The improved access to data provided by the system has improved Freescale engineers' ability to spot trends and opportunities for improvement with our suppliers' processes. This paper will review each phase of the project, current system capabilities and quality system benefits for both our photomask suppliers and Freescale.

  3. Figure of merit for macrouniformity based on image quality ruler evaluation and machine learning framework

    NASA Astrophysics Data System (ADS)

    Wang, Weibao; Overall, Gary; Riggs, Travis; Silveston-Keith, Rebecca; Whitney, Julie; Chiu, George; Allebach, Jan P.

    2013-01-01

    Assessment of macro-uniformity is a capability that is important for the development and manufacture of printer products. Our goal is to develop a metric that will predict macro-uniformity, as judged by human subjects, by scanning and analyzing printed pages. We consider two different machine learning frameworks for the metric: linear regression and the support vector machine. We have implemented the image quality ruler, based on the recommendations of the INCITS W1.1 macro-uniformity team. Using 12 subjects at Purdue University and 20 subjects at Lexmark, evenly balanced with respect to gender, we conducted subjective evaluations with a set of 35 uniform b/w prints from seven different printers with five levels of tint coverage. Our results suggest that the image quality ruler method provides a reliable means to assess macro-uniformity. We then defined and implemented separate features to measure graininess, mottle, large area variation, jitter, and large-scale non-uniformity. The algorithms that we used are largely based on ISO image quality standards. Finally, we used these features computed for a set of test pages and the subjects' image quality ruler assessments of these pages to train the two different predictors - one based on linear regression and the other based on the support vector machine (SVM). Using five-fold cross-validation, we confirmed the efficacy of our predictor.
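
    The two predictor frameworks named above map the computed print features to the subjective ruler scores; a minimal sketch with five-fold cross-validation is shown below. The feature matrix and scores here are random placeholders standing in for the scanned-page features and the 35 subjective assessments.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(35, 5))   # placeholder: graininess, mottle, large-area variation, jitter, non-uniformity
      y = rng.normal(size=35)        # placeholder: mean image quality ruler score per print

      for name, model in [("linear regression", LinearRegression()), ("SVM (SVR)", SVR(kernel="rbf"))]:
          scores = cross_val_score(model, X, y, cv=5, scoring="r2")
          print(name, scores.mean())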

  4. Development and application of a novel metric to assess effectiveness of biomedical data

    PubMed Central

    Bloom, Gregory C; Eschrich, Steven; Hang, Gang; Schabath, Matthew B; Bhansali, Neera; Hoerter, Andrew M; Morgan, Scott; Fenstermacher, David A

    2013-01-01

    Objective Design a metric to assess the comparative effectiveness of biomedical data elements within a study that incorporates their statistical relatedness to a given outcome variable as well as a measurement of the quality of their underlying data. Materials and methods The cohort consisted of 874 patients with adenocarcinoma of the lung, each with 47 clinical data elements. The p value for each element was calculated using the Cox proportional hazard univariable regression model with overall survival as the endpoint. An attribute or A-score was calculated by quantification of an element's four quality attributes; Completeness, Comprehensiveness, Consistency and Overall-cost. An effectiveness or E-score was obtained by calculating the conditional probabilities of the p-value and A-score within the given data set with their product equaling the effectiveness score (E-score). Results The E-score metric provided information about the utility of an element beyond an outcome-related p value ranking. E-scores for elements age-at-diagnosis, gender and tobacco-use showed utility above what their respective p values alone would indicate due to their relative ease of acquisition, that is, higher A-scores. Conversely, elements surgery-site, histologic-type and pathological-TNM stage were down-ranked in comparison to their p values based on lower A-scores caused by significantly higher acquisition costs. Conclusions A novel metric termed E-score was developed which incorporates standard statistics with data quality metrics and was tested on elements from a large lung cohort. Results show that an element's underlying data quality is an important consideration in addition to p value correlation to outcome when determining the element's clinical or research utility in a study. PMID:23975264
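
    The abstract describes the E-score as the product of the within-dataset conditional probabilities of an element's p-value and A-score; the exact probability estimator is not given, so the sketch below substitutes simple empirical ranks as a stand-in.

      import numpy as np

      def e_scores(p_values, a_scores):
          # crude stand-in for the paper's conditional probabilities: empirical rank-based
          # probabilities, where small p-values and large A-scores both raise the score
          p = np.asarray(p_values, dtype=float)
          a = np.asarray(a_scores, dtype=float)
          n = len(p)
          p_prob = (np.argsort(np.argsort(-p)) + 1) / n   # 1/n for the largest p, 1 for the smallest
          a_prob = (np.argsort(np.argsort(a)) + 1) / n    # 1/n for the smallest A, 1 for the largest
          return p_prob * a_prob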

  5. Introduction to the special collection of papers on the San Luis Basin Sustainability Metrics Project: a methodology for evaluating regional sustainability.

    PubMed

    Heberling, Matthew T; Hopton, Matthew E

    2012-11-30

    This paper introduces a collection of four articles describing the San Luis Basin Sustainability Metrics Project. The Project developed a methodology for evaluating regional sustainability. This introduction provides the necessary background information for the project, description of the region, overview of the methods, and summary of the results. Although there are a multitude of scientifically based sustainability metrics, many are data intensive, difficult to calculate, and fail to capture all aspects of a system. We wanted to see if we could develop an approach that decision-makers could use to understand if their system was moving toward or away from sustainability. The goal was to produce a scientifically defensible, but straightforward and inexpensive methodology to measure and monitor environmental quality within a regional system. We initiated an interdisciplinary pilot project in the San Luis Basin, south-central Colorado, to test the methodology. The objectives were: 1) determine the applicability of using existing datasets to estimate metrics of sustainability at a regional scale; 2) calculate metrics through time from 1980 to 2005; and 3) compare and contrast the results to determine if the system was moving toward or away from sustainability. The sustainability metrics, chosen to represent major components of the system, were: 1) Ecological Footprint to capture the impact and human burden on the system; 2) Green Net Regional Product to represent economic welfare; 3) Emergy to capture the quality-normalized flow of energy through the system; and 4) Fisher information to capture the overall dynamic order and to look for possible regime changes. The methodology, data, and results of each metric are presented in the remaining four papers of the special collection. Based on the results of each metric and our criteria for understanding the sustainability trends, we find that the San Luis Basin is moving away from sustainability. Although we understand there are strengths and limitations of the methodology, we argue that each metric identifies changes to major components of the system. Published by Elsevier Ltd.

  6. A Novel Scoring Metrics for Quality Assurance of Ocean Color Observations

    NASA Astrophysics Data System (ADS)

    Wei, J.; Lee, Z.

    2016-02-01

    Interpretation of the ocean bio-optical properties from ocean color observations depends on the quality of the ocean color data, specifically the spectrum of remote sensing reflectance (Rrs). The in situ and remotely measured Rrs spectra are inevitably subject to errors induced by instrument calibration, sea-surface correction and atmospheric correction, and other environmental factors. Great efforts have been devoted to the ocean color calibration and validation. Yet, there exist no objective and consensus criteria for assessment of the ocean color data quality. In this study, the gap is filled by developing a novel metrics for such data quality assurance and quality control (QA/QC). This new QA metrics is not intended to discard "suspicious" Rrs spectra from available datasets. Rather, it takes into account the Rrs spectral shapes and amplitudes as a whole and grades each Rrs spectrum. This scoring system is developed based on a large ensemble of in situ hyperspectral remote sensing reflectance data measured from various aquatic environments and processed with robust procedures. This system is further tested with the NASA bio-Optical Marine Algorithm Data set (NOMAD), with results indicating significant improvements in the estimation of bio-optical properties when Rrs spectra marked with higher quality assurance are used. This scoring system is further verified with simulated data and satellite ocean color data in various regions, and we envision higher quality ocean color products with the implementation of such a quality screening system.

  7. Comparing masked target transform volume (MTTV) clutter metric to human observer evaluation of visual clutter

    NASA Astrophysics Data System (ADS)

    Camp, H. A.; Moyer, Steven; Moore, Richard K.

    2010-04-01

    The Night Vision and Electronic Sensors Directorate's current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits to describe human performance for a given time of day, spectrum, and environment. Incorporating a clutter metric into the TLS model may reduce the number of these empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural scenes with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SMEs) rank the amount of clutter in each scene in a series of pair-wise comparisons. MTTV metric values were calculated and then compared to the SME observers' rankings. The MTTV metric ranked the clutter in a similar manner to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring its agreement with subjective human evaluation.

  8. Return on investment in healthcare leadership development programs.

    PubMed

    Jeyaraman, Maya M; Qadar, Sheikh Muhammad Zeeshan; Wierzbowski, Aleksandra; Farshidfar, Farnaz; Lys, Justin; Dickson, Graham; Grimes, Kelly; Phillips, Leah A; Mitchell, Jonathan I; Van Aerde, John; Johnson, Dave; Krupka, Frank; Zarychanski, Ryan; Abou-Setta, Ahmed M

    2018-02-05

    Purpose Strong leadership has been shown to foster change, including loyalty, improved performance and decreased error rates, but there is a dearth of evidence on effectiveness of leadership development programs. To ensure a return on the huge investments made, evidence-based approaches are needed to assess the impact of leadership on health-care establishments. As a part of a pan-Canadian initiative to design an effective evaluative instrument, the purpose of this paper was to identify and summarize evidence on health-care outcomes/return on investment (ROI) indicators and metrics associated with leadership quality, leadership development programs and existing evaluative instruments. Design/methodology/approach The authors performed a scoping review using the Arksey and O'Malley framework, searching eight databases from 2006 through June 2016. Findings Of 11,868 citations screened, the authors included 223 studies reporting on health-care outcomes/ROI indicators and metrics associated with leadership quality (73 studies), leadership development programs (138 studies) and existing evaluative instruments (12 studies). The extracted ROI indicators and metrics have been summarized in detail. Originality/value This review provides a snapshot in time of the current evidence on ROI indicators and metrics associated with leadership. Summarized ROI indicators and metrics can be used to design an effective evaluative instrument to assess the impact of leadership on health-care organizations.

  9. Development and evaluation of aperture-based complexity metrics using film and EPID measurements of static MLC openings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Götstedt, Julia; Karlsson Hauer, Anna; Bäck, Anna, E-mail: anna.back@vgregion.se

    Purpose: Complexity metrics have been suggested as a complement to measurement-based quality assurance for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). However, these metrics have not yet been sufficiently validated. This study develops and evaluates new aperture-based complexity metrics in the context of static multileaf collimator (MLC) openings and compares them to previously published metrics. Methods: This study develops the converted aperture metric and the edge area metric. The converted aperture metric is based on small and irregular parts within the MLC opening that are quantified as measured distances between MLC leaves. The edge area metric is based on the relative size of the region around the edges defined by the MLC. Another metric suggested in this study is the circumference/area ratio. Earlier defined aperture-based complexity metrics—the modulation complexity score, the edge metric, the ratio monitor units (MU)/Gy, the aperture area, and the aperture irregularity—are compared to the newly proposed metrics. A set of small and irregular static MLC openings are created which simulate individual IMRT/VMAT control points of various complexities. These are measured with both an amorphous silicon electronic portal imaging device and EBT3 film. The differences between calculated and measured dose distributions are evaluated using a pixel-by-pixel comparison with two global dose difference criteria of 3% and 5%. The extent of the dose differences, expressed in terms of pass rate, is used as a measure of the complexity of the MLC openings and used for the evaluation of the metrics compared in this study. The different complexity scores are calculated for each created static MLC opening. The correlation between the calculated complexity scores and the extent of the dose differences (pass rate) are analyzed in scatter plots and using Pearson's r-values. Results: The complexity scores calculated by the edge area metric, converted aperture metric, circumference/area ratio, edge metric, and MU/Gy ratio show good linear correlation to the complexity of the MLC openings, expressed as the 5% dose difference pass rate, with Pearson's r-values of −0.94, −0.88, −0.84, −0.89, and −0.82, respectively. The overall trends for the 3% and 5% dose difference evaluations are similar. Conclusions: New complexity metrics are developed. The calculated scores correlate to the complexity of the created static MLC openings. The complexity of the MLC opening is dependent on the penumbra region relative to the area of the opening. The aperture-based complexity metrics that combined either the distances between the MLC leaves or the MLC opening circumference with the aperture area show the best correlation with the complexity of the static MLC openings.
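
    As an illustration of one of the simpler metrics above, the circumference/area ratio, the sketch below approximates a static MLC opening as a stack of rectangles defined by the leaf-tip positions; this representation (a single contiguous block of open leaf pairs, all of equal leaf width) is an assumption and not the authors' implementation.

      def interval_sym_diff(a, b):
          # length of the symmetric difference of two 1-D intervals a = (l1, r1), b = (l2, r2)
          (l1, r1), (l2, r2) = a, b
          overlap = max(0.0, min(r1, r2) - max(l1, l2))
          return (r1 - l1) + (r2 - l2) - 2.0 * overlap

      def circumference_area_ratio(leaf_pairs, leaf_width):
          # leaf_pairs: (left, right) tip positions of consecutive open leaf pairs (cm);
          # the opening is modelled as vertically stacked rectangles of height leaf_width
          rows = [(l, r) for l, r in leaf_pairs if r > l]
          area = sum((r - l) * leaf_width for l, r in rows)
          perimeter = 2.0 * leaf_width * len(rows)                              # leaf-end edges
          perimeter += (rows[0][1] - rows[0][0]) + (rows[-1][1] - rows[-1][0])  # bottom and top edges
          perimeter += sum(interval_sym_diff(rows[i], rows[i + 1]) for i in range(len(rows) - 1))
          return perimeter / area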

  10. Linking Benthic Macroinvertebrates and Physicochemical Variables for Water Quality Assessment in Saigon River and Its Tributaries, Vietnam

    NASA Astrophysics Data System (ADS)

    Pham, A. D.

    2017-10-01

    Benthic macroinvertebrates living on the channel bottom are among the most promising potential indicators of river health for the Saigon River and its tributaries, with hydrochemistry playing a supporting role. An evaluation of the interrelationships within this approach is necessary. This work identified and tested these relationships to improve the method for water quality assessment. Data from a watershed of over 4,500 km2 were used as a representative example for the Saigon River and its tributaries. The data covered March and September of 2007, 2008, 2009, 2010, and 2015. To implement this evaluation, the analyses were based on the accepted methodology of the Mekong River Commission and on scientific studies of biological status assessment. For correlation analyses, the selected environmental variables were compared with the ecological indices based on benthic macroinvertebrates. The results showed that the metrics of species richness, H', and 1-DS had significant and strong relationships with the water quality variables DO, BOD5, T_N, and TP (R2 = 0.3751-0.8866; P << 0.05), while the abundance metrics of benthic macroinvertebrates did not have a statistically significant relationship with any water quality variable (R2 = 0.0000-0.0744; P > 0.05). Additionally, the metrics of species richness, H', and 1-DS were negatively correlated with pH and TSS. Both univariate and multivariate analyses were used to examine the ecological quality of the Saigon River and its tributaries; among the indicators examined, benthic macroinvertebrates appear to be the most sensitive and to correlate best with the physicochemical variables. This demonstrates that they can be applied to describe water quality in the Saigon River and its tributaries.
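
    For reference, the two diversity metrics named above have simple definitions; the sketch below interprets H' as the Shannon diversity index and 1-DS as the complement of Simpson's dominance index, which is the usual reading but still an assumption here. The abundance counts are hypothetical.

      import math

      def shannon_h(counts):
          # Shannon diversity H' from taxon abundance counts
          total = sum(counts)
          return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

      def simpson_complement(counts):
          # 1 - D, where D is Simpson's dominance index; higher values mean higher diversity
          total = sum(counts)
          return 1.0 - sum((n / total) ** 2 for n in counts)

      sample = [120, 35, 20, 10, 5]   # hypothetical abundances of five benthic taxa
      print(shannon_h(sample), simpson_complement(sample))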

  11. Complexity metric based on fraction of penumbra dose - initial study

    NASA Astrophysics Data System (ADS)

    Bäck, A.; Nordström, F.; Gustafsson, M.; Götstedt, J.; Karlsson Hauer, A.

    2017-05-01

    Volumetric modulated arc therapy improves radiotherapy outcomes for many patients compared to conventional three-dimensional conformal radiotherapy, but requires more extensive, most often measurement-based, quality assurance. Multileaf collimator (MLC) aperture-based complexity metrics have been suggested as a way to distinguish complex treatment plans that are unsuitable for treatment without time-consuming measurements. This study introduces a spatially resolved complexity score that correlates with the fraction of penumbra dose and gives information on the spatial distribution and the clinical relevance of the calculated complexity. The complexity metric is described, and an initial study on the correlation between the complexity score and the difference between measured and calculated dose for 30 MLC openings is presented. The complexity scores were found to correlate with the differences between measurements and calculations, with a Pearson's r-value of 0.97.

  12. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaparpalvi, R; Mynampati, D; Kuo, H

    Purpose: To study the influence of the superposition-beam model (AAA) and determinant-photon transport-solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 Gy × 5 or 18 Gy × 3). Plans were optimized with 6-MV VMAT using two arcs. Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories coverage, homogeneity, conformity, and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V_20 and V_5 of (total lung - GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. PTV volume was a mean of 11.4 (±3.3) cm³. Comparing RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, a similar PITV ratio (individual PITV ratio differences varied from −9 to +15%), reduced target coverage (−1.6%), and increased R50% (+2.6%). Comparing normal lung doses, the lung V_20 (+3.1%) and V_5 (+1.5%) were slightly higher for AXB plans compared to AAA plans. High-dose spillage ((V105%PD − PTV)/PTV) was slightly lower for AXB plans, but the % low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo-based dose predictions in accuracy and with relatively faster computational time. For clinical practice, revisiting dose fractionation in lung SBRT to correct for dose overestimates attributable to the algorithm may very well be warranted.

  13. Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J.; Hodge, B. M.; Florita, A.

    2013-10-01

    Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.

  14. A Single Conjunction Risk Assessment Metric: the F-Value

    NASA Technical Reports Server (NTRS)

    Frigm, Ryan Clayton; Newman, Lauri K.

    2009-01-01

    The Conjunction Assessment Team at NASA Goddard Space Flight Center provides conjunction risk assessment for many NASA robotic missions. These risk assessments are based on several figures of merit, such as miss distance, probability of collision, and orbit determination solution quality. However, these individual metrics do not singly capture the overall risk associated with a conjunction, making it difficult for someone without this complete understanding to take action, such as an avoidance maneuver. The goal of this analysis is to introduce a single risk index metric that can easily convey the level of risk without all of the technical details. The proposed index is called the conjunction "F-value." This paper presents the concept of the F-value and the tuning of the metric for use in routine Conjunction Assessment operations.

  15. Assessment of macroinvertebrate communities in adjacent urban stream basins, Kansas City, Missouri, metropolitan area, 2007 through 2011

    USGS Publications Warehouse

    Christensen, Eric D.; Krempa, Heather M.

    2013-01-01

    Wastewater-treatment plant discharges during base flow (which elevated specific conductance and nutrient concentrations), combined sewer overflows, and nonpoint sources likely contributed to water-quality impairment and lower aquatic-life status at the Blue River Basin sites. Releases from upstream reservoirs to the Little Blue River likely decreased specific conductance, suspended-sediment, and dissolved constituent concentrations and may have benefitted water quality and aquatic life at main-stem sites. Chloride concentrations in base-flow samples, attributable to winter road salt application, had the highest correlation with the SUII (Spearman’s ρ equals 0.87), were negatively correlated with the SCI (Spearman’s ρ equals -0.53) and with several pollution-sensitive Ephemeroptera plus Plecoptera plus Trichoptera abundance and percent richness metrics, and were positively correlated with pollution-tolerant Oligochaeta abundance and percent richness metrics. Study results show that the easily calculated SUII and the selected modeled multimetric indices are effective for comparing urban basins and for evaluating water quality in the Kansas City metropolitan area.

  16. Using animation quality metric to improve efficiency of global illumination computation for dynamic environments

    NASA Astrophysics Data System (ADS)

    Myszkowski, Karol; Tawara, Takehiro; Seidel, Hans-Peter

    2002-06-01

    In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.

  17. NMF-Based Image Quality Assessment Using Extreme Learning Machine.

    PubMed

    Wang, Shuigen; Deng, Chenwei; Lin, Weisi; Huang, Guang-Bin; Zhao, Baojun

    2017-01-01

    Numerous state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage process: distortion description followed by distortion-effects pooling. In the first stage, the distortion descriptors or measurements are expected to be effective representations of human visual variations, while the second stage should properly express the relationship between the quality descriptors and perceived visual quality. However, most existing quality descriptors (e.g., luminance, contrast, and gradient) do not seem to be consistent with human perception, and the effects pooling is often done in ad hoc ways. In this paper, we propose a novel full-reference IQA metric. It applies non-negative matrix factorization (NMF) to measure image degradations by making use of the parts-based representation of NMF. On the other hand, a new machine learning technique [extreme learning machine (ELM)] is employed to address the limitations of the existing pooling techniques. Compared with neural networks and support vector regression, ELM can achieve higher learning accuracy with faster learning speed. Extensive experimental results demonstrate that the proposed metric has better performance and lower computational complexity than the relevant state-of-the-art approaches.
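
    The sketch below illustrates the general idea of measuring degradation through an NMF parts-based representation; the component count, patch size, synthetic images, and the final pooling step (a simple mean of coefficient differences instead of an extreme learning machine) are all simplifying assumptions, not the paper's method.

```python
import numpy as np
from sklearn.decomposition import NMF

def to_patches(img, p=8):
    """Split a grayscale image into non-overlapping p x p patches (one patch per row)."""
    h, w = img.shape
    img = img[:h - h % p, :w - w % p]
    return img.reshape(h // p, p, w // p, p).swapaxes(1, 2).reshape(-1, p * p)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                              # stand-in reference image
dist = np.clip(ref + 0.1 * rng.standard_normal(ref.shape), 0, 1)        # distorted copy

# Learn a parts-based dictionary from the reference patches.
nmf = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
h_ref = nmf.fit_transform(to_patches(ref))
h_dist = nmf.transform(to_patches(dist))

# Crude degradation descriptor: coefficient change per patch, pooled by the mean.
# (The paper pools its descriptors with an extreme learning machine instead.)
score = np.mean(np.abs(h_ref - h_dist))
print(f"degradation score: {score:.4f}")
```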

  18. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality delivered by two popular JPEG2000 programs. Both medical image compression programs implement JPEG2000, but they differ in interface, convenience, speed of computation, and characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. The quality of the reconstructed images was then evaluated using five objective metrics, and the Spearman rank correlation coefficients between the two programs were measured under every metric. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
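
    As a small sketch of the rank-agreement step, the snippet below computes a Spearman correlation between quality scores produced by two codecs; the PSNR values are placeholder numbers for illustration, not measurements from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical PSNR values for the same images compressed at matched ratios
# by two JPEG2000 implementations (placeholder numbers, not measured data).
psnr_apollo = np.array([42.1, 38.5, 35.2, 31.8, 28.9, 26.4])
psnr_jj2000 = np.array([41.9, 38.7, 35.0, 31.9, 28.8, 26.5])

rho, p_value = spearmanr(psnr_apollo, psnr_jj2000)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
# A rho close to 1 with a small p-value indicates the two programs rank
# image quality consistently, as reported in the study (r > 0.98).
```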

  19. Comparative Assessment of Physical and Social Determinants of Water Quantity and Water Quality Concerns

    NASA Astrophysics Data System (ADS)

    Gunda, T.; Hornberger, G. M.

    2017-12-01

    Concerns over water resources have evolved over time, from physical availability to economic access and, recently, to a more comprehensive study of "water security," which is inherently interdisciplinary because a secure water system is influenced by and affects both physical and social components. The concept of water security carries connotations of both an adequate supply of water and water that meets certain quality standards. Although the term "water security" has many interpretations in the literature, the research field has not yet developed a synthetic analysis of water security as both a quantity (availability) and quality (contamination) issue. Using qualitative comparative and multi-regression analyses, we evaluate the primary physical and social factors influencing U.S. states' water security from a quantity perspective and from a quality perspective. Water system characteristics are collated from academic and government sources and include access/use, governance, sociodemographic, and ecosystem metrics. Our analysis indicates differences in the variables driving availability and contamination concerns; for example, climate is a more significant determinant in water quantity-based security analyses than in water quality-based security analyses. We will also discuss coevolution of system traits and the merits of constructing a robust water security index based on the relative importance of metrics from our analyses. These insights will improve understanding of the complex interactions between quantity and quality aspects and, thus, the overall security of water systems.

  20. Mechanistic Sediment Quality Guidelines Based on Contaminant Bioavailability: Equilibrium Partitioning Sediment Benchmarks

    EPA Science Inventory

    Globally, billions of metric tons of contaminated sediments are present in aquatic systems representing a potentially significant ecological risk. Estimated costs to manage (i.e., remediate and monitor) these sediments are in the billions of U.S. dollars. Biologically-based app...

  1. A zone-specific fish-based biotic index as a management tool for the Zeeschelde estuary (Belgium).

    PubMed

    Breine, Jan; Quataert, Paul; Stevens, Maarten; Ollevier, Frans; Volckaert, Filip A M; Van den Bergh, Ericia; Maes, Joachim

    2010-07-01

    Fish-based indices monitor changes in surface waters and are a valuable aid in communication by summarising complex information about the environment (Harrison and Whitfield, 2004). A zone-specific fish-based multimetric estuarine index of biotic integrity (Z-EBI) was developed based on a 13 year time series of fish surveys from the Zeeschelde estuary (Belgium). Sites were pre-classified using indicators of anthropogenic impact. Metrics showing a monotone response with pressure classes were selected for further analysis. Thresholds for the good ecological potential (GEP) were defined from references. A modified trisection was applied for the other thresholds. The Z-EBI is defined by the average of the metric scores calculated over a one year period and translated into an ecological quality ratio (EQR). The indices integrate structural and functional qualities of the estuarine fish communities. The Z-EBI performances were successfully validated for habitat degradation in the various habitat zones. Copyright 2010 Elsevier Ltd. All rights reserved.

  2. Image Correlation Pattern Optimization for Micro-Scale In-Situ Strain Measurements

    NASA Technical Reports Server (NTRS)

    Bomarito, G. F.; Hochhalter, J. D.; Cannon, A. H.

    2016-01-01

    The accuracy and precision of digital image correlation (DIC) are a function of three primary ingredients: image acquisition, image analysis, and the subject of the image. Development of the first two (i.e., image acquisition techniques and image correlation algorithms) has led to widespread use of DIC; however, fewer developments have been focused on the third ingredient. Typically, subjects of DIC images are mechanical specimens with either a natural surface pattern or a pattern applied to the surface. Research in the area of DIC patterns has primarily been aimed at identifying which surface patterns are best suited for DIC, by comparing patterns to each other. Because the easiest and most widespread methods of applying patterns have a high degree of randomness associated with them (e.g., airbrush, spray paint, particle decoration, etc.), less effort has been spent on exact construction of ideal patterns. With the development of patterning techniques such as microstamping and lithography, patterns can be applied to a specimen pixel by pixel from a patterned image. In these cases, especially because the patterns are reused many times, an optimal pattern is sought such that error introduced into DIC from the pattern is minimized. DIC consists of tracking the motion of an array of nodes from a reference image to a deformed image. Every pixel in the images has an associated intensity (grayscale) value, with discretization depending on the bit depth of the image. Because individual pixel matching by intensity value yields a non-unique, scale-dependent problem, subsets around each node are used for identification. A correlation criterion is used to find the best match of a particular subset of a reference image within a deformed image. The reader is referred to references for enumerations of typical correlation criteria. As illustrated by Schreier and Sutton, and by Lu and Cary, systematic errors can be introduced by representing the underlying deformation with under-matched shape functions. An important implication, as discussed by Sutton et al., is that in the presence of highly localized deformations (e.g., crack fronts), error can be reduced by minimizing the subset size. In other words, smaller subsets allow more accurate resolution of localized deformations. Conversely, the choice of optimal subset size has been widely studied and a general consensus is that larger subsets with more information content are less prone to random error. Thus, an optimal subset size balances the systematic error from under-matched deformations with random error from measurement noise. The alternative approach pursued in the current work is to choose a small subset size and optimize the information content within it (i.e., optimizing an applied DIC pattern), rather than finding an optimal subset size. In the literature, many pattern quality metrics have been proposed, e.g., sum of square of subset intensity gradient (SSSIG), mean subset fluctuation, gray level co-occurrence, autocorrelation-based metrics, and speckle-based metrics. The majority of these metrics were developed to quantify the quality of common pseudo-random patterns after they have been applied, and were not created with the intent of pattern generation. As such, it is found that none of the metrics examined in this study are fit to be the objective function of a pattern generation optimization. In some cases, such as with speckle-based metrics, application to pixel by pixel patterns is ill-conditioned and requires somewhat arbitrary extensions.
In other cases, such as with the SSSIG, it is shown that trivial solutions exist for the optimum of the metric which are ill-suited for DIC (such as a checkerboard pattern). In the current work, a multi-metric optimization method is proposed whereby quality is viewed as a combination of individual quality metrics. Specifically, SSSIG and two auto-correlation metrics are used which have generally competitive objectives. Thus, each metric could be viewed as a constraint imposed upon the others, thereby precluding the achievement of their trivial solutions. In this way, optimization produces a pattern which balances the benefits of multiple quality metrics. The resulting pattern, along with randomly generated patterns, is subjected to numerical deformations and analyzed with DIC software. The optimal pattern is shown to outperform randomly generated patterns.
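
    The following is a minimal sketch of the SSSIG pattern quality metric mentioned above, computed for a single subset with simple central differences; the subset size and test patterns are illustrative assumptions.

```python
import numpy as np

def sssig(subset):
    """Sum of squared intensity gradients over a DIC subset.
    Larger values generally indicate more matchable texture."""
    gy, gx = np.gradient(subset.astype(float))
    return float(np.sum(gx**2 + gy**2))

rng = np.random.default_rng(1)
speckle = rng.random((21, 21))          # random speckle-like subset
flat = np.full((21, 21), 0.5)           # featureless subset

print(sssig(speckle), sssig(flat))       # the flat subset scores ~0
# Note the degenerate optimum discussed above: a pixel-level checkerboard
# maximizes SSSIG yet is ill-suited for DIC, which is why the authors combine
# SSSIG with autocorrelation-based metrics during pattern optimization.
```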

  3. Sleep state classification using pressure sensor mats.

    PubMed

    Baran Pouyan, M; Nourani, M; Pompeo, M

    2015-08-01

    Sleep state detection is valuable in assessing a patient's sleep quality and general in-bed behavior. In this paper, a novel approach to classifying sleep states (sleep, pre-wake, wake) is proposed that uses only surface pressure sensors. In our method, a mobility metric is defined based on successive pressure body maps. Suitable statistical features are then computed from the mobility metric. Finally, a customized random forest classifier is employed to identify the various classes, including a new class for the pre-wake state. Our algorithm achieves 96.1% and 88% accuracy for two-class (sleep, wake) and three-class (sleep, pre-wake, wake) identification, respectively.
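
    A minimal sketch of this kind of pipeline is shown below; the mobility definition (mean absolute frame-to-frame change), window length, feature set, synthetic pressure data, and two-class setup are all assumptions for illustration, not the authors' exact method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def mobility(frames):
    """Per-interval mobility: mean absolute change between successive pressure maps."""
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def features(frames, win=10):
    """Mean/std/max of the mobility signal over non-overlapping windows."""
    m = mobility(frames)
    m = m[: len(m) - len(m) % win].reshape(-1, win)
    return np.column_stack([m.mean(1), m.std(1), m.max(1)])

# Synthetic pressure-mat recordings: "sleep" frames change little, "wake" frames change a lot.
sleep = rng.normal(0, 0.05, (501, 16, 16)).cumsum(0) * 0.1
wake = rng.normal(0, 0.5, (501, 16, 16)).cumsum(0) * 0.1
X = np.vstack([features(sleep), features(wake)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```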

  4. Validation of a Quality Management Metric

    DTIC Science & Technology

    2000-09-01

    A quality management metric (QMM) was used to measure the performance of ten software managers on Department of Defense (DoD) software development programs. Informal verification and validation of the metric compared the QMM score to an overall program success score for the entire program and yielded positive correlation. The results of applying the QMM can be used to characterize the quality of software management and can serve as a template to improve software management performance. Future work includes further refining the QMM and applying the QMM scores to provide feedback.

  5. Compression performance comparison in low delay real-time video for mobile applications

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2012-10-01

    This article compares the performance of several current video coding standards under low-delay, real-time conditions in a resource-constrained environment. The comparison is performed using the same content and the same mix of objective and perceptual quality metrics. The metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Multiple standards are compared: MPEG-2, MPEG-4, and MPEG-4 AVC, as well as H.263. The metrics used in the comparison include SSIM, VQM, and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation in the coding scheme development process. The performance and the correlation of results are presented as a predictor of the performance of video compression schemes.
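
    As a small sketch of the objective-metric step, the snippet below computes SSIM and PSNR between an original and a "decoded" frame using scikit-image; the frames are synthetic placeholders, and VQM/DVQ are not reproduced here.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
frame = rng.random((144, 176))                                            # stand-in QCIF-sized frame
coded = np.clip(frame + 0.05 * rng.standard_normal(frame.shape), 0, 1)   # simulated decoded frame

ssim = structural_similarity(frame, coded, data_range=1.0)
psnr = peak_signal_noise_ratio(frame, coded, data_range=1.0)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.1f} dB")
# In a codec comparison these per-frame scores would be averaged over the
# sequence and tabulated per standard (MPEG-2, MPEG-4, AVC, H.263).
```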

  6. High-quality cardiopulmonary resuscitation.

    PubMed

    Nolan, Jerry P

    2014-06-01

    The quality of cardiopulmonary resuscitation (CPR) impacts on outcome after cardiac arrest. This review will explore the factors that contribute to high-quality CPR and the metrics that can be used to monitor performance. A recent consensus statement from North America defined five key components of high-quality CPR: minimizing interruptions in chest compressions, providing compressions of adequate rate and depth, avoiding leaning on the chest between compressions, and avoiding excessive ventilation. Studies have shown that real-time feedback devices improve the quality of CPR and, in one before-and-after study, outcome from out-of-hospital cardiac arrest. There is evidence for increasing survival rates following out-of-hospital cardiac arrest and this is associated with increasing rates of bystander CPR. The quality of CPR provided by healthcare professionals can be improved with real-time feedback devices. The components of high-quality CPR and the metrics that can be measured and fed back to healthcare professionals have been defined by expert consensus. In the future, real-time feedback based on the physiological responses to CPR may prove more effective.

  7. Determination of a Screening Metric for High Diversity DNA Libraries.

    PubMed

    Guido, Nicholas J; Handerson, Steven; Joseph, Elaine M; Leake, Devin; Kung, Li A

    2016-01-01

    The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space and, therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule of thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper-bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.
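
    The sketch below illustrates the coupon-collector idea for an idealized library of equally represented variants; real libraries are skewed (which the authors account for using sequencing-derived frequencies), so the numbers are only indicative.

```python
import numpy as np

def expected_screens(n_variants, coverage):
    """Expected number of clones to sample in order to observe `coverage` (0-1)
    of an idealized library of `n_variants` equally likely variants
    (coupon-collector expectation: n * (H_n - H_{n-k}))."""
    k = int(np.ceil(coverage * n_variants))
    harmonic = np.cumsum(1.0 / np.arange(1, n_variants + 1))
    h_n = harmonic[-1]
    h_rest = harmonic[n_variants - k - 1] if n_variants - k > 0 else 0.0
    return n_variants * (h_n - h_rest)

n = 10_000
for cov in (0.50, 0.90, 0.99):
    print(f"{cov:.0%} of {n} variants -> ~{expected_screens(n, cov):,.0f} clones")
# Oversampling grows sharply with target coverage: roughly 0.7x the library
# size for 50% coverage but about 4.6x for 99% coverage.
```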

  8. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor

    PubMed Central

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-01-01

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190

  9. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor.

    PubMed

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-09-15

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation.

  10. Intelligent Systems Approaches to Product Sound Quality Analysis

    NASA Astrophysics Data System (ADS)

    Pietila, Glenn M.

    As a product market becomes more competitive, consumers become more discriminating in the way in which they differentiate between engineered products. The consumer often makes a purchasing decision based on the sound emitted from the product during operation by using the sound to judge quality or annoyance. Therefore, in recent years, many sound quality analysis tools have been developed to evaluate the consumer preference as it relates to a product sound and to quantify this preference based on objective measurements. This understanding can be used to direct a product design process in order to help differentiate the product from competitive products or to establish an impression on consumers regarding a product's quality or robustness. The sound quality process is typically a statistical tool that is used to model subjective preference, or merit score, based on objective measurements, or metrics. In this way, new product developments can be evaluated in an objective manner without the laborious process of gathering a sample population of consumers for subjective studies each time. The most common model used today is the Multiple Linear Regression (MLR), although recently non-linear Artificial Neural Network (ANN) approaches are gaining popularity. This dissertation will review publicly available published literature and present additional intelligent systems approaches that can be used to improve on the current sound quality process. The focus of this work is to address shortcomings in the current paired comparison approach to sound quality analysis. This research will propose a framework for an adaptive jury analysis approach as an alternative to the current Bradley-Terry model. The adaptive jury framework uses statistical hypothesis testing to focus on sound pairings that are most interesting and is expected to address some of the restrictions required by the Bradley-Terry model. It will also provide a more amicable framework for an intelligent systems approach. Next, an unsupervised jury clustering algorithm is used to identify and classify subgroups within a jury who have conflicting preferences. In addition, a nested Artificial Neural Network (ANN) architecture is developed to predict subjective preference based on objective sound quality metrics, in the presence of non-linear preferences. Finally, statistical decomposition and correlation algorithms are reviewed that can help an analyst establish a clear understanding of the variability of the product sounds used as inputs into the jury study and to identify correlations between preference scores and sound quality metrics in the presence of non-linearities.
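
    As a small sketch of the conventional regression step described above, the snippet below fits a multiple linear regression predicting a jury merit score from objective sound quality metrics; the metric names and all data are fabricated placeholders, and a (nested) ANN would replace this model when preferences are non-linear.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder objective metrics for 40 recorded product sounds; the columns
# might represent loudness, sharpness, and roughness (illustrative only).
X = rng.random((40, 3)) * [20.0, 3.0, 1.5]
# Simulated jury merit scores with a known linear preference plus noise.
y = 8.0 - 0.25 * X[:, 0] - 1.0 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(0, 0.3, 40)

mlr = LinearRegression().fit(X, y)
print("coefficients:", mlr.coef_, "R^2:", round(mlr.score(X, y), 3))
```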

  11. A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output

    PubMed Central

    Stevanovic, Stefan; Pervan, Boris

    2018-01-01

    We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS phase-lock loop (PLL) linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
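
    Below is a minimal sketch of the proposed metric applied to simulated discriminator outputs, assuming an arctangent discriminator with a ±π/2 pull-in region (so the threshold is taken as π/4 rad); the simulated noise level is arbitrary and the augmented linear model itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated discriminator output (estimated phase error) over 1000 epochs, in radians.
phase_error = rng.normal(loc=0.0, scale=0.18, size=1000)

sigma_track = phase_error.std()      # tracking-error standard deviation (the proposed metric)
threshold = 0.5 * (np.pi / 2)        # half of the assumed +/- pi/2 arctangent pull-in region

print(f"sigma = {sigma_track:.3f} rad, threshold = {threshold:.3f} rad, "
      f"{'OK' if sigma_track < threshold else 'loss-of-lock risk'}")
```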

  12. No-reference image quality assessment for horizontal-path imaging scenarios

    NASA Astrophysics Data System (ADS)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.

  13. Automatic red eye correction and its quality metric

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, and thereby making photos more pleasant for an observer, is an important task. A novel, efficient technique for automatic correction of red eyes aimed at photo printers is proposed. This algorithm is independent of face orientation and is capable of detecting paired red eyes as well as single red eyes. The approach is based on 3D tables with typicalness levels for red eyes and human skin tones, and on directional edge detection filters for processing of the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening, and blending with the initial image. Several implementations of the approach are possible, trading off detection and correction quality, processing time, and memory footprint. A numeric quality criterion for automatic red eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.

  14. Software Quality Assurance Metrics

    NASA Technical Reports Server (NTRS)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the software life cycle. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.

  15. A framework for quantification of groundwater dynamics - redundancy and transferability of hydro(geo-)logical metrics

    NASA Astrophysics Data System (ADS)

    Heudorfer, Benedikt; Haaf, Ezra; Barthel, Roland; Stahl, Kerstin

    2017-04-01

    A new framework for quantification of groundwater dynamics has been proposed in a companion study (Haaf et al., 2017). In this framework, a number of conceptual aspects of dynamics, such as seasonality, regularity, flashiness or inter-annual forcing, are described and then linked to quantitative metrics. A large number of possible metrics are readily available in the literature, such as Pardé Coefficients, Colwell's Predictability Indices or the Base Flow Index. In the present work, we focus on finding multicollinearity, and in consequence redundancy, among the metrics representing the different patterns of dynamics found in groundwater hydrographs. This also serves to verify the categories of dynamics aspects suggested by Haaf et al. (2017). To determine the optimal set of metrics, we need to balance the desired minimum number of metrics against the desired maximum descriptive power of the metrics. To do this, a substantial number of candidate metrics are applied to a diverse set of groundwater hydrographs from France, Germany and Austria within the northern alpine and peri-alpine region. By applying Principal Component Analysis (PCA) to the correlation matrix of the metrics, we determine a limited number of relevant metrics that describe the majority of variation in the dataset. The resulting reduced set of metrics comprises an optimized set that can be used to describe the aspects of dynamics identified within the groundwater dynamics framework. For some aspects of dynamics a single significant metric could be attributed. Other aspects have a fuzzier quality that can only be described by an ensemble of metrics and are re-evaluated. The PCA is furthermore applied to groups of groundwater hydrographs containing regimes of similar behaviour in order to explore transferability when applying the metric-based characterization framework to groups of hydrographs from diverse groundwater systems. In conclusion, we identify an optimal set of metrics, readily available for use in studies of groundwater dynamics, intended to help overcome analytical limitations that exist due to the complexity of groundwater dynamics. Haaf, E., Heudorfer, B., Stahl, K., Barthel, R., 2017. A framework for quantification of groundwater dynamics - concepts and hydro(geo-)logical metrics. EGU General Assembly 2017, Vienna, Austria.
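
    A minimal sketch of the redundancy analysis described above is shown below: PCA applied to the correlation matrix of candidate metrics. The metric matrix is a random stand-in with two deliberately redundant columns, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in matrix: 200 groundwater hydrographs x 6 candidate metrics,
# with two deliberately redundant columns to illustrate the idea.
base = rng.standard_normal((200, 4))
metrics = np.column_stack([
    base,
    base[:, 0] * 0.9 + 0.1 * rng.standard_normal(200),
    base[:, 1] * 0.95 + 0.05 * rng.standard_normal(200),
])

corr = np.corrcoef(metrics, rowvar=False)      # correlation matrix of the metrics
eigvals, eigvecs = np.linalg.eigh(corr)        # PCA via eigendecomposition
explained = eigvals[::-1] / eigvals.sum()      # explained variance, descending

print("explained variance ratio:", np.round(explained, 3))
# Roughly four components carry most of the variance here, signalling that two
# of the six candidate metrics are largely redundant and could be dropped.
```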

  16. Clustered-dot halftoning with direct binary search.

    PubMed

    Goyal, Puneet; Gupta, Madhur; Staelin, Carl; Fischer, Mani; Shacham, Omri; Allebach, Jan P

    2013-02-01

    In this paper, we present a new algorithm for aperiodic clustered-dot halftoning based on direct binary search (DBS). The DBS optimization framework has been modified for designing clustered-dot texture, by using filters with different sizes in the initialization and update steps of the algorithm. Following an intuitive explanation of how the clustered-dot texture results from this modified framework, we derive a closed-form cost metric which, when minimized, equivalently generates stochastic clustered-dot texture. An analysis of the cost metric and its influence on the texture quality is presented, which is followed by a modification to the cost metric to reduce computational cost and to make it more suitable for screen design.

  17. Modified fuzzy c-means applied to a Bragg grating-based spectral imager for material clustering

    NASA Astrophysics Data System (ADS)

    Rodríguez, Aida; Nieves, Juan Luis; Valero, Eva; Garrote, Estíbaliz; Hernández-Andrés, Javier; Romero, Javier

    2012-01-01

    We have modified the fuzzy c-means algorithm for an application related to segmentation of hyperspectral images. The classical fuzzy c-means algorithm uses the Euclidean distance for computing sample membership to each cluster. We have introduced a different distance metric, the Spectral Similarity Value (SSV), in order to have a similarity measure better suited to reflectance information. The SSV distance metric considers both magnitude difference (by the use of Euclidean distance) and spectral shape (by the use of Pearson correlation). Experiments confirmed that the introduction of this metric improves the quality of hyperspectral image segmentation, creating spectrally denser clusters and increasing the number of correctly classified pixels.
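
    Below is a sketch of a spectral-similarity-style distance that combines a Euclidean magnitude term with a Pearson-correlation shape term; the exact combination used in the paper may differ, so the formula, normalization, and toy spectra here are assumptions for illustration only.

```python
import numpy as np

def ssv_distance(s1, s2):
    """Spectral-similarity-style distance between two reflectance spectra:
    combines a normalized Euclidean magnitude difference with a (1 - r^2)
    shape term, where r is the Pearson correlation (formula assumed)."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    d_e = np.linalg.norm(s1 - s2) / np.sqrt(s1.size)   # magnitude difference
    r = np.corrcoef(s1, s2)[0, 1]                      # spectral shape similarity
    return np.sqrt(d_e**2 + (1.0 - r**2)**2)

wavelengths = np.linspace(400, 700, 31)
grass = 0.3 + 0.2 * np.exp(-((wavelengths - 550) / 40) ** 2)   # toy green-peak spectrum
bright_grass = 1.5 * grass                                     # same shape, brighter
soil = 0.1 + 0.001 * (wavelengths - 400)                       # different shape

print(ssv_distance(grass, bright_grass))  # small: shapes match despite brightness change
print(ssv_distance(grass, soil))          # larger: shapes differ
```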

  18. Application of online measures to monitor and evaluate multiplatform fusion performance

    NASA Astrophysics Data System (ADS)

    Stubberud, Stephen C.; Kowalski, Charlene; Klamer, Dale M.

    1999-07-01

    A primary concern of multiplatform data fusion is assessing the quality and utility of data shared among platforms. Constraints such as platform and sensor capability and task load necessitate an on-line system that computes a metric to determine which other platform can provide the best data for processing. To determine data quality, we are implementing an approach based on entropy coupled with intelligent agents. Entropy measures the quality of processed information such as localization, classification, and ambiguity in measurement-to-track association. Lower entropy scores imply less uncertainty about a particular target. When new information is provided, we compute the level of improvement a particular track obtains from one measurement to another. The measure permits us to evaluate the utility of the new information. We couple entropy with intelligent agents that provide two main data-gathering functions: estimation of another platform's performance and evaluation of the new measurement data's quality. Both functions result from the entropy metric. The intelligent agent on a platform makes an estimate of another platform's measurement and provides it to its own fusion system, which can then incorporate it for a particular target. A resulting entropy measure is then calculated and returned to its own agent. From this metric, the agent determines a perceived value of the offboard platform's measurement. If the value is satisfactory, the agent requests the measurement from the other platform, usually by interacting with the other platform's agent. Once the actual measurement is received, entropy is again computed and the agent assesses its estimation process and refines it accordingly.
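
    The snippet below is a minimal sketch of the entropy idea: the drop in entropy of a track's classification distribution after fusing an offboard measurement serves as a utility score. The class labels and probability values are invented for illustration.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete classification distribution."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Track classification belief before and after fusing a hypothetical offboard
# measurement: [fighter, bomber, commercial, unknown].
before = [0.35, 0.30, 0.20, 0.15]
after = [0.70, 0.15, 0.10, 0.05]

gain = entropy(before) - entropy(after)
print(f"entropy {entropy(before):.3f} -> {entropy(after):.3f} bits, utility = {gain:.3f}")
# An agent could request the offboard measurement only when the predicted
# entropy reduction exceeds some threshold.
```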

  19. Automated characterization of perceptual quality of clinical chest radiographs: Validation and calibration to observer preference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samei, Ehsan, E-mail: samei@duke.edu; Lin, Yuan; Choudhury, Kingshuk R.

    Purpose: The authors previously proposed an image-based technique [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] to assess the perceptual quality of clinical chest radiographs. In this study, an observer study was designed and conducted to validate the output of the program against rankings by expert radiologists and to establish the ranges of the output values that reflect the acceptable image appearance so the program output can be used for image quality optimization and tracking. Methods: Using an IRB-approved protocol, 2500 clinical chest radiographs (PA/AP) were collected from our clinical operation. The images were processed through our perceptual quality assessment program to measure their appearance in terms of ten metrics of perceptual image quality: lung gray level, lung detail, lung noise, rib–lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm–lung contrast, and subdiaphragm area. From the results, for each targeted appearance attribute/metric, 18 images were selected such that the images presented a relatively constant appearance with respect to all metrics except the targeted one. The images were then incorporated into a graphical user interface, which displayed them into three panels of six in a random order. Using a DICOM calibrated diagnostic display workstation and under low ambient lighting conditions, each of five participating attending chest radiologists was tasked to spatially order the images based only on the targeted appearance attribute regardless of the other qualities. Once ordered, the observer also indicated the range of image appearances that he/she considered clinically acceptable. The observer data were analyzed in terms of the correlations between the observer and algorithmic rankings and interobserver variability. An observer-averaged acceptable image appearance was also statistically derived for each quality attribute based on the collected individual acceptable ranges. Results: The observer study indicated that, for each image quality attribute, the averaged observer ranking strongly correlated with the algorithmic ranking (linear correlation coefficient R > 0.92), with highest correlation (R = 1) for lung gray level and the lowest (R = 0.92) for mediastinum noise. There was a strong concordance between the observers in terms of their rankings (i.e., Kendall’s tau agreement > 0.84). The observers also generally indicated similar tolerance and preference levels in terms of acceptable ranges, as 85% of the values were close to the overall tolerance or preference levels and the differences were smaller than 0.15. Conclusions: The observer study indicates that the previously proposed technique provides a robust reflection of the perceptual image quality in clinical images. The results established the range of algorithmic outputs for each metric that can be used to quantitatively assess and qualify the appearance quality of clinical chest radiographs.

  20. Contrast-based sensorless adaptive optics for retinal imaging.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew

    2015-09-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.

  1. Disturbance metrics predict a wetland Vegetation Index of Biotic Integrity

    USGS Publications Warehouse

    Stapanian, Martin A.; Mack, John; Adams, Jean V.; Gara, Brian; Micacchion, Mick

    2013-01-01

    Indices of biological integrity of wetlands based on vascular plants (VIBIs) have been developed in many areas in the USA. Knowledge of the best predictors of VIBIs would enable management agencies to make better decisions regarding mitigation site selection and performance monitoring criteria. We use a novel statistical technique to develop predictive models for an established index of wetland vegetation integrity (Ohio VIBI), using as independent variables 20 indices and metrics of habitat quality, wetland disturbance, and buffer area land use from 149 wetlands in Ohio, USA. For emergent and forest wetlands, predictive models explained 61% and 54% of the variability, respectively, in Ohio VIBI scores. In both cases the most important predictor of Ohio VIBI score was a metric that assessed habitat alteration and development in the wetland. Of secondary importance as a predictor was a metric that assessed microtopography, interspersion, and quality of vegetation communities in the wetland. Metrics and indices assessing disturbance and land use of the buffer area were generally poor predictors of Ohio VIBI scores. Our results suggest that vegetation integrity of emergent and forest wetlands could be most directly enhanced by minimizing substrate and habitat disturbance within the wetland. Such efforts could include reducing or eliminating any practices that disturb the soil profile, such as nutrient enrichment from adjacent farm land, mowing, grazing, or cutting or removing woody plants.

  2. Modeling the interannual variability of microbial quality metrics of irrigation water in a Pennsylvania stream.

    PubMed

    Hong, Eun-Mi; Shelton, Daniel; Pachepsky, Yakov A; Nam, Won-Ho; Coppock, Cary; Muirhead, Richard

    2017-02-01

    Knowledge of the microbial quality of irrigation waters is extremely limited. For this reason, the US FDA has promulgated the Produce Rule, mandating the testing of irrigation water sources for many farms. The rule requires the collection and analysis of at least 20 water samples over two to four years to adequately evaluate the quality of water intended for produce irrigation. The objective of this work was to evaluate the effect of interannual weather variability on surface water microbial quality. We used the Soil and Water Assessment Tool model to simulate E. coli concentrations in the Little Cove Creek; this is a perennial creek located in an agricultural watershed in south-eastern Pennsylvania. The model performance was evaluated using the US FDA regulatory microbial water quality metrics of geometric mean (GM) and the statistical threshold value (STV). Using the 90-year time series of weather observations, we simulated and randomly sampled the time series of E. coli concentrations. We found that weather conditions of a specific year may strongly affect the evaluation of microbial quality and that the long-term assessment of microbial water quality may be quite different from the evaluation based on short-term observations. The variations in microbial concentrations and water quality metrics were affected by location, wetness of the hydrological years, and seasonality, with 15.7-70.1% of samples exceeding the regulatory threshold. The results of this work demonstrate the value of using modeling to design and evaluate monitoring protocols to assess the microbial quality of water used for produce irrigation. Copyright © 2016 Elsevier Ltd. All rights reserved.
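
    The sketch below computes the two regulatory metrics named above, the geometric mean (GM) and the statistical threshold value (STV), from a set of E. coli counts. The STV is taken here as the lognormal-based 90th-percentile estimate (z = 1.282), which is a common formulation but should be verified against the applicable regulatory text; the sample values are invented.

```python
import numpy as np

def gm_stv(cfu_per_100ml):
    """Geometric mean (GM) and statistical threshold value (STV) of E. coli counts.
    STV is computed as the lognormal 90th-percentile estimate,
    10**(mean(log10 x) + 1.282 * std(log10 x)) -- an assumption to verify."""
    logs = np.log10(np.asarray(cfu_per_100ml, float))
    gm = 10 ** logs.mean()
    stv = 10 ** (logs.mean() + 1.282 * logs.std(ddof=1))
    return gm, stv

# Twenty hypothetical irrigation-water samples (CFU E. coli / 100 mL).
samples = [45, 120, 88, 30, 410, 95, 60, 150, 75, 200,
           33, 180, 22, 90, 260, 70, 110, 55, 140, 85]
gm, stv = gm_stv(samples)
print(f"GM = {gm:.0f}, STV = {stv:.0f} CFU/100 mL")
# These would be compared against the FDA Produce Safety Rule criteria
# (GM <= 126 and STV <= 410 CFU/100 mL).
```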

  3. A comparison of metrics to evaluate the effects of hydro-facility passage stressors on fish

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colotelo, Alison H.; Goldman, Amy E.; Wagner, Katie A.

    Hydropower is the most common form of renewable energy, and countries worldwide are considering expanding hydropower to new areas. One of the challenges of hydropower deployment is mitigation of the environmental impacts including water quality, habitat alterations, and ecosystem connectivity. For fish species that inhabit river systems with hydropower facilities, passage through the facility to access spawning and rearing habitats can be particularly challenging. Fish moving downstream through a hydro-facility can be exposed to a number of stressors (e.g., rapid decompression, shear forces, blade strike and collision, and turbulence), which can all affect fish survival in direct and indirect ways. Many studies have investigated the effects of hydro-turbine passage on fish; however, the comparability among studies is limited by variation in the metrics and biological endpoints used. Future studies investigating the effects of hydro-turbine passage should focus on using metrics and endpoints that are easily comparable. This review summarizes four categories of metrics that are used in fisheries research and have application to hydro-turbine passage (i.e., mortality, injury, molecular metrics, behavior) and evaluates them based on several criteria (i.e., resources needed, invasiveness, comparability among stressors and species, and diagnostic properties). Additionally, these comparisons are put into context of study setting (i.e., laboratory vs. field). Overall, injury and molecular metrics are ideal for studies in which there is a need to understand the mechanisms of effect, whereas behavior and mortality metrics provide information on the whole body response of the fish. The study setting strongly influences the comparability among studies. In laboratory-based studies, stressors can be controlled by both type and magnitude, allowing for easy comparisons among studies. In contrast, field studies expose fish to realistic passage environments but the comparability is limited. Based on these results, future studies, whether lab or field-based, should focus on metrics that relate to mortality for ease of comparison.

  4. Multidisciplinary life cycle metrics and tools for green buildings.

    PubMed

    Helgeson, Jennifer F; Lippiatt, Barbara C

    2009-07-01

    Building sector stakeholders need compelling metrics, tools, data, and case studies to support major investments in sustainable technologies. Proponents of green building widely claim that buildings integrating sustainable technologies are cost effective, but often these claims are based on incomplete, anecdotal evidence that is difficult to reproduce and defend. The claims suffer from 2 main weaknesses: 1) buildings on which claims are based are not necessarily "green" in a science-based, life cycle assessment (LCA) sense and 2) measures of cost effectiveness often are not based on standard methods for measuring economic worth. Yet, the building industry demands compelling metrics to justify sustainable building designs. The problem is hard to solve because, until now, neither methods nor robust data supporting defensible business cases were available. The US National Institute of Standards and Technology (NIST) Building and Fire Research Laboratory is beginning to address these needs by developing metrics and tools for assessing the life cycle economic and environmental performance of buildings. Economic performance is measured with the use of standard life cycle costing methods. Environmental performance is measured by LCA methods that assess the "carbon footprint" of buildings, as well as 11 other sustainability metrics, including fossil fuel depletion, smog formation, water use, habitat alteration, indoor air quality, and effects on human health. Carbon efficiency ratios and other eco-efficiency metrics are established to yield science-based measures of the relative worth, or "business cases," for green buildings. Here, the approach is illustrated through a realistic building case study focused on heating, ventilation, and air conditioning technologies with different energy efficiencies. Additionally, the evolution of the Building for Environmental and Economic Sustainability multidisciplinary team and future plans in this area are described.

  5. Proteomics Quality Control: Quality Control Software for MaxQuant Results.

    PubMed

    Bielow, Chris; Mastrobuoni, Guido; Kempa, Stefan

    2016-03-04

    Mass spectrometry-based proteomics coupled to liquid chromatography has matured into an automatized, high-throughput technology, producing data on the scale of multiple gigabytes per instrument per day. Consequently, an automated quality control (QC) and quality analysis (QA) capable of detecting measurement bias, verifying consistency, and avoiding propagation of error is paramount for instrument operators and scientists in charge of downstream analysis. We have developed an R-based QC pipeline called Proteomics Quality Control (PTXQC) for bottom-up LC-MS data generated by the MaxQuant software pipeline. PTXQC creates a QC report containing a comprehensive and powerful set of QC metrics, augmented with automated scoring functions. The automated scores are collated to create an overview heatmap at the beginning of the report, giving valuable guidance also to nonspecialists. Our software supports a wide range of experimental designs, including stable isotope labeling by amino acids in cell culture (SILAC), tandem mass tags (TMT), and label-free data. Furthermore, we introduce new metrics to score MaxQuant's Match-between-runs (MBR) functionality by which peptide identifications can be transferred across Raw files based on accurate retention time and m/z. Last but not least, PTXQC is easy to install and use and represents the first QC software capable of processing MaxQuant result tables. PTXQC is freely available at https://github.com/cbielow/PTXQC .

  6. A Comparison of the Performance of Efficient Data Analysis Versus Fine Particle Dose as Metrics for the Quality Control of Aerodynamic Particle Size Distributions of Orally Inhaled Pharmaceuticals.

    PubMed

    Tougas, Terrence P; Goodey, Adrian P; Hardwell, Gareth; Mitchell, Jolyon; Lyapustina, Svetlana

    2017-02-01

    The performance of two quality control (QC) tests for aerodynamic particle size distributions (APSD) of orally inhaled drug products (OIPs) is compared. One of the tests is based on the fine particle dose (FPD) metric currently expected by the European regulators. The other test, called efficient data analysis (EDA), uses the ratio of large particle mass to small particle mass (LPM/SPM), along with impactor sized mass (ISM), to detect changes in APSD for QC purposes. The comparison is based on analysis of APSD data from four products (two different pressurized metered dose inhalers (MDIs) and two dry powder inhalers (DPIs)). It is demonstrated that in each case, EDA is able to detect shifts and abnormalities that FPD misses. The lack of sensitivity on the part of FPD is due to its "aggregate" nature, since FPD is a univariate measure of all particles less than about 5 μm aerodynamic diameter, and shifts or changes within the range encompassed by this metric may go undetected. EDA is thus shown to be superior to FPD for routine control of OIP quality. This finding augments previously reported superiority of EDA compared with impactor stage groupings (favored by US regulators) for incorrect rejections (type I errors) when incorrect acceptances (type II errors) were adjusted to the same probability for both approaches. EDA is therefore proposed as a method of choice for routine quality control of OIPs in both European and US regulatory environments.
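
    The snippet below is a minimal sketch of the EDA quantities named above (impactor sized mass and the LPM/SPM ratio) computed from cascade impactor stage deposits; the stage labels, masses, the large/small boundary stage, and the ISM grouping are hypothetical, since these are product- and impactor-specific.

```python
# Hypothetical cascade impactor deposition (micrograms) per stage for one actuation.
stage_mass = {"stage1": 4.0, "stage2": 6.5, "stage3": 9.0, "stage4": 12.0,
              "stage5": 8.0, "stage6": 3.5, "stage7": 1.0, "MOC": 0.5}

# EDA splits the sized mass at a product-specific boundary stage; here the
# boundary is arbitrarily placed between stage3 and stage4 for illustration.
large_stages = ["stage1", "stage2", "stage3"]
small_stages = ["stage4", "stage5", "stage6", "stage7", "MOC"]

lpm = sum(stage_mass[s] for s in large_stages)   # large particle mass
spm = sum(stage_mass[s] for s in small_stages)   # small particle mass
ism = lpm + spm                                  # impactor sized mass (taken here as the sum of all listed stages)

print(f"ISM = {ism:.1f} ug, LPM/SPM = {lpm / spm:.2f}")
# A shift in LPM/SPM flags an APSD change even when the total ISM (and hence an
# aggregate metric such as fine particle dose) stays nearly constant.
```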

  7. Productivity in Pediatric Palliative Care: Measuring and Monitoring an Elusive Metric.

    PubMed

    Kaye, Erica C; Abramson, Zachary R; Snaman, Jennifer M; Friebert, Sarah E; Baker, Justin N

    2017-05-01

    Workforce productivity is poorly defined in health care. Particularly in the field of pediatric palliative care (PPC), the absence of consensus metrics impedes aggregation and analysis of data to track workforce efficiency and effectiveness. Lack of uniformly measured data also compromises the development of innovative strategies to improve productivity and hinders investigation of the link between productivity and quality of care, which are interrelated but not interchangeable. To review the literature regarding the definition and measurement of productivity in PPC; to identify barriers to productivity within traditional PPC models; and to recommend novel metrics to study productivity as a component of quality care in PPC. PubMed ® and Cochrane Database of Systematic Reviews searches for scholarly literature were performed using key words (pediatric palliative care, palliative care, team, workforce, workflow, productivity, algorithm, quality care, quality improvement, quality metric, inpatient, hospital, consultation, model) for articles published between 2000 and 2016. Organizational searches of Center to Advance Palliative Care, National Hospice and Palliative Care Organization, National Association for Home Care & Hospice, American Academy of Hospice and Palliative Medicine, Hospice and Palliative Nurses Association, National Quality Forum, and National Consensus Project for Quality Palliative Care were also performed. Additional semistructured interviews were conducted with directors from seven prominent PPC programs across the U.S. to review standard operating procedures for PPC team workflow and productivity. Little consensus exists in the PPC field regarding optimal ways to define, measure, and analyze provider and program productivity. Barriers to accurate monitoring of productivity include difficulties with identification, measurement, and interpretation of metrics applicable to an interdisciplinary care paradigm. In the context of inefficiencies inherent to traditional consultation models, novel productivity metrics are proposed. Further research is needed to determine optimal metrics for monitoring productivity within PPC teams. Innovative approaches should be studied with the goal of improving efficiency of care without compromising value. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  8. Centers for medicare and medicaid services: using an episode-based payment model to improve oncology care.

    PubMed

    Kline, Ronald M; Bazell, Carol; Smith, Erin; Schumacher, Heidi; Rajkumar, Rahul; Conway, Patrick H

    2015-03-01

    Cancer is a medically complex and expensive disease with costs projected to rise further as new treatment options increase and the United States population ages. Studies showing significant regional variation in oncology quality and costs and model tests demonstrating cost savings without adverse outcomes suggest there are opportunities to create a system of oncology care in the US that delivers higher quality care at lower cost. The Centers for Medicare and Medicaid Services (CMS) have designed an episode-based payment model centered around 6 month periods of chemotherapy treatment. Monthly per-patient care management payments will be made to practices to support practice transformation, including additional patient services and specific infrastructure enhancements. Quarterly reporting of quality metrics will drive continuous quality improvement and the adoption of best practices among participants. Practices achieving cost savings will also be eligible for performance-based payments. Savings are expected through improved care coordination and appropriately aligned payment incentives, resulting in decreased avoidable emergency department visits and hospitalizations and more efficient and evidence-based use of imaging, laboratory tests, and therapeutic agents, as well as improved end of life care. New therapies and better supportive care have significantly improved cancer survival in recent decades. This has come at a high cost, with cancer therapy consuming $124 billion in 2010. CMS has designed an episode-based model of oncology care that incorporates elements from several successful model tests. By providing care management and performance based payments in conjunction with quality metrics and a rapid learning environment, it is hoped that this model will demonstrate how oncology care in the US can transform into a high value, high quality system. Copyright © 2015 by American Society of Clinical Oncology.

  9. Quality Markers in Cardiology. Main Markers to Measure Quality of Results (Outcomes) and Quality Measures Related to Better Results in Clinical Practice (Performance Metrics). INCARDIO (Indicadores de Calidad en Unidades Asistenciales del Área del Corazón): A SEC/SECTCV Consensus Position Paper.

    PubMed

    López-Sendón, José; González-Juanatey, José Ramón; Pinto, Fausto; Cuenca Castillo, José; Badimón, Lina; Dalmau, Regina; González Torrecilla, Esteban; López-Mínguez, José Ramón; Maceira, Alicia M; Pascual-Figal, Domingo; Pomar Moya-Prats, José Luis; Sionis, Alessandro; Zamorano, José Luis

    2015-11-01

    Cardiology practice requires complex organization that impacts overall outcomes and may differ substantially among hospitals and communities. The aim of this consensus document is to define quality markers in cardiology, including markers to measure the quality of results (outcomes metrics) and quality measures related to better results in clinical practice (performance metrics). The document is mainly intended for the Spanish health care system and may serve as a basis for similar documents in other countries. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  10. Visual quality analysis for images degraded by different types of noise

    NASA Astrophysics Data System (ADS)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Ieremeyev, Oleg I.; Egiazarian, Karen O.; Astola, Jaakko T.

    2013-02-01

    Modern visual quality metrics take into account different peculiarities of the Human Visual System (HVS). One of them is described by the Weber-Fechner law and concerns the differing sensitivity to distortions in image fragments with different local mean values (intensity, brightness). We analyze how this property can be incorporated into the PSNR-HVS-M metric and show that it provides some improvement in performance. Then, the visual quality of color images corrupted by three types of i.i.d. noise (pure additive, pure multiplicative, and signal-dependent Poisson) is analyzed. Experiments with a group of observers are carried out for distorted color images created on the basis of the TID2008 database. Several modern HVS metrics are considered. It is shown that even the best metrics are unable to assess the visual quality of distorted images adequately. The reasons relate to the observer's attention to certain objects in the test images, i.e., to semantic aspects of vision, which are worth taking into account in the design of HVS metrics.
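
    A minimal sketch of how a Weber-Fechner style, local-mean-dependent weight might be folded into a PSNR-type measure; the reciprocal weighting form and the constant k are illustrative assumptions, not the actual PSNR-HVS-M modification evaluated in the paper.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def weber_weighted_psnr(ref, dist, peak=255.0, block=8, k=0.7):
            """PSNR variant in which squared errors are down-weighted in bright regions,
            mimicking reduced HVS sensitivity at higher local mean intensity.
            The weighting form and constant k are illustrative assumptions."""
            ref = np.asarray(ref, dtype=float)
            dist = np.asarray(dist, dtype=float)
            local_mean = uniform_filter(ref, size=block)      # local mean brightness
            weight = 1.0 / (1.0 + k * local_mean / peak)      # lower weight where brighter
            wmse = np.mean(weight * (ref - dist) ** 2)
            return float("inf") if wmse == 0 else 10.0 * np.log10(peak ** 2 / wmse)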

  11. Impact of artifact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data

    PubMed Central

    Carroll, Thomas S.; Liang, Ziwei; Salama, Rafik; Stark, Rory; de Santiago, Ines

    2014-01-01

    With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium's large scale analysis of transcription factor binding and epigenetic marks as well as concordant work on ChIP-seq by other laboratories has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has however not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency. PMID:24782889

  12. Implementation of the affordable care act: a case study of a service line co-management company.

    PubMed

    Lanese, Bethany

    2016-09-19

    Purpose The purpose of this paper is to test and measure the outcome of a community hospital in implementing the Affordable Care Act (ACA) through a co-management arrangement. RQ1: do the benefits of a co-management arrangement outweigh the costs? RQ2: does physician alignment aid in the effective implementation of the ACA directives set for hospitals? Design/methodology/approach A case study of a 350-bed non-profit community hospital co-management company. The quantitative data are eight quarters of quality metrics prior and eight quarters post establishment of the co-management company. The quality metrics are all based on standardized national requirements from the Joint Commission and Centers for Medicare and Medicaid Services guidelines. These measures directly impact the quality initiatives under the ACA that are applicable to all healthcare facilities. Qualitative data include survey results from hospital employees of the perceived effectiveness of the co-management company. A paired samples difference of means t-test was conducted to compare the timeframe before co-management and post co-management. Findings The findings indicate that the benefits of a co-management arrangement do outweigh the costs for both the physicians and the hospital ( RQ1). The physicians benefit through actual dollar payout, but also with improved communication and greater input in running the service line. The hospital benefits from reduced cost - or reduced penalties under the ACA - as well as better communication and greater physician involvement in administration of the service line. RQ2: does physician alignment aid in the effective implementation of the ACA directives set for hospitals? The hospital improved in every quality metric under the co-management company. A paired sample difference of means t-test showed a statistically significant improvement in five of the six quality metrics in the study. Originality/value Previous research indicates the potential effectiveness of co-management companies in improving healthcare delivery and hospital-physician relations (Sowers et al., 2013). The current research takes this a step further to show that the data do in fact support these concepts. The hospital and the physicians carrying out the day-to-day actions have shared goals, better communication, and improved quality metrics under the co-management company. As the number of co-management companies increases across the USA, more research can be directed at determining their overall impact on quality care.
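
    A minimal sketch of the pre/post comparison described above, using a standard paired-samples t-test on eight quarters of one quality metric; the values are placeholders, not the hospital's data.

        from scipy import stats

        # eight quarters of one quality metric before and after co-management (hypothetical values)
        pre  = [0.82, 0.85, 0.84, 0.86, 0.83, 0.85, 0.84, 0.86]
        post = [0.90, 0.92, 0.91, 0.93, 0.92, 0.94, 0.93, 0.95]

        t_stat, p_value = stats.ttest_rel(post, pre)  # paired difference of means
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")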

  13. FQC Dashboard: integrates FastQC results into a web-based, interactive, and extensible FASTQ quality control tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Joseph; Pirrung, Meg; McCue, Lee Ann

    FQC is software that facilitates quality control of FASTQ files by carrying out a QC protocol using FastQC, parsing results, and aggregating quality metrics into an interactive dashboard designed to richly summarize individual sequencing runs. The dashboard groups samples in dropdowns for navigation among the data sets, utilizes human-readable configuration files to manipulate the pages and tabs, and is extensible with CSV data.
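
    FQC builds on FastQC output; as a rough illustration of the kind of parsing and aggregation involved, the sketch below reads FastQC's per-sample summary.txt (tab-separated status, module, filename columns) into a dictionary, assuming the standard FastQC summary layout. It is not FQC's own code.

        from pathlib import Path

        def read_fastqc_summary(summary_path):
            """Parse FastQC's summary.txt (STATUS<TAB>MODULE<TAB>FILENAME lines)
            into a module -> status mapping suitable for dashboard aggregation."""
            statuses = {}
            for line in Path(summary_path).read_text().splitlines():
                if not line.strip():
                    continue
                status, module, _filename = line.split("\t")
                statuses[module] = status
            return statuses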

  14. FQC Dashboard: integrates FastQC results into a web-based, interactive, and extensible FASTQ quality control tool

    DOE PAGES

    Brown, Joseph; Pirrung, Meg; McCue, Lee Ann

    2017-06-09

    FQC is software that facilitates quality control of FASTQ files by carrying out a QC protocol using FastQC, parsing results, and aggregating quality metrics into an interactive dashboard designed to richly summarize individual sequencing runs. The dashboard groups samples in dropdowns for navigation among the data sets, utilizes human-readable configuration files to manipulate the pages and tabs, and is extensible with CSV data.

  15. Effects of land use, stream habitat, and water quality on biological communities of wadeable streams in the Illinois River Basin of Arkansas, 2011 and 2012

    USGS Publications Warehouse

    Petersen, James C.; Justus, B.G.; Meredith, Bradley J.

    2014-01-01

    The Illinois River Basin includes an area of diverse land use in northwestern Arkansas. Land-use data collected in 2006 indicate that most of the land in the basin is agricultural. The agricultural land is used primarily for production of poultry and cattle. Eighteen sites were selected from the list of candidate sites based on drainage area, land use, presence or absence of an upstream wastewater-treatment plant, water quality, and other information gathered during the reconnaissance. An important consideration in the process was to select sites along gradients of forest to urban land use and forest to agricultural land use. Water-quality samples were collected for analysis of nutrients, and a multiparameter field meter was used to measure water temperature, specific conductance, pH, and dissolved oxygen. Streamflow was measured immediately following the water-quality sampling. Macroalgae coverage was estimated and periphyton, macroinvertebrate, and fish communities were sampled at each site. Stream habitat also was assessed. Many types of land-use, water-quality, and habitat factors affected one or more aspects of the biological communities. Several macroinvertebrate and fish metrics changed in response to changes in percent forest; sites that would be considered most disturbed, based on these metrics, are sites with the highest percentages of urban land use in their associated basins. The presence of large mats of macroalgae was one of the most noticeable biological characteristics in several streams within the Illinois River Basin. The highest macroalgae percent cover values were recorded at four sites downstream from wastewater-treatment plants. Macroalgae percent cover was strongly correlated only with bed substrate size, canopy closure, and specific conductance. Periphyton metrics were most often and most strongly correlated with riparian shading, specific conductance, substrate turbidity, percent agriculture, poultry house density, and unpaved road density; some of these factors were strongly correlated with percent forest, percent urban, or percent agriculture. Total biovolume of periphyton was not strongly correlated with any of the land use, habitat, or water-quality factors assessed in the present study. Although algal growth typically increases with higher nutrient concentrations and less shading, the standing crop of periphyton on rocks can be reduced by herbivorous macroinvertebrates and fish, which may explain why total biovolume in Ozark streams was not strongly affected by water-quality (or other habitat) factors. A macroinvertebrate index and several macroinvertebrate metrics were adversely affected by increasing urban and agricultural land use and associated environmental factors. Factors most commonly affecting the index and metrics included factors associated with water quality, stream geometry, sediment, land-use percentages, and road density. In general, the macroinvertebrate index was higher (indicative of least disturbance) at sites with greater percentages of forest in their basins, lower percentages of urban land in their basins, and lower paved road density. Upstream wastewater-treatment plants affected several metrics. For example, three of the five lowest macroinvertebrate index scores, two of the five lowest percent predator values, and two of the five highest percent gatherer-collector values were at sites downstream from wastewater-treatment plants. 
The Ozark Highlands fish index of biotic integrity and several fish metrics were adversely affected by increasing urban and agricultural land use and associated factors. Factors affecting these metrics included factors associated with nutrients, sediment, and shading. In general, the fish index of biotic integrity was higher at sites with higher percentages of forest in their basins, lower percentages of urban land in their basins, higher unpaved road density, and lower paved and total road density. Upstream wastewater-treatment plants seemed to affect some fish community metrics substantially but had little effect on other metrics. For example, three of the five lowest relative abundances of lithophilic spawner minus stonerollers and four of the five highest stoneroller abundances were at sites downstream from wastewater-treatment plants. Interpretations of the results of the study described in this report are limited by a number of factors. These factors individually and collectively add to uncertainty and variability in the responses to various environmental stresses. Notwithstanding the limiting factors, the biological responses of macroalgae cover and periphyton, macroinvertebrate, and fish metrics to environmental variables provide multiple lines of evidence that biological communities of these streams are affected by recent and ongoing land-use practices. For several biological metrics there appears to be a threshold of about 40 to 50 percent forest where values of these metrics change in magnitude. However, the four sites with more than 50 percent forest in their basins were the four sites sampled in late May–early June of 2012 (rather than July–August of 2011). The relative influence of season and forest percentage on the biological communities at these sites is unknown.

  16. 75 FR 5040 - Extension of Period for Comments on Enhancement in the Quality of Patents

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-01

    ... patents, to identify appropriate indicia of quality, and to establish metrics for the measurement of the... issued patents, to identify appropriate indicia of quality, and to establish metrics for the measurement.... Kappos, Under Secretary of Commerce for Intellectual Property and Director of the United States Patent...

  17. SIMPATIQCO: a server-based software suite which facilitates monitoring the time course of LC-MS performance metrics on Orbitrap instruments.

    PubMed

    Pichler, Peter; Mazanek, Michael; Dusberger, Frederico; Weilnböck, Lisa; Huber, Christian G; Stingl, Christoph; Luider, Theo M; Straube, Werner L; Köcher, Thomas; Mechtler, Karl

    2012-11-02

    While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC-MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge.
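
    The abstract notes that an acceptable range is learned for each QC metric using robust statistics; one common way to do this, sketched below under that assumption, is a band around the median scaled by the median absolute deviation (MAD). The exact procedure used by SIMPATIQCO may differ.

        import numpy as np

        def robust_control_range(historical_values, n_sigma=3.0):
            """Estimate an acceptable range for a QC metric (e.g., peak width, TIC)
            from historical runs using median and MAD, which tolerate occasional
            outlier runs better than mean and standard deviation."""
            x = np.asarray(historical_values, dtype=float)
            med = np.median(x)
            mad = np.median(np.abs(x - med))
            sigma = 1.4826 * mad  # MAD scaled to a standard-deviation equivalent
            return med - n_sigma * sigma, med + n_sigma * sigma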

  18. SIMPATIQCO: A Server-Based Software Suite Which Facilitates Monitoring the Time Course of LC–MS Performance Metrics on Orbitrap Instruments

    PubMed Central

    2012-01-01

    While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC–MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge. PMID:23088386

  19. An intermediate significant bit (ISB) watermarking technique using neural networks.

    PubMed

    Zeki, Akram; Abubakar, Adamu; Chiroma, Haruna

    2016-01-01

    Prior research has shown that the peak signal-to-noise ratio (PSNR) is the watermarked-image quality metric most frequently used to gauge the strength and weakness of watermarking algorithms. Normalised cross correlation (NCC), in turn, is the metric most commonly used after attacks have been applied to a watermarked image to verify the strength of the algorithm. Many researchers have evaluated their algorithms with these measures, yet no threshold values have been established, which limits the value of PSNR and NCC in reflecting the strength and weakness of watermarking algorithms. This paper addresses that issue by determining threshold values for these two parameters. We used our novel watermarking technique to embed four watermarks in the intermediate significant bits (ISB) of six image files one by one, replacing the image pixels with new pixels while keeping the new pixels very close to the originals; this approach gains improved robustness based on the PSNR and NCC values gathered. A neural network model was built that uses the image quality metric (PSNR and NCC) values obtained from ISB watermarking of six grey-scale images as the desired output and is trained on each watermarked image's PSNR and NCC. The neural network predicts a watermarked image's PSNR and NCC after attacks when a portion of the output of the same or different image quality metrics is available. The results indicate that the NCC metric fluctuates before the PSNR values deteriorate.
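
    PSNR and NCC themselves are standard quantities; the sketch below shows one common formulation of each (the normalised-correlation form often used in watermarking). The neural-network prediction step from the paper is not reproduced here.

        import numpy as np

        def psnr(original, watermarked, peak=255.0):
            """Peak signal-to-noise ratio in dB between original and watermarked images."""
            mse = np.mean((np.asarray(original, float) - np.asarray(watermarked, float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        def ncc(original, extracted):
            """Normalised cross correlation between original and extracted watermarks
            (one common definition; variants subtract the mean first)."""
            a = np.asarray(original, float).ravel()
            b = np.asarray(extracted, float).ravel()
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))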

  20. Substantial Progress Yet Significant Opportunity for Improvement in Stroke Care in China.

    PubMed

    Li, Zixiao; Wang, Chunjuan; Zhao, Xingquan; Liu, Liping; Wang, Chunxue; Li, Hao; Shen, Haipeng; Liang, Li; Bettger, Janet; Yang, Qing; Wang, David; Wang, Anxin; Pan, Yuesong; Jiang, Yong; Yang, Xiaomeng; Zhang, Changqing; Fonarow, Gregg C; Schwamm, Lee H; Hu, Bo; Peterson, Eric D; Xian, Ying; Wang, Yilong; Wang, Yongjun

    2016-11-01

    Stroke is a leading cause of death in China. Yet the adherence to guideline-recommended ischemic stroke performance metrics in the past decade has been previously shown to be suboptimal. Since then, several nationwide stroke quality management initiatives have been conducted in China. We sought to determine whether adherence had improved since then. Data were obtained from the 2 phases of China National Stroke Registries, which included 131 hospitals (12 173 patients with acute ischemic stroke) in China National Stroke Registries phase 1 from 2007 to 2008 versus 219 hospitals (19 604 patients) in China National Stroke Registries phase 2 from 2012 to 2013. Multiple regression models were developed to evaluate the difference in adherence to performance measure between the 2 study periods. The overall quality of care has improved over time, as reflected by the higher composite score of 0.76 in 2012 to 2013 versus 0.63 in 2007 to 2008. Nine of 13 individual performance metrics improved. However, there were no significant improvements in the rates of intravenous thrombolytic therapy and anticoagulation for atrial fibrillation. After multivariate analysis, there remained a significant 1.17-fold (95% confidence interval, 1.14-1.21) increase in the odds of delivering evidence-based performance metrics in the more recent time periods versus older data. The performance metrics with the most significantly increased odds included stroke education, dysphagia screening, smoking cessation, and antithrombotics at discharge. Adherence to stroke performance metrics has increased over time, but significant opportunities remain for further improvement. Continuous stroke quality improvement program should be developed as a national priority in China. © 2016 American Heart Association, Inc.

  1. Measurement of the Inter-Rater Reliability Rate Is Mandatory for Improving the Quality of a Medical Database: Experience with the Paulista Lung Cancer Registry.

    PubMed

    Lauricella, Leticia L; Costa, Priscila B; Salati, Michele; Pego-Fernandes, Paulo M; Terra, Ricardo M

    2018-06-01

    Database quality measurement should be considered a mandatory step to ensure an adequate level of confidence in data used for research and quality improvement. Several metrics have been described in the literature, but no standardized approach has been established. We aimed to describe a methodological approach applied to measure the quality and inter-rater reliability of a regional multicentric thoracic surgical database (Paulista Lung Cancer Registry). Data from the first 3 years of the Paulista Lung Cancer Registry underwent an audit process with 3 metrics: completeness, consistency, and inter-rater reliability. The first 2 methods were applied to the whole data set, and the last method was calculated using 100 cases randomized for direct auditing. Inter-rater reliability was evaluated using percentage of agreement between the data collector and auditor and through calculation of Cohen's κ and intraclass correlation. The overall completeness per section ranged from 0.88 to 1.00, and the overall consistency was 0.96. Inter-rater reliability showed many variables with high disagreement (>10%). For numerical variables, intraclass correlation was a better metric than inter-rater reliability. Cohen's κ showed that most variables had moderate to substantial agreement. The methodological approach applied to the Paulista Lung Cancer Registry showed that completeness and consistency metrics did not sufficiently reflect the real quality status of a database. The inter-rater reliability associated with κ and intraclass correlation was a better quality metric than completeness and consistency metrics because it could determine the reliability of specific variables used in research or benchmark reports. This report can be a paradigm for future studies of data quality measurement. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
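
    A minimal sketch of the inter-rater calculations named above for one categorical field, using scikit-learn's Cohen's kappa implementation and raw percent agreement; the codes shown are hypothetical, not registry data.

        from sklearn.metrics import cohen_kappa_score

        # hypothetical values for one categorical variable, as entered by collector vs. auditor
        collector = ["T1", "T2", "T1", "T3", "T2", "T1", "T2", "T2"]
        auditor   = ["T1", "T2", "T2", "T3", "T2", "T1", "T2", "T1"]

        agreement = sum(c == a for c, a in zip(collector, auditor)) / len(collector)
        kappa = cohen_kappa_score(collector, auditor)
        print(f"percent agreement = {agreement:.0%}, Cohen's kappa = {kappa:.2f}")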

  2. SU-F-T-312: Identifying Distinct Radiation Therapy Plan Classes Through Multi-Dimensional Analysis of Plan Complexity Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, V; Labby, Z; Culberson, W

    Purpose: To determine whether body site-specific treatment plans form unique “plan class” clusters in a multi-dimensional analysis of plan complexity metrics such that a single beam quality correction determined for a representative plan could be universally applied within the “plan class”, thereby increasing the dosimetric accuracy of a detector’s response within a subset of similarly modulated nonstandard deliveries. Methods: We collected 95 clinical volumetric modulated arc therapy (VMAT) plans from four body sites (brain, lung, prostate, and spine). The lung data was further subdivided into SBRT and non-SBRT data for a total of five plan classes. For each control point in each plan, a variety of aperture-based complexity metrics were calculated and stored as unique characteristics of each patient plan. A multiple comparison of means analysis was performed such that every plan class was compared to every other plan class for every complexity metric in order to determine which groups could be considered different from one another. Statistical significance was assessed after correcting for multiple hypothesis testing. Results: Six out of a possible 10 pairwise plan class comparisons were uniquely distinguished based on at least nine out of 14 of the proposed metrics (Brain/Lung, Brain/SBRT lung, Lung/Prostate, Lung/SBRT Lung, Lung/Spine, Prostate/SBRT Lung). Eight out of 14 of the complexity metrics could distinguish at least six out of the possible 10 pairwise plan class comparisons. Conclusion: Aperture-based complexity metrics could prove to be useful tools to quantitatively describe a distinct class of treatment plans. Certain plan-averaged complexity metrics could be considered unique characteristics of a particular plan. A new approach to generating plan-class specific reference (pcsr) fields could be established through a targeted preservation of select complexity metrics or a clustering algorithm that identifies plans exhibiting similar modulation characteristics. Measurements and simulations will better elucidate potential plan-class specific dosimetry correction factors.

  3. PQScal (Power Quality Score Calculation for Distribution Systems with DER Integration)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Power quality is of great importance for evaluating the “health” of a distribution system, especially as distributed energy resource (DER) penetration becomes more significant. The individual components that make up power quality, such as voltage magnitude and unbalance, can be measured in simulations or in the field; however, a comprehensive method to incorporate all of these values into a single score does not exist. As a result, we propose a methodology to quantify power quality health using a single numerical value, named the Power Quality Score (PQS). The PQS depends on six metrics that are developed based both on components that directly impact power quality and on those often referenced in the context of power quality. These six metrics are the System Average Voltage Magnitude Violation Index (SAVMVI), System Average Voltage Fluctuation Index (SAVFI), System Average Voltage Unbalance Index (SAVUI), System Control Device Operation Index (SCDOI), System Reactive Power Demand Index (SRPDI), and System Energy Loss Index (SELI). This software tool, PQScal, is developed based on this novel PQS methodology. Besides traditional distribution systems, PQScal can also measure the power quality of distribution systems with various DER penetrations. PQScal has been tested on two utility distribution feeders with distinct model characteristics and its effectiveness has been demonstrated. In sum, PQScal can help utilities and other parties measure the power quality of distribution systems with DER integration easily and effectively.
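
    The abstract does not state how the six indices are combined into the PQS; the sketch below simply assumes a weighted sum of already-normalized indices to show the general shape of such a score, with equal placeholder weights. It is not the PQScal formulation.

        def power_quality_score(indices, weights=None):
            """Combine the six system indices into a single score. Equal weighting is a
            placeholder assumption; the actual PQScal formulation is not given above."""
            names = ["SAVMVI", "SAVFI", "SAVUI", "SCDOI", "SRPDI", "SELI"]
            if weights is None:
                weights = {name: 1.0 / len(names) for name in names}
            return sum(weights[name] * indices[name] for name in names)

        # hypothetical per-feeder index values, each assumed normalized to [0, 1]
        pqs = power_quality_score({"SAVMVI": 0.1, "SAVFI": 0.2, "SAVUI": 0.05,
                                   "SCDOI": 0.3, "SRPDI": 0.15, "SELI": 0.1})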

  4. Evaluating Modeled Impact Metrics for Human Health, Agriculture Growth, and Near-Term Climate

    NASA Astrophysics Data System (ADS)

    Seltzer, K. M.; Shindell, D. T.; Faluvegi, G.; Murray, L. T.

    2017-12-01

    Simulated metrics that assess impacts on human health, agriculture growth, and near-term climate were evaluated using ground-based and satellite observations. The NASA GISS ModelE2 and GEOS-Chem models were used to simulate the near-present chemistry of the atmosphere. A suite of simulations that varied by model, meteorology, horizontal resolution, emissions inventory, and emissions year were performed, enabling an analysis of metric sensitivities to various model components. All simulations utilized consistent anthropogenic global emissions inventories (ECLIPSE V5a or CEDS), and an evaluation of simulated results were carried out for 2004-2006 and 2009-2011 over the United States and 2014-2015 over China. Results for O3- and PM2.5-based metrics featured minor differences due to the model resolutions considered here (2.0° × 2.5° and 0.5° × 0.666°) and model, meteorology, and emissions inventory each played larger roles in variances. Surface metrics related to O3 were consistently high biased, though to varying degrees, demonstrating the need to evaluate particular modeling frameworks before O3 impacts are quantified. Surface metrics related to PM2.5 were diverse, indicating that a multimodel mean with robust results are valuable tools in predicting PM2.5-related impacts. Oftentimes, the configuration that captured the change of a metric best over time differed from the configuration that captured the magnitude of the same metric best, demonstrating the challenge in skillfully simulating impacts. These results highlight the strengths and weaknesses of these models in simulating impact metrics related to air quality and near-term climate. With such information, the reliability of historical and future simulations can be better understood.

  5. Task-based detectability comparison of exponential transformation of free-response operating characteristic (EFROC) curve and channelized Hotelling observer (CHO)

    NASA Astrophysics Data System (ADS)

    Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly

    2016-03-01

    This study quantitatively evaluated the performance of the exponential transformation of the free-response operating characteristic curve (EFROC) metric, with the Channelized Hotelling Observer (CHO) as a reference. The CHO has been used for image quality assessment of reconstruction algorithms and imaging systems, and it is often applied to signal-location-known cases. The CHO also requires a large set of images to estimate the covariance matrix. In terms of clinical applications, this assumption and requirement may be unrealistic. The newly developed location-unknown EFROC detectability metric is estimated from the confidence scores reported by a model observer. Unlike the CHO, EFROC does not require a channelization step and is a non-parametric detectability metric. There are few quantitative studies available on application of the EFROC metric, most of which are based on simulation data. This study investigated the EFROC metric using experimental CT data. A phantom with four low-contrast objects (3 mm at 14 HU, 5 mm at 7 HU, 7 mm at 5 HU, and 10 mm at 3 HU) was scanned at dose levels ranging from 25 mAs to 270 mAs and reconstructed using filtered backprojection. The area under the curve values for CHO (AUC) and EFROC (AFE) were plotted with respect to different dose levels. The number of images required to estimate the non-parametric AFE metric was calculated for varying tasks and found to be less than the number of images required for parametric CHO estimation. The AFE metric was found to be more sensitive to changes in dose than the CHO metric. This increased sensitivity and the assumption of unknown signal location may be useful for investigating and optimizing CT imaging methods. Future work is required to validate the AFE metric against human observers.

  6. Effects of multi-scale environmental characteristics on agricultural stream biota in eastern Wisconsin

    USGS Publications Warehouse

    Fitzpatrick, F.A.; Scudder, B.C.; Lenz, B.N.; Sullivan, D.J.

    2001-01-01

    The U.S. Geological Survey examined 25 agricultural streams in eastern Wisconsin to determine relations between fish, invertebrate, and algal metrics and multiple spatial scales of land cover, geologic setting, hydrologic, aquatic habitat, and water chemistry data. Spearman correlation and redundancy analyses were used to examine relations among biotic metrics and environmental characteristics. Riparian vegetation, geologic, and hydrologic conditions affected the response of biotic metrics to watershed agricultural land cover but the relations were aquatic assemblage dependent. It was difficult to separate the interrelated effects of geologic setting, watershed and buffer land cover, and base flow. Watershed and buffer land cover, geologic setting, reach riparian vegetation width, and stream size affected the fish IBI, invertebrate diversity, diatom IBI, and number of algal taxa; however, the invertebrate FBI, percentage of EPT, and the diatom pollution index were more influenced by nutrient concentrations and flow variability. Fish IBI scores seemed most sensitive to land cover in the entire stream network buffer, more so than watershed-scale land cover and segment or reach riparian vegetation width. All but one stream with more than approximately 10 percent buffer agriculture had fish IBI scores of fair or poor. In general, the invertebrate and algal metrics used in this study were not as sensitive to land cover effects as fish metrics. Some of the reach-scale characteristics, such as width/depth ratios, velocity, and bank stability, could be related to watershed influences of both land cover and geologic setting. The Wisconsin habitat index was related to watershed geologic setting, watershed and buffer land cover, riparian vegetation width, and base flow, and appeared to be a good indicator of stream quality. Results from this study emphasize the value of using more than one or two biotic metrics to assess water quality and the importance of environmental characteristics at multiple scales.

  7. Benefits of utilizing CellProfiler as a characterization tool for U–10Mo nuclear fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.; Douglas, J.; Patterson, L.

    2015-07-15

    Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person-to-person or sample-to-sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium–molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to ‘pass’ or ‘fail’ an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which enlists a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on the illumination, focus, and scratch fraction of the image, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries. Highlights: • A technique is developed to score U–10Mo FIB-SEM image quality using CellProfiler. • The pass/fail metric is based on image illumination, focus, and area scratched. • Automated image analysis is performed in pipeline fashion to characterize images. • Fission gas void, interaction layer, and grain boundary coverage data is extracted. • Preliminary characterization results demonstrate consistency of the algorithm.

  8. Task-oriented lossy compression of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
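
    The abstract describes a similarity comparison between manual and automated segmentations without giving the exact measure; the Dice overlap coefficient below is shown only as a common example of such a segmentation-similarity metric, not as the metric used in the paper.

        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            """Dice overlap between two binary segmentation masks (1 = lesion, 0 = background)."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            denom = a.sum() + b.sum()
            return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom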

  9. Historical assessments and comparisons of benthic communities and physical habitat in two agricultural streams in California's San Joaquin watershed.

    PubMed

    Hall, Lenwood W; Killen, William D

    2006-01-01

    This study was designed to assess trends in physical habitat and benthic communities (macroinvertebrates) annually in two agricultural streams (Del Puerto Creek and Salt Slough) in California's San Joaquin Valley from 2001 to 2005, determine the relationship between benthic communities and both water quality and physical habitat from both streams over the 5-year period, and compare benthic communities and physical habitat in both streams from 2001 to 2005. Physical habitat, measured with 10 metrics and a total score, was reported to be fairly stable over 5 years in Del Puerto Creek but somewhat variable in Salt Slough. Benthic communities, measured with 18 metrics, were reported to be marginally variable over time in Del Puerto Creek but fairly stable in Salt Slough. Rank correlation analysis for both water bodies combined showed that channel alteration, embeddedness, riparian buffer, and velocity/depth/diversity were the most important physical habitat metrics influencing the various benthic metrics. Correlations of water quality parameters and benthic community metrics for both water bodies combined showed that turbidity, dissolved oxygen, and conductivity were the most important water quality parameters influencing the different benthic metrics. A comparison of physical habitat metrics (including total score) for both water bodies over the 5-year period showed that habitat metrics were more positive in Del Puerto Creek when compared to Salt Slough. A comparison of benthic metrics in both water bodies showed that approximately one-third of the metrics were significantly different between the two water bodies. Generally, the more positive benthic metric scores were reported in Del Puerto Creek, which suggests that the communities in this creek are more robust than Salt Slough.
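
    A minimal sketch of the rank-correlation step used above, via SciPy's Spearman implementation; the paired site values are hypothetical.

        from scipy.stats import spearmanr

        # hypothetical paired observations: a habitat metric and a benthic metric at each site
        embeddedness   = [12, 15, 9, 18, 14, 11, 16, 10]
        benthic_metric = [34, 28, 41, 22, 30, 38, 25, 40]

        rho, p_value = spearmanr(embeddedness, benthic_metric)
        print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")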

  10. Understanding Acceptance of Software Metrics--A Developer Perspective

    ERIC Educational Resources Information Center

    Umarji, Medha

    2009-01-01

    Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…

  11. Comparison of Online Survey Recruitment Platforms for Hard-to-Reach Pregnant Smoking Populations: Feasibility Study

    PubMed Central

    Agas, Jessica Marie; Lee, Melissa; Pan, Julia Lily; Buttenheim, Alison Meredith

    2018-01-01

    Background Recruiting hard-to-reach populations for health research is challenging. Web-based platforms offer one way to recruit specific samples for research purposes, but little is known about the feasibility of online recruitment and the representativeness and comparability of samples recruited through different Web-based platforms. Objective The objectives of this study were to determine the feasibility of recruiting a hard-to-reach population (pregnant smokers) using 4 different Web-based platforms and to compare participants recruited through each platform. Methods A screener and survey were distributed online through Qualtrics Panel, Soapbox Sample, Reddit, and Amazon Mechanical Turk (mTurk). Descriptive statistics were used to summarize results of each recruitment platform, including eligibility yield, quality yield, income, race, age, and gestational age. Results Of the 3847 participants screened for eligibility across all 4 Web-based platforms, 535 were eligible and 308 completed the survey. Amazon mTurk yielded the fewest completed responses (n=9), 100% (9/9) of which passed several quality metrics verifying pregnancy and smoking status. Qualtrics Panel yielded 14 completed responses, 86% (12/14) of which passed the quality screening. Soapbox Sample produced 107 completed surveys, 67% (72/107) of which were found to be quality responses. Advertising through Reddit produced the highest completion rate (n=178), but only 29.2% (52/178) of those surveys passed the quality metrics. We found significant differences in eligibility yield, quality yield, age, number of previous pregnancies, age of smoking initiation, current smokers, race, education, and income (P<.001). Conclusions Although each platform successfully recruited pregnant smokers, results varied in quality, cost, and percentage of complete responses. Moving forward, investigators should pay careful attention to the percentage yield and cost of online recruitment platforms to maximize internal and external validity. PMID:29661751

  12. Public health dental hygiene: an option for improved quality of care and quality of life.

    PubMed

    Olmsted, Jodi L; Rublee, Nancy; Zurkawski, Emily; Kleber, Laura

    2013-10-01

    The purpose of this research was to document quality of life (QoL) and quality of care (QoC) measures for families receiving care from dental hygienists within public health departments, and to consider if oral health for families with economic disparities and cultural differences was improved. A descriptive research study using a retrospective record review was conducted considering QoC, together with a review of state epidemiological data considering QoL, to address the research question: "Do preventive oral health programs based in local health departments provide quality care services, thus impacting QoL for underserved populations?" A dental hygienist working in public health made significant contributions to improving access to care and QoL in a rural, socioeconomically disadvantaged community. A total of 2,364 children received education, 1,745 received oral screenings and 1,511 received dental sealants. Of these, 804 children with caries were referred, with 463 receiving restorations and follow-up care. QoL metrics were based on assessed Health Outcomes and Health Determinants. Initial QoL data was ranked in the bottom half of the state, while 70% of original determinant data was also ranked in the bottom half of reported metrics. Dental hygienists in public health settings can positively affect patients by offering preventive care outreach services. Education and sealant placement were considered effective as measured by access, delivery and, when required, referral for restorative care. Improvement in QoL for individuals was noted through improved health outcomes and determinant metrics.

  13. Implementation of a Clinical Documentation Improvement Curriculum Improves Quality Metrics and Hospital Charges in an Academic Surgery Department.

    PubMed

    Reyes, Cynthia; Greenbaum, Alissa; Porto, Catherine; Russell, John C

    2017-03-01

    Accurate clinical documentation (CD) is necessary for many aspects of modern health care, including excellent communication, quality metrics reporting, and legal documentation. New requirements have mandated adoption of ICD-10-CM coding systems, adding another layer of complexity to CD. A clinical documentation improvement (CDI) and ICD-10 training program was created for health care providers in our academic surgery department. We aimed to assess the impact of our CDI curriculum by comparing quality metrics, coding, and reimbursement before and after implementation of our CDI program. A CDI/ICD-10 training curriculum was instituted in September 2014 for all members of our university surgery department. The curriculum consisted of didactic lectures, 1-on-1 provider training, case reviews, e-learning modules, and CD queries from nurse CDI staff and hospital coders. Outcomes parameters included monthly documentation completion rates, severity of illness (SOI), risk of mortality (ROM), case-mix index (CMI), all-payer refined diagnosis-related groups (APR-DRG), and Surgical Care Improvement Program (SCIP) metrics. Financial gain from responses to CDI queries was determined retrospectively. Surgery department delinquent documentation decreased by 85% after CDI implementation. Compliance with SCIP measures improved from 85% to 97%. Significant increases in surgical SOI, ROM, CMI, and APR-DRG (all p < 0.01) were found after CDI/ICD-10 training implementation. Provider responses to CDI queries resulted in an estimated $4,672,786 increase in charges. Clinical documentation improvement/ICD-10 training in an academic surgery department is an effective method to improve documentation rates, increase the hospital estimated reimbursement based on more accurate CD, and provide better compliance with surgical quality measures. Copyright © 2016 American College of Surgeons. All rights reserved.

  14. Compensation of chief executive officers at nonprofit US hospitals.

    PubMed

    Joynt, Karen E; Le, Sidney T; Orav, E John; Jha, Ashish K

    2014-01-01

    Hospital chief executive officers (CEOs) can shape the priorities and performance of their organizations. The degree to which their compensation is based on their hospitals' quality performance is not well known. To characterize CEO compensation and examine its relation with quality metrics. Retrospective observational study. Participants included 1877 CEOs at 2681 private, nonprofit US hospitals. We used linear regression to identify hospital structural characteristics associated with CEO pay. We then determined the degree to which a hospital's performance on financial metrics, technologic metrics, quality metrics, and community benefit in 2008 was associated with CEO pay in 2009. The CEOs in our sample had a mean compensation of $595,781 (median, $404,938) in 2009. In multivariate analyses, CEO pay was associated with the number of hospital beds overseen ($550 for each additional bed; 95% CI, 429-671; P < .001), teaching status ($425,078 more at major teaching vs nonteaching hospitals; 95% CI, 315,238-534,918; P < .001), and urban location. Hospitals with high levels of advanced technologic capabilities compensated their CEOs $135,862 more (95% CI, 80,744-190,990; P < .001) than did hospitals with low levels of technology. Hospitals with high performance on patient satisfaction compensated their CEOs $51,706 more than did those with low performance on patient satisfaction (95% CI, 15,166-88,247; P = .006). We found no association between CEO pay and hospitals' margins, liquidity, capitalization, occupancy rates, process quality performance, mortality rates, readmission rates, or measures of community benefit. Compensation of CEOs at nonprofit hospitals was highly variable across the country. Compensation was associated with technology and patient satisfaction but not with processes of care, patient outcomes, or community benefit.

  15. Optimal colour quality of LED clusters based on memory colours.

    PubMed

    Smet, Kevin; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Hanselaer, Peter

    2011-03-28

    The spectral power distributions of tri- and tetrachromatic clusters of Light-Emitting-Diodes, composed of simulated and commercially available LEDs, were optimized with a genetic algorithm to maximize the luminous efficacy of radiation and the colour quality as assessed by the memory colour quality metric developed by the authors. The trade-off of the colour quality as assessed by the memory colour metric and the luminous efficacy of radiation was investigated by calculating the Pareto optimal front using the NSGA-II genetic algorithm. Optimal peak wavelengths and spectral widths of the LEDs were derived, and over half of them were found to be close to Thornton's prime colours. The Pareto optimal fronts of real LED clusters were always found to be smaller than those of the simulated clusters. The effect of binning on designing a real LED cluster was investigated and was found to be quite large. Finally, a real LED cluster of commercially available AlGaInP, InGaN and phosphor white LEDs was optimized to obtain a higher score on memory colour quality scale than its corresponding CIE reference illuminant.
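
    The trade-off analysis above rests on extracting the Pareto-optimal (non-dominated) set of spectra scored on two objectives; the helper below shows only that filtering step for hypothetical (luminous efficacy, colour quality) pairs, not the NSGA-II optimization itself.

        def pareto_front(points):
            """Return the non-dominated points when both objectives are maximized.
            `points` is a list of (luminous_efficacy, colour_quality_score) tuples."""
            front = []
            for p in points:
                dominated = any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)
                if not dominated:
                    front.append(p)
            return front

        # hypothetical candidate LED clusters scored on the two objectives
        candidates = [(280, 60), (300, 55), (260, 70), (310, 40), (270, 50), (255, 65)]
        print(pareto_front(candidates))  # the dominated candidates are filtered out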

  16. Monitoring Error Rates In Illumina Sequencing.

    PubMed

    Manley, Leigh J; Ma, Duanduan; Levine, Stuart S

    2016-12-01

    Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted.
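
    The PPR plot is, in essence, the percentage of control-aligned reads that remain a perfect match to the reference through each cycle; the sketch below computes that curve under the assumption that reads are already aligned to a known control sequence (e.g., PhiX) and trimmed to a common length. It is not the authors' tool.

        def percent_perfect_reads(reads, reference):
            """Per-cycle percentage of reads that still match the reference perfectly
            up to and including that cycle (reads assumed aligned and equal in length)."""
            still_perfect = [True] * len(reads)
            ppr_curve = []
            for cycle, ref_base in enumerate(reference):
                for i, read in enumerate(reads):
                    if still_perfect[i] and read[cycle] != ref_base:
                        still_perfect[i] = False
                ppr_curve.append(100.0 * sum(still_perfect) / len(reads))
            return ppr_curve

        # hypothetical 6-cycle reads aligned against a control reference
        print(percent_perfect_reads(["ACGTAC", "ACGTAA", "ACGTAC", "TCGTAC"], "ACGTAC"))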

  17. Assessment of river quality in a subtropical Austral river system: a combined approach using benthic diatoms and macroinvertebrates

    NASA Astrophysics Data System (ADS)

    Nhiwatiwa, Tamuka; Dalu, Tatenda; Sithole, Tatenda

    2017-12-01

    River systems constitute areas of high human population densities owing to their favourable conditions for agriculture, water supply and transportation network. Despite human dependence on river systems, anthropogenic activities severely degrade water quality. The main aim of this study was to assess the river health of Ngamo River using diatom and macroinvertebrate community structure based on multivariate analyses and community metrics. Ammonia, pH, salinity, total phosphorus and temperature were found to be significantly different among the study seasons. The diatom and macroinvertebrate taxa richness increased downstream suggesting an improvement in water as we moved away from the pollution point sources. Canonical correspondence analyses identified nutrients (total nitrogen and reactive phosphorus) as important variables structuring diatom and macroinvertebrate community. The community metrics and diversity indices for both bioindicators highlighted that the water quality of the river system was very poor. These findings indicate that both methods can be used for water quality assessments, e.g. sewage and agricultural pollution, and they show high potential for use during water quality monitoring programmes in other regions.

  18. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  19. Health care reform: will quality remodeling affect obstetrician-gynecologists in addition to patients?

    PubMed

    von Gruenigen, Vivian E; Deveny, T Clifford

    2011-05-01

    The Patient Protection and Affordable Care Act is a federal statute that attempts to address many fundamental problems with the current health care system, including the uninsured, rising health care costs, and quality of care. Quality metrics have been in development for years (by private and governmental sectors), and momentum is growing. The purpose of this commentary is to explore quality changes in the way practicing obstetricians and gynecologists will be held accountable for quality service. Two new options being explored for health care, both focusing on improving quality and physician metrics, are value-based purchasing and accountable-care organizations. Both will likely consist of universal clinical algorithms and cost monitoring as measures. For obstetrics this will probably include physicians' rates of cesarean deliveries and elective inductions. For gynecology this may comprise indications for hysterectomy with documented failed medical management, minor surgical management, or both medical and minor surgical management. It is anticipated that patients will no longer be able to request obstetric testing, pregnancy induction, or hysterectomy. It is imperative that we, as obstetrician-gynecologists, be involved in health care reform that inevitably involves the care of women. The expectation is that the American Congress of Obstetricians and Gynecologists (ACOG) will further develop evidence-based opinions and guidelines, as medical communities embrace ACOG documents and reference them in hospital policies and peer review.

  20. Using macroinvertebrate assemblages and multiple stressors to infer urban stream system condition: A case study in the central US

    USGS Publications Warehouse

    Nichols, John W.; Hubbart, Jason A.; Poulton, Barry C.

    2016-01-01

    Characterizing the impacts of hydrologic alterations, pollutants, and habitat degradation on macroinvertebrate species assemblages is of critical value for managers wishing to categorize stream ecosystem condition. A combination of approaches including trait-based metrics and traditional bioassessments provides greater information, particularly in anthropogenic stream ecosystems where traditional approaches can be confounded by variously interacting land use impacts. Macroinvertebrates were collected from two rural and three urban nested study sites in central Missouri, USA during the spring and fall seasons of 2011. Land use responses of conventional taxonomic and trait-based metrics were compared to streamflow indices, physical habitat metrics, and water quality indices. Results show that the biotic index was significantly different (p < 0.05) between sites, with differences detected in 54% of trait-based metrics. The most consistent response to urbanization was observed in size metrics, with significantly (p < 0.05) fewer small-bodied organisms. Increases in fine streambed sediment, decreased submerged woody rootmats, significantly higher winter chloride concentrations, and decreased mean suspended sediment particle size in lower urban stream reaches also influenced macroinvertebrate assemblages. Riffle habitats in urban reaches contained 21% more (p = 0.03) multivoltine organisms, a proportion that was positively correlated with the magnitude of peak flows (r2 = 0.91, p = 0.012), suggesting that high flow events may serve as a disturbance in those areas. Results support the use of macroinvertebrate assemblages and multiple stressors to characterize urban stream system condition and highlight the need to better understand the complex interactions of trait-based metrics and anthropogenic aquatic ecosystem stressors.

  1. JPEG2000 encoding with perceptual distortion control.

    PubMed

    Liu, Zhen; Karam, Lina J; Watson, Andrew B

    2006-07-01

    In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
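
    A minimal sketch of the general shape of such a metric follows: quantization errors in each subband are normalized by a visibility threshold (standing in for the masking model) and pooled over space and subbands with a Minkowski summation. The thresholds and exponent are illustrative placeholders, not the authors' calibrated vision-model values.

```python
# Minimal sketch of spatial and spectral (subband) summation of masked
# quantization errors via Minkowski pooling. Thresholds and the exponent beta
# are illustrative placeholders, not calibrated vision-model parameters.
import numpy as np

def perceptual_distortion(errors_by_band, visibility_thresholds, beta=2.4):
    """errors_by_band: list of 2-D arrays of quantization errors, one per subband.
    visibility_thresholds: matching per-band detection thresholds (masking-adjusted)."""
    total = 0.0
    for err, thresh in zip(errors_by_band, visibility_thresholds):
        # Normalize errors by their visibility threshold and accumulate the
        # beta-th powers over all spatial locations in this subband.
        total += np.sum((np.abs(err) / thresh) ** beta)
    # Final Minkowski root over all subbands and locations.
    return total ** (1.0 / beta)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bands = [rng.normal(0.0, 0.5, (8, 8)) for _ in range(3)]  # toy quantization errors
    thresholds = [0.8, 1.0, 1.2]                              # assumed per-band visibility
    print(perceptual_distortion(bands, thresholds))
```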

  2. Developing a set of consensus indicators to support maternity service quality improvement: using Core Outcome Set methodology including a Delphi process.

    PubMed

    Bunch, K J; Allin, B; Jolly, M; Hardie, T; Knight, M

    2018-05-16

    To develop a core metric set to monitor the quality of maternity care. Delphi process followed by a face-to-face consensus meeting. English maternity units. Three representative expert panels: service designers, providers and users. Maternity care metrics judged important by participants. Participants were asked to complete a two-phase Delphi process, scoring metrics from existing local maternity dashboards. A consensus meeting discussed the results and re-scored the metrics. In all, 125 distinct metrics across six domains were identified from existing dashboards. Following the consensus meeting, 14 metrics met the inclusion criteria for the final core set: smoking rate at booking; rate of birth without intervention; caesarean section delivery rate in Robson group 1 women; caesarean section delivery rate in Robson group 2 women; caesarean section delivery rate in Robson group 5 women; third- and fourth-degree tear rate among women delivering vaginally; rate of postpartum haemorrhage of ≥1500 ml; rate of successful vaginal birth after a single previous caesarean section; smoking rate at delivery; proportion of babies born at term with an Apgar score <7 at 5 minutes; proportion of babies born at term admitted to the neonatal intensive care unit; proportion of babies readmitted to hospital at <30 days of age; breastfeeding initiation rate; and breastfeeding rate at 6-8 weeks. Core outcome set methodology can be used to incorporate the views of key stakeholders in developing a core metric set to monitor the quality of care in maternity units, thus enabling improvement. Achieving consensus on core metrics for monitoring the quality of maternity care. © 2018 The Authors. BJOG: An International Journal of Obstetrics and Gynaecology published by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.

  3. Which Species Are We Researching and Why? A Case Study of the Ecology of British Breeding Birds.

    PubMed

    McKenzie, Ailsa J; Robertson, Peter A

    2015-01-01

    Our ecological knowledge base is extensive, but the motivations for research are many and varied, leading to unequal species representation and coverage. As this evidence is used to support a wide range of conservation, management and policy actions, it is important that gaps and biases are identified and understood. In this paper we detail a method for quantifying research effort and impact at the individual species level, and go on to investigate the factors that best explain between-species differences in outputs. We do this using British breeding birds as a case study, producing a ranked list of species based on two scientific publication metrics: total number of papers (a measure of research quantity) and h-index (a measure of the number of highly cited papers on a topic--an indication of research quality). Widespread, populous species which are native, resident and in receipt of biodiversity action plans produced significantly higher publication metrics. Guild was also significant, birds of prey the most studied group, with pigeons and doves the least studied. The model outputs for both metrics were very similar, suggesting that, at least in this example, research quantity and quality were highly correlated. The results highlight three key gaps in the evidence base, with fewer citations and publications relating to migrant breeders, introduced species and species which have experienced contractions in distribution. We suggest that the use of publication metrics in this way provides a novel approach to understanding the scale and drivers of both research quantity and impact at a species level and could be widely applied, both taxonomically and geographically.
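
    For reference, the two publication metrics used here are straightforward to compute; the sketch below counts papers and derives the h-index (the largest h such that h papers each have at least h citations) from hypothetical per-paper citation counts.

```python
# Minimal sketch of the two publication metrics: total paper count and h-index.
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

if __name__ == "__main__":
    species_citations = [25, 8, 5, 3, 3, 1, 0]  # hypothetical citations per paper
    print(len(species_citations), h_index(species_citations))  # 7 papers, h-index 3
```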

  4. Which Species Are We Researching and Why? A Case Study of the Ecology of British Breeding Birds

    PubMed Central

    McKenzie, Ailsa J.; Robertson, Peter A.

    2015-01-01

    Our ecological knowledge base is extensive, but the motivations for research are many and varied, leading to unequal species representation and coverage. As this evidence is used to support a wide range of conservation, management and policy actions, it is important that gaps and biases are identified and understood. In this paper we detail a method for quantifying research effort and impact at the individual species level, and go on to investigate the factors that best explain between-species differences in outputs. We do this using British breeding birds as a case study, producing a ranked list of species based on two scientific publication metrics: total number of papers (a measure of research quantity) and h-index (a measure of the number of highly cited papers on a topic – an indication of research quality). Widespread, populous species which are native, resident and in receipt of biodiversity action plans produced significantly higher publication metrics. Guild was also significant, birds of prey the most studied group, with pigeons and doves the least studied. The model outputs for both metrics were very similar, suggesting that, at least in this example, research quantity and quality were highly correlated. The results highlight three key gaps in the evidence base, with fewer citations and publications relating to migrant breeders, introduced species and species which have experienced contractions in distribution. We suggest that the use of publication metrics in this way provides a novel approach to understanding the scale and drivers of both research quantity and impact at a species level and could be widely applied, both taxonomically and geographically. PMID:26154759

  5. Seismic Data Archive Quality Assurance -- Analytics Adding Value at Scale

    NASA Astrophysics Data System (ADS)

    Casey, R. E.; Ahern, T. K.; Sharer, G.; Templeton, M. E.; Weertman, B.; Keyson, L.

    2015-12-01

    Since the emergence of real-time delivery of seismic data over the last two decades, solutions for near-real-time quality analysis and station monitoring have been developed by data producers and data stewards. This has allowed for a nearly constant awareness of the quality of the incoming data and the general health of the instrumentation around the time of data capture. Modern quality assurance systems are evolving to provide ready access to a large variety of metrics, a rich and self-correcting history of measurements, and more importantly the ability to access these quality measurements en masse through a programmatic interface. The MUSTANG project at the IRIS Data Management Center is working to achieve 'total archival data quality', where a large number of standardized metrics, some computationally expensive, are generated and stored for all data from decades past to the near present. To perform this on a 300 TB archive of compressed time series requires considerable resources in network I/O, disk storage, and CPU capacity to achieve scalability, not to mention the technical expertise to develop and maintain it. In addition, staff scientists are necessary to develop the system metrics and employ them to produce comprehensive and timely data quality reports to assist seismic network operators in maintaining their instrumentation. All of these metrics must be available to the scientist 24/7. We will present an overview of the MUSTANG architecture including the development of its standardized metrics code in R. We will show examples of the metrics values that we make publicly available to scientists and educators and show how we are sharing the algorithms used. We will also discuss the development of a capability that will enable scientific researchers to specify data quality constraints on their requests for data, providing only the data that is best suited to their area of study.

  6. Contrast-based sensorless adaptive optics for retinal imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T.O.; He, Zheng; Metha, Andrew

    2015-01-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes. PMID:26417525
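
    As an illustration of the kind of scalar objective a sensorless AO loop can maximize, the sketch below computes a generic normalized-variance contrast metric on an image; this is an assumed stand-in for illustration, not the specific robust metric proposed in the paper.

```python
# Illustrative contrast-style image quality metric for sensorless AO.
# Normalized variance is a generic sharpness measure used here as an
# assumption; it is not the metric proposed in the study.
import numpy as np

def normalized_variance(image: np.ndarray) -> float:
    """Higher values indicate higher image contrast/sharpness."""
    mean = image.mean()
    return float(((image - mean) ** 2).mean() / (mean + 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    low_contrast = rng.normal(100.0, 2.0, (64, 64))
    high_contrast = rng.normal(100.0, 10.0, (64, 64))
    print(normalized_variance(low_contrast) < normalized_variance(high_contrast))  # True
```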

  7. Quantifying variability: patterns in water quality and biota from a long-term, multi-stream dataset

    Treesearch

    Camille Flinders; Douglas McLaughlin

    2016-01-01

    Effective water resources assessment and management requires quantitative information on the variability of ambient and biological conditions in aquatic communities. Although it is understood that natural systems are variable, robust estimates of variation in water quality and biotic endpoints (e.g. community-based structure and function metrics) are rare in US waters...

  8. An automated, quantitative, and case-specific evaluation of deformable image registration in computed tomography images

    NASA Astrophysics Data System (ADS)

    Kierkels, R. G. J.; den Otter, L. A.; Korevaar, E. W.; Langendijk, J. A.; van der Schaaf, A.; Knopf, A. C.; Sijtsema, N. M.

    2018-02-01

    A prerequisite for adaptive dose-tracking in radiotherapy is the assessment of the deformable image registration (DIR) quality. In this work, various metrics that quantify DIR uncertainties are investigated using realistic deformation fields of 26 head and neck and 12 lung cancer patients. Metrics related to the physiological feasibility of the deformation (the Jacobian determinant, harmonic energy (HE), and octahedral shear strain (OSS)) and to its numerical robustness (the inverse consistency error (ICE), transitivity error (TE), and distance discordance metric (DDM)) were investigated. The deformable registrations were performed using a B-spline transformation model. The DIR error metrics were log-transformed and correlated (Pearson) against the log-transformed ground-truth error on a voxel level. Correlations of r  ⩾  0.5 were found for the DDM and HE. Given a DIR tolerance threshold of 2.0 mm and a negative predictive value of 0.90, the DDM and HE thresholds were 0.49 mm and 0.014, respectively. In conclusion, the log-transformed DDM and HE can be used to identify voxels at risk for large DIR errors with a large negative predictive value. The HE and/or DDM can therefore be used to perform automated quality assurance of each CT-based DIR for head and neck and lung cancer patients.
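
    One of the feasibility metrics named above, the Jacobian determinant, can be computed from a displacement field with finite differences. The sketch below handles a synthetic 2-D field for brevity (the study works with 3-D B-spline registrations); values at or below zero indicate folding.

```python
# Minimal 2-D sketch of the Jacobian determinant of a deformation field,
# computed with finite differences. Values <= 0 indicate implausible folding.
import numpy as np

def jacobian_determinant_2d(dx, dy, spacing=(1.0, 1.0)):
    """dx, dy: displacement components (e.g. mm) on a (rows, cols) grid."""
    # The transform is identity + displacement; differentiate each component.
    ddx_drow, ddx_dcol = np.gradient(dx, spacing[0], spacing[1])
    ddy_drow, ddy_dcol = np.gradient(dy, spacing[0], spacing[1])
    return (1.0 + ddx_dcol) * (1.0 + ddy_drow) - ddx_drow * ddy_dcol

if __name__ == "__main__":
    rows, cols = np.mgrid[0:32, 0:32].astype(float)
    dx = 0.05 * cols                 # a simple 5% stretch along the column axis
    dy = np.zeros_like(dx)
    print(jacobian_determinant_2d(dx, dy).mean())  # ~1.05 for a 5% expansion
```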

  9. Model-based color halftoning using direct binary search.

    PubMed

    Agar, A Ufuk; Allebach, Jan P

    2005-12-01

    In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous tone original color image and the color halftone image. We exploit the differences in how the human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement based color printer dot interaction model to prevent the artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in resulting halftones. We present the color halftones which demonstrate the efficacy of our method.

  10. Information fusion performance evaluation for motion imagery data using mutual information: initial study

    NASA Astrophysics Data System (ADS)

    Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik

    2015-06-01

    As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, such as in content-based image retrieval (CBIR). Imagery data are segmented, automatically analyzed, and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms by comparing the outcome of a given algorithm to ground truth and reporting several types of errors. Given the ground truth of motion imagery data, it computes detection failure, false alarm, precision and recall metrics, background and foreground region statistics, as well as splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
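
    The mutual information term mentioned above can be estimated from a joint histogram of two images, as in the sketch below; the bin count and synthetic test images are illustrative assumptions.

```python
# Minimal histogram-based estimate of mutual information (MI) between two images.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of img_b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    a = rng.integers(0, 256, (64, 64)).astype(float)
    b = a + rng.normal(0, 5, a.shape)                   # closely related image
    c = rng.integers(0, 256, (64, 64)).astype(float)    # unrelated image
    print(mutual_information(a, b) > mutual_information(a, c))  # True
```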

  11. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors.

    PubMed

    Lange, Maximilian; Dechant, Benjamin; Rebmann, Corinna; Vohland, Michael; Cuntz, Matthias; Doktor, Daniel

    2017-08-11

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure.
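
    For context, the NDVI underlying both the ground-based and satellite time series is a simple band ratio; the sketch below computes it from red and near-infrared reflectance and correlates a synthetic in situ series with a noisier satellite-like series (all values are placeholders).

```python
# Minimal sketch of NDVI and a correlation between two NDVI time series.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    red = np.linspace(0.08, 0.03, 30) + rng.normal(0, 0.002, 30)   # green-up: red falls
    nir = np.linspace(0.25, 0.45, 30) + rng.normal(0, 0.005, 30)   # NIR rises
    in_situ = ndvi(nir, red)
    satellite = in_situ + rng.normal(0, 0.02, 30)  # noisier satellite retrieval
    print(round(float(np.corrcoef(in_situ, satellite)[0, 1]), 3))  # high correlation expected
```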

  12. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors

    PubMed Central

    Lange, Maximilian; Rebmann, Corinna; Cuntz, Matthias; Doktor, Daniel

    2017-01-01

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure. PMID:28800065

  13. The accurate assessment of small-angle X-ray scattering data

    DOE PAGES

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; ...

    2015-01-23

    Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stassi, D.; Ma, H.; Schmidt, T. G., E-mail: taly.gilat-schmidt@marquette.edu

    Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three readers using a five point Likert scale. Results: There was no statistically significant difference between inter-reader and reader-algorithm agreement for either MAD or CCC metrics (p > 0.1). The algorithm phase was within 2% of the consensus phase in 15/21 of cases. The average absolute difference between consensus and algorithm best phases was 2.29% ± 2.47%, with a maximum difference of 8%. Average image quality scores for the algorithm chosen best phase were 4.01 ± 0.65 overall, 3.33 ± 1.27 for right coronary artery (RCA), 4.50 ± 0.35 for left anterior descending (LAD) artery, and 4.50 ± 0.35 for left circumflex artery (LCX). Average image quality scores for the consensus best phase were 4.11 ± 0.54 overall, 3.44 ± 1.03 for RCA, 4.39 ± 0.39 for LAD, and 4.50 ± 0.18 for LCX. There was no statistically significant difference (p > 0.1) between the image quality scores of the algorithm phase and the consensus phase. Conclusions: The proposed algorithm was statistically equivalent to a reader in selecting an optimal cardiac phase for CCTA exams. When reader and algorithm phases differed by >2%, image quality as rated by blinded readers was statistically equivalent. By detecting the optimal phase for CCTA reconstruction, the proposed algorithm is expected to improve coronary artery visualization in CCTA exams.

  15. State of the art metrics for aspect oriented programming

    NASA Astrophysics Data System (ADS)

    Ghareb, Mazen Ismaeel; Allen, Gary

    2018-04-01

    The quality evaluation of software, e.g., defect measurement, gains significance with the increasing use of software applications. Metric measurements are considered the primary indicator for defect prediction and software maintenance in various empirical studies of software products. However, there is no agreement on which metrics are compelling quality indicators for novel development approaches such as Aspect Oriented Programming (AOP). AOP intends to enhance programming quality by providing new and novel constructs for the development of systems, for example point cuts, advice and inter-type relationships. Hence, it is not evident whether quality indicators for AOP can be derived from direct expansions of traditional OO measurements. On the other hand, investigations of AOP do regularly depend on established coupling measurements. Notwithstanding the late adoption of AOP in empirical studies, coupling measurements have been adopted as useful markers of fault proneness in this context. In this paper we investigate the state of the art metrics for the measurement of Aspect Oriented systems development.

  16. Biotic, water-quality, and hydrologic metrics calculated for the analysis of temporal trends in National Water Quality Assessment Program Data in the Western United States

    USGS Publications Warehouse

    Wiele, Stephen M.; Brasher, Anne M.D.; Miller, Matthew P.; May, Jason T.; Carpenter, Kurt D.

    2012-01-01

    The U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program was established by Congress in 1991 to collect long-term, nationally consistent information on the quality of the Nation's streams and groundwater. The NAWQA Program utilizes interdisciplinary and dynamic studies that link the chemical and physical conditions of streams (such as flow and habitat) with ecosystem health and the biologic condition of algae, aquatic invertebrates, and fish communities. This report presents metrics derived from NAWQA data and the U.S. Geological Survey streamgaging network for sampling sites in the Western United States, as well as associated chemical, habitat, and streamflow properties. The metrics characterize the conditions of algae, aquatic invertebrates, and fish. In addition, we have compiled climate records and basin characteristics related to the NAWQA sampling sites. The calculated metrics and compiled data can be used to analyze ecohydrologic trends over time.

  17. Multivariate Analyses of Quality Metrics for Crystal Structures in the PDB Archive.

    PubMed

    Shao, Chenghua; Yang, Huanwang; Westbrook, John D; Young, Jasmine Y; Zardecki, Christine; Burley, Stephen K

    2017-03-07

    Following deployment of an augmented validation system by the Worldwide Protein Data Bank (wwPDB) partnership, the quality of crystal structures entering the PDB has improved. Of significance are improvements in quality measures now prominently displayed in the wwPDB validation report. Comparisons of PDB depositions made before and after introduction of the new reporting system show improvements in quality measures relating to pairwise atom-atom clashes, side-chain torsion angle rotamers, and local agreement between the atomic coordinate structure model and experimental electron density data. These improvements are largely independent of resolution limit and sample molecular weight. No significant improvement in the quality of associated ligands was observed. Principal component analysis revealed that structure quality could be summarized with three measures (Rfree, real-space R factor Z score, and a combined molecular geometry quality metric), which can in turn be reduced to a single overall quality metric readily interpretable by all PDB archive users. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. A study of image quality for radar image processing. [synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.

    1982-01-01

    Methods developed for image quality metrics are reviewed with focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as a way of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized-mean-square-error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selective levels of degradation are being applied to simulated synthetic radar images to test the validity of these metrics.
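
    Two of the candidate metrics listed above lend themselves to a brief sketch; the generic definitions below (signal-to-noise ratio in dB and normalized mean-square error) are assumptions for illustration rather than the exact formulations used in the study.

```python
# Generic sketches of a signal-to-noise ratio (dB) and a normalized
# mean-square error (NMSE) between a reference and a degraded image.
import numpy as np

def snr_db(signal, noise):
    return float(10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2)))

def nmse(reference, degraded):
    return float(np.sum((reference - degraded) ** 2) / np.sum(reference ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    ref = rng.uniform(0.0, 1.0, (128, 128))     # toy reference image
    noise = rng.normal(0.0, 0.05, ref.shape)    # additive noise
    print(round(snr_db(ref, noise), 1), round(nmse(ref, ref + noise), 4))
```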

  19. SU-E-T-436: Fluence-Based Trajectory Optimization for Non-Coplanar VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smyth, G; Bamber, JC; Bedford, JL

    2015-06-15

    Purpose: To investigate a fluence-based trajectory optimization technique for non-coplanar VMAT for brain cancer. Methods: Single-arc non-coplanar VMAT trajectories were determined using a heuristic technique for five patients. Organ at risk (OAR) volume intersected during raytracing was minimized for two cases: absolute volume and the sum of relative volumes weighted by OAR importance. These trajectories and coplanar VMAT formed starting points for the fluence-based optimization method. Iterative least squares optimization was performed on control points 24° apart in gantry rotation. Optimization minimized the root-mean-square (RMS) deviation of PTV dose from the prescription (relative importance 100), maximum dose to the brainstem (10), optic chiasm (5), globes (5) and optic nerves (5), plus mean dose to the lenses (5), hippocampi (3), temporal lobes (2), cochleae (1) and brain excluding other regions of interest (1). Control point couch rotations were varied in steps of up to 10° and accepted if the cost function improved. Final treatment plans were optimized with the same objectives in an in-house planning system and evaluated using a composite metric - the sum of optimization metrics weighted by importance. Results: The composite metric decreased with fluence-based optimization in 14 of the 15 plans. In the remaining case its overall value, and the PTV and OAR components, were unchanged but the balance of OAR sparing differed. PTV RMS deviation was improved in 13 cases and unchanged in two. The OAR component was reduced in 13 plans. In one case the OAR component increased but the composite metric decreased - a 4 Gy increase in OAR metrics was balanced by a reduction in PTV RMS deviation from 2.8% to 2.6%. Conclusion: Fluence-based trajectory optimization improved plan quality as defined by the composite metric. While dose differences were case specific, fluence-based optimization improved both PTV and OAR dosimetry in 80% of cases.
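
    The composite metric described above is an importance-weighted sum of the individual optimization metrics; a minimal sketch follows, using importance weights of the kind quoted above for a few structures together with hypothetical metric values.

```python
# Minimal sketch of a composite plan-quality metric: the sum of individual
# optimization metrics weighted by their relative importance. Metric values
# below are hypothetical.
def composite_metric(metrics, importance):
    """metrics/importance: dicts keyed by objective name."""
    return sum(importance[name] * value for name, value in metrics.items())

if __name__ == "__main__":
    importance = {"ptv_rms": 100, "brainstem_max": 10, "chiasm_max": 5, "lens_mean": 5}
    plan_a = {"ptv_rms": 0.028, "brainstem_max": 42.0, "chiasm_max": 30.0, "lens_mean": 4.0}
    plan_b = {"ptv_rms": 0.026, "brainstem_max": 44.0, "chiasm_max": 30.0, "lens_mean": 4.0}
    # A lower composite value indicates a better plan under these objectives.
    print(composite_metric(plan_a, importance), composite_metric(plan_b, importance))
```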

  20. Improving benchmarking by using an explicit framework for the development of composite indicators: an example using pediatric quality of care

    PubMed Central

    2010-01-01

    Background: The measurement of healthcare provider performance is becoming more widespread. Physicians have been guarded about performance measurement, in part because the methodology for comparative measurement of care quality is underdeveloped. Comprehensive quality improvement will require comprehensive measurement, implying the aggregation of multiple quality metrics into composite indicators. Objective: To present a conceptual framework to develop comprehensive, robust, and transparent composite indicators of pediatric care quality, and to highlight aspects specific to quality measurement in children. Methods: We reviewed the scientific literature on composite indicator development, health systems, and quality measurement in the pediatric healthcare setting. Frameworks were selected for explicitness and applicability to a hospital-based measurement system. Results: We synthesized various frameworks into a comprehensive model for the development of composite indicators of quality of care. Among its key premises, the model proposes identifying structural, process, and outcome metrics for each of the Institute of Medicine's six domains of quality (safety, effectiveness, efficiency, patient-centeredness, timeliness, and equity) and presents a step-by-step framework for embedding the quality of care measurement model into composite indicator development. Conclusions: The framework presented offers researchers an explicit path to composite indicator development. Without a scientifically robust and comprehensive approach to measurement of the quality of healthcare, performance measurement will ultimately fail to achieve its quality improvement goals. PMID:20181129

  1. Metrics for Assessing the Quality of Groundwater Used for Public Supply, CA, USA: Equivalent-Population and Area.

    PubMed

    Belitz, Kenneth; Fram, Miranda S; Johnson, Tyler D

    2015-07-21

    Data from 11,000 public supply wells in 87 study areas were used to assess the quality of nearly all of the groundwater used for public supply in California. Two metrics were developed for quantifying groundwater quality: area with high concentrations (km² or proportion) and equivalent-population relying upon groundwater with high concentrations (number of people or proportion). Concentrations are considered high if they are above a human-health benchmark. When expressed as proportions, the metrics are area-weighted and population-weighted detection frequencies. On a statewide-scale, about 20% of the groundwater used for public supply has high concentrations for one or more constituents (23% by area and 18% by equivalent-population). On the basis of both area and equivalent-population, trace elements are more prevalent at high concentrations than either nitrate or organic compounds at the statewide-scale, in eight of nine hydrogeologic provinces, and in about three-quarters of the study areas. At a statewide-scale, nitrate is more prevalent than organic compounds based on area, but not on the basis of equivalent-population. The approach developed for this paper, unlike many studies, recognizes the importance of appropriately weighting information when changing scales, and is broadly applicable to other areas.
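
    Expressed as proportions, the two metrics reduce to area-weighted and population-weighted detection frequencies; the sketch below aggregates hypothetical study-area totals to illustrate the calculation.

```python
# Minimal sketch of area-weighted and equivalent-population-weighted
# detection frequencies for high-concentration groundwater. Numbers are
# hypothetical study-area totals, not values from the study.
def weighted_high_proportion(high, totals):
    """Proportion of the summed area (or population) with high concentrations."""
    return sum(high) / sum(totals)

if __name__ == "__main__":
    area_total = [1200.0, 800.0, 500.0]   # km2 of groundwater used for supply, per study area
    area_high = [300.0, 150.0, 125.0]     # km2 with concentrations above a benchmark
    pop_total = [2.0e6, 5.0e5, 1.0e5]     # equivalent population, per study area
    pop_high = [3.0e5, 1.2e5, 3.0e4]
    print(weighted_high_proportion(area_high, area_total),   # area-weighted frequency
          weighted_high_proportion(pop_high, pop_total))     # population-weighted frequency
```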

  2. A scoring metric for multivariate data for reproducibility analysis using chemometric methods

    PubMed Central

    Sheen, David A.; de Carvalho Rocha, Werickson Fortunato; Lippa, Katrice A.; Bearden, Daniel W.

    2017-01-01

    Process quality control and reproducibility in emerging measurement fields such as metabolomics are normally assured by interlaboratory comparison testing. As a part of this testing process, spectral features from a spectroscopic method such as nuclear magnetic resonance (NMR) spectroscopy are attributed to particular analytes within a mixture, and it is the metabolite concentrations that are returned for comparison between laboratories. However, data quality may also be assessed directly by using binned spectral data before the time-consuming identification and quantification. Use of the binned spectra has some advantages, including preserving information about trace constituents and enabling identification of process difficulties. In this paper, we demonstrate the use of binned NMR spectra to conduct a detailed interlaboratory comparison and composition analysis. Spectra of synthetic and biologically-obtained metabolite mixtures, taken from a previous interlaboratory study, are compared with cluster analysis using a variety of distance and entropy metrics. The individual measurements are then evaluated based on where they fall within their clusters, and a laboratory-level scoring metric is developed, which provides an assessment of each laboratory's individual performance. PMID:28694553

  3. SU-E-T-616: Plan Quality Assessment of Both Treatment Planning System Dose and Measurement-Based 3D Reconstructed Dose in the Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olch, A

    2015-06-15

    Purpose: Systematic radiotherapy plan quality assessment promotes quality improvement. Software tools can perform this analysis by applying site-specific structure dose metrics. The next step is to similarly evaluate the quality of the dose delivery. This study defines metrics for acceptable doses to targets and normal organs for a particular treatment site and scores each plan accordingly. The input can be the TPS or the measurement-based 3D patient dose. From this analysis, one can determine whether the delivered dose distribution to the patient receives a score which is comparable to the TPS plan score; otherwise replanning may be indicated. Methods: Eleven neuroblastoma patient plans were exported from Eclipse to the Quality Reports program. A scoring algorithm defined a score for each normal and target structure based on dose-volume parameters. Each plan was scored by this algorithm and the percentage of total possible points was obtained. Each plan also underwent IMRT QA measurements with a Mapcheck2 or ArcCheck. These measurements were input into the 3DVH program to compute the patient 3D dose distribution, which was analyzed using the same scoring algorithm as the TPS plan. Results: The mean quality score for the TPS plans was 75.37% (std dev=14.15%) compared to 71.95% (std dev=13.45%) for the 3DVH dose distribution. For 3/11 plans, the 3DVH-based quality score was higher than the TPS score, by between 0.5 and 8.4 percentage points. Eight of 11 plan scores decreased based on IMRT QA measurements, by 1.2 to 18.6 points. Conclusion: Software was used to determine the degree to which the plan quality score differed between the TPS and measurement-based dose. Although the delivery score was generally in good agreement with the planned dose score, there were some that improved while there was one plan whose delivered dose quality was significantly less than planned. This methodology helps evaluate both planned and delivered dose quality. Sun Nuclear Corporation has provided a license for the software described.

  4. The use of alternative pollutant metrics in time-series studies of ambient air pollution and respiratory emergency department visits.

    PubMed

    Darrow, Lyndsey A; Klein, Mitchel; Sarnat, Jeremy A; Mulholland, James A; Strickland, Matthew J; Sarnat, Stefanie E; Russell, Armistead G; Tolbert, Paige E

    2011-01-01

    Various temporal metrics of daily pollution levels have been used to examine the relationships between air pollutants and acute health outcomes. However, daily metrics of the same pollutant have rarely been systematically compared within a study. In this analysis, we describe the variability of effect estimates attributable to the use of different temporal metrics of daily pollution levels. We obtained hourly measurements of ambient particulate matter (PM₂.₅), carbon monoxide (CO), nitrogen dioxide (NO₂), and ozone (O₃) from air monitoring networks in 20-county Atlanta for the time period 1993-2004. For each pollutant, we created (1) a daily 1-h maximum; (2) a 24-h average; (3) a commute average; (4) a daytime average; (5) a nighttime average; and (6) a daily 8-h maximum (only for O₃). Using Poisson generalized linear models, we examined associations between daily counts of respiratory emergency department visits and the previous day's pollutant metrics. Variability was greatest across O₃ metrics, with the 8-h maximum, 1-h maximum, and daytime metrics yielding strong positive associations and the nighttime O₃ metric yielding a negative association (likely reflecting confounding by air pollutants oxidized by O₃). With the exception of daytime metric, all of the CO and NO₂ metrics were positively associated with respiratory emergency department visits. Differences in observed associations with respiratory emergency room visits among temporal metrics of the same pollutant were influenced by the diurnal patterns of the pollutant, spatial representativeness of the metrics, and correlation between each metric and copollutant concentrations. Overall, the use of metrics based on the US National Ambient Air Quality Standards (for example, the use of a daily 8-h maximum O₃ as opposed to a 24-h average metric) was supported by this analysis. Comparative analysis of temporal metrics also provided insight into underlying relationships between specific air pollutants and respiratory health.
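
    The daily temporal metrics above can be derived from hourly concentrations roughly as in the sketch below; the hour ranges chosen for the commute, daytime and nighttime windows are illustrative assumptions, not necessarily the definitions used in the study.

```python
# Minimal sketch of deriving daily temporal metrics from 24 hourly values.
# The commute/daytime/nighttime hour windows are illustrative assumptions.
import numpy as np

def daily_metrics(hourly):
    """hourly: array of 24 concentrations, indexed by hour of day (0-23)."""
    hourly = np.asarray(hourly, dtype=float)
    hours = np.arange(24)
    commute = ((hours >= 7) & (hours <= 9)) | ((hours >= 16) & (hours <= 18))
    daytime = (hours >= 8) & (hours <= 19)
    return {
        "max_1h": float(hourly.max()),
        "avg_24h": float(hourly.mean()),
        "commute_avg": float(hourly[commute].mean()),
        "daytime_avg": float(hourly[daytime].mean()),
        "nighttime_avg": float(hourly[~daytime].mean()),
        # Daily maximum 8-h running mean, as used for ozone.
        "max_8h": float(max(hourly[i:i + 8].mean() for i in range(17))),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    hours = np.arange(24)
    # Synthetic ozone-like diurnal profile peaking in mid-afternoon.
    ozone = 20 + 30 * np.clip(np.sin(np.pi * (hours - 6) / 18), 0, None) + rng.normal(0, 2, 24)
    print(daily_metrics(ozone))
```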

  5. Examination of the properties of IMRT and VMAT beams and evaluation against pre-treatment quality assurance results

    NASA Astrophysics Data System (ADS)

    Crowe, S. B.; Kairn, T.; Middlebrook, N.; Sutherland, B.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.

    2015-03-01

    This study aimed to provide a detailed evaluation and comparison of a range of modulated beam evaluation metrics, in terms of their correlation with QA testing results and their variation between treatment sites, for a large number of treatments. Ten metrics including the modulation index (MI), fluence map complexity, modulation complexity score (MCS), mean aperture displacement (MAD) and small aperture score (SAS) were evaluated for 546 beams from 122 intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) treatment plans targeting the anus, rectum, endometrium, brain, head and neck and prostate. The calculated sets of metrics were evaluated in terms of their relationships to each other and their correlation with the results of electronic portal imaging based quality assurance (QA) evaluations of the treatment beams. Evaluation of the MI, MAD and SAS suggested that beams used in treatments of the anus, rectum, head and neck were more complex than the prostate and brain treatment beams. Seven of the ten beam complexity metrics were found to be strongly correlated with the results from QA testing of the IMRT beams (p < 0.00008). For example, values of SAS (with multileaf collimator apertures narrower than 10 mm defined as ‘small’) less than 0.2 also identified QA passing IMRT beams with 100% specificity. However, few of the metrics are correlated with the results from QA testing of the VMAT beams, whether they were evaluated as whole 360° arcs or as 60° sub-arcs. Select evaluation of beam complexity metrics (at least MI, MCS and SAS) is therefore recommended, as an intermediate step in the IMRT QA chain. Such evaluation may also be useful as a means of periodically reviewing VMAT planning or optimiser performance.
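
    A small aperture score of the kind evaluated above can be sketched as the fraction of open leaf-pair apertures narrower than the 10 mm threshold, averaged over control points; the aperture data below are hypothetical and the published definition may differ in detail.

```python
# Minimal sketch of a small aperture score (SAS): the fraction of open MLC
# leaf-pair apertures narrower than a threshold, averaged over control points.
# Aperture widths below are hypothetical.
import numpy as np

def small_aperture_score(aperture_widths_mm, threshold_mm=10.0):
    """aperture_widths_mm: one array of leaf-pair gaps (mm) per control point."""
    fractions = []
    for widths in aperture_widths_mm:
        widths = np.asarray(widths, dtype=float)
        open_pairs = widths[widths > 0]          # closed pairs do not count
        if open_pairs.size:
            fractions.append(float(np.mean(open_pairs < threshold_mm)))
    return float(np.mean(fractions)) if fractions else 0.0

if __name__ == "__main__":
    control_points = [np.array([0, 4, 8, 25, 30]),   # 2 of 4 open pairs are small
                      np.array([0, 0, 12, 18, 22])]  # no small apertures
    print(small_aperture_score(control_points))      # 0.25 for this toy beam
```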

  6. Changes in biological communities of the Fountain Creek Basin, Colorado, 2003–2016, in relation to antecedent streamflow, water quality, and habitat

    USGS Publications Warehouse

    Roberts, James J.; Bruce, James F.; Zuellig, Robert E.

    2018-01-08

    The analysis described in this report is part of a longterm project monitoring the biological communities, habitat, and water quality of the Fountain Creek Basin. Biology, habitat, and water-quality data have been collected at 10 sites since 2003. These data include annual samples of aquatic invertebrate communities, fish communities, water quality, and quantitative riverine habitat. This report examines trends in biological communities from 2003 to 2016 and explores relationships between biological communities and abiotic variables (antecedent streamflow, physical habitat, and water quality). Six biological metrics (three invertebrate and three fish) and four individual fish species were used to examine trends in these data and how streamflow, habitat, and (or) water quality may explain these trends. The analysis of 79 trends shows that the majority of significant trends decreased over the trend period. Overall, 19 trends before adjustments for streamflow in the fish (12) and invertebrate (7) metrics were all decreasing except for the metric Invertebrate Species Richness at the most upstream site in Monument Creek. Seven of these trends were explained by streamflow and four trends were revealed that were originally masked by variability in antecedent streamflow. Only two sites (Jimmy Camp Creek at Fountain, CO and Fountain Creek near Pinon, CO) had no trends in the fish or invertebrate metrics. Ten of the streamflow-adjusted trends were explained by habitat, one was explained by water quality, and five were not explained by any of the variables that were tested. Overall, from 2003 to 2016, all the fish metric trends were decreasing with an average decline of 40 percent, and invertebrate metrics decreased on average by 9.5 percent. A potential peak streamflow threshold was identified above which there is severely limited production of age-0 flathead chub (Platygobio gracilis).

  7. Comparison of clinical and physics scoring of PET images when image reconstruction parameters are varied.

    PubMed

    Walsh, C; Johnston, C; Sheehy, N; O' Reilly, G

    2013-02-01

    In this study, quantitative and qualitative image quality (IQ) measurements were compared with clinical judgement of IQ in positron emission tomography (PET). The limitations of IQ metrics and the proposed criteria of acceptability for PET scanners are discussed. Phantom and patient images were reconstructed using seven different iterative reconstruction protocols. For each reconstructed set of images, IQ was scored based both on visual analysis and on the quantitative metrics. The quantitative physics metrics did not rank the reconstruction protocols in the same order as the clinicians' scoring of perceived IQ (R(s)=-0.54). Better agreement was achieved when comparing the clinical perception of IQ to the physicist's visual assessment of IQ in the phantom images (R(s)=+0.59). The closest agreement was seen between the quantitative physics metrics and the measurement of the standard uptake values (SUVs) in small tumours (R(s)=+0.92). Given the disparity between the clinical perception of IQ and the physics metrics, a cautious approach to the use of IQ measurements for determining suspension levels is warranted.

  8. Comparing de novo genome assembly: the long and short of it.

    PubMed

    Narzisi, Giuseppe; Mishra, Bud

    2011-04-29

    Recent advances in DNA sequencing technology and their focal role in Genome Wide Association Studies (GWAS) have rekindled a growing interest in the whole-genome sequence assembly (WGSA) problem, thereby, inundating the field with a plethora of new formalizations, algorithms, heuristics and implementations. And yet, scant attention has been paid to comparative assessments of these assemblers' quality and accuracy. No commonly accepted and standardized method for comparison exists yet. Even worse, widely used metrics to compare the assembled sequences emphasize only size, poorly capturing the contig quality and accuracy. This paper addresses these concerns: it highlights common anomalies in assembly accuracy through a rigorous study of several assemblers, compared under both standard metrics (N50, coverage, contig sizes, etc.) as well as a more comprehensive metric (Feature-Response Curves, FRC) that is introduced here; FRC transparently captures the trade-offs between contigs' quality against their sizes. For this purpose, most of the publicly available major sequence assemblers--both for low-coverage long (Sanger) and high-coverage short (Illumina) reads technologies--are compared. These assemblers are applied to microbial (Escherichia coli, Brucella, Wolbachia, Staphylococcus, Helicobacter) and partial human genome sequences (Chr. Y), using sequence reads of various read-lengths, coverages, accuracies, and with and without mate-pairs. It is hoped that, based on these evaluations, computational biologists will identify innovative sequence assembly paradigms, bioinformaticists will determine promising approaches for developing "next-generation" assemblers, and biotechnologists will formulate more meaningful design desiderata for sequencing technology platforms. A new software tool for computing the FRC metric has been developed and is available through the AMOS open-source consortium.
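
    For reference, the widely used N50 size metric discussed above is the contig length L such that contigs of length at least L account for half of the total assembly size; a minimal sketch follows.

```python
# Minimal sketch of the N50 size metric for a set of contig lengths.
def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length
    return 0

if __name__ == "__main__":
    contigs = [5000, 4000, 3000, 1000, 500, 500]  # hypothetical assembly
    print(n50(contigs))  # 4000: the two largest contigs already cover half of the 14000 bp total
```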

  9. Digital Elevation Model from Non-Metric Camera in Uas Compared with LIDAR Technology

    NASA Astrophysics Data System (ADS)

    Dayamit, O. M.; Pedro, M. F.; Ernesto, R. R.; Fernando, B. L.

    2015-08-01

    Digital Elevation Model (DEM) data, as a representation of surface topography, are in high demand for use in spatial analysis and modelling. Many methods of acquiring and processing such data have been developed, from traditional surveying to modern technologies such as LIDAR. On the other hand, over the past four years the development of Unmanned Aerial Systems (UAS) for geomatics has made it possible to acquire surface data with an on-board non-metric digital camera in a short time and with quality suitable for many analyses. Data collection with UAS has attracted tremendous attention because it enables the determination of volume changes over time, monitoring of breakwaters, and hydrological modelling including flood simulation and drainage networks, among other applications that rely on DEMs for proper analysis. DEM quality is considered a combination of DEM accuracy and DEM suitability, so this paper analyses the quality of a DEM derived from a non-metric digital camera on a UAS compared with a DEM from LIDAR for the same geographic space covering 4 km2 in Artemisa province, Cuba. This area is the subject of urban planning, which requires knowledge of the topographic characteristics in order to analyse hydrological behaviour and decide the best places for roads, buildings and so on. Because LIDAR remains the more accurate method, it offers a reference against which to test DEMs from non-metric digital cameras on UAS, which are much more flexible and provide a solution for many applications that need detailed DEMs.

  10. The Creation of a Pediatric Hospital Medicine Dashboard: Performance Assessment for Improvement.

    PubMed

    Fox, Lindsay Anne; Walsh, Kathleen E; Schainker, Elisabeth G

    2016-07-01

    Leaders of pediatric hospital medicine (PHM) recommended a clinical dashboard to monitor clinical practice and make improvements. To date, however, no programs report implementing a dashboard including the proposed broad range of metrics across multiple sites. We sought to (1) develop and populate a clinical dashboard to demonstrate productivity, quality, group sustainability, and value added for an academic division of PHM across 4 inpatient sites; (2) share dashboard data with division members and administrations to improve performance and guide program development; and (3) revise the dashboard to optimize its utility. Division members proposed a dashboard based on PHM recommendations. We assessed feasibility of data collection and defined and modified metrics to enable collection of comparable data across sites. We gathered data and shared the results with division members and administrations. We collected quarterly and annual data from October 2011 to September 2013. We found comparable metrics across all sites for descriptive, productivity, group sustainability, and value-added domains; only 72% of all quality metrics were tracked in a comparable fashion. After sharing the data, we saw increased timeliness of nursery discharges and an increase in hospital committee participation and grant funding. PHM dashboards have the potential to guide program development, mobilize faculty to improve care, and demonstrate program value to stakeholders. Dashboard implementation at other institutions and data sharing across sites may help to better define and strengthen the field of PHM by creating benchmarks and help improve the quality of pediatric hospital care. Copyright © 2016 by the American Academy of Pediatrics.

  11. Episode-Based Payment for Perinatal Care in Medicaid: Implications for Practice and Policy.

    PubMed

    Jarlenski, Marian; Borrero, Sonya; La Charité, Trey; Zite, Nikki B

    2016-06-01

    Medicaid is an important source of health insurance coverage for low-income pregnant women and covers nearly half of all deliveries in the United States. In the face of budgetary pressures, several state Medicaid programs have implemented or are considering implementing episode-based payments for perinatal care. Under the episode-based payment model, Medicaid programs make a single payment for all pregnancy-related medical services provided to women with low- and medium-risk pregnancies from 40 weeks before delivery through 60 days postpartum. The health care provider who delivers a live birth is assigned responsibility for all care and must meet certain quality metrics and stay within delineated cost-per-episode parameters. Implementation of cost- and quality-dependent episode-based payments for perinatal care is notable because there is no published evidence about the effects of such initiatives on pregnancy or birth outcomes. In this article, we highlight challenges and potential adverse consequences related to defining the perinatal episode and assigning a responsible health care provider. We also describe concerns that perinatal care quality metrics may not address the most pressing health care issues that are likely to improve health outcomes and reduce costs. In their current incarnations, Medicaid programs' episode-based payments for perinatal care may not improve perinatal care delivery and subsequent health outcomes. Rigorous evaluation of the new episode-based payment initiatives is critically needed to inform policymakers about the intended and unintended consequences of implementing episode-based payments for perinatal care.

  12. Development and application of a soil organic matter-based soil quality index in mineralized terrane of the Western US

    Treesearch

    S. W. Blecker; L. L. Stillings; M. C. Amacher; J. A. Ippolito; N. M. DeCrappeo

    2012-01-01

    Soil quality indices provide a means of distilling large amounts of data into a single metric that evaluates the soil's ability to carry out key ecosystem functions. Primarily developed in agroecosystems, then forested ecosystems, an index using the relation between soil organic matter and other key soil properties in more semi-arid systems of the Western US...

  13. WATER QUALITY VULNERABILITY IN THE OZARKS USING LANDSCAPE ECOLOGY METRICS: UPPER WHITE RIVER BROWSER (V2.0)

    EPA Science Inventory

    The principal focus of this project is the mapping and interpretation of landscape scale (i.e., broad scale) ecological metrics among contributing watersheds of the Upper White River, and the development of geospatial models of water quality vulnerability for several suspected no...

  14. Performance evaluation of objective quality metrics for HDR image compression

    NASA Astrophysics Data System (ADS)

    Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic

    2014-09-01

    Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists of computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values, but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim to provide a better understanding of the limits and potential of this approach by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
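
    A minimal sketch of the simpler approach described above: computing PSNR on perceptually encoded HDR luminance. The log-based encoding, the display range, and the test data below are illustrative stand-ins, not the PU encoding or the images used in the study.

        import numpy as np

        def encode_luminance(lum_cd_m2):
            """Illustrative perceptual encoding of absolute luminance (cd/m^2).

            A log10 mapping is used as a stand-in for a true perceptually
            uniform (PU) encoding; real studies would use the published PU curve.
            """
            lum = np.clip(lum_cd_m2, 0.005, 10000.0)      # assumed HDR display range
            lo, hi = np.log10(0.005), np.log10(10000.0)
            return 255.0 * (np.log10(lum) - lo) / (hi - lo)   # rescale to [0, 255]

        def psnr(reference, test, peak=255.0):
            """PSNR between two images of equal shape."""
            mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
            return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        # usage: compare an HDR luminance map with its compressed version (placeholder data)
        original = np.random.uniform(0.01, 5000.0, size=(512, 512))
        compressed = original * np.random.normal(1.0, 0.02, size=original.shape)
        score = psnr(encode_luminance(original), encode_luminance(compressed))
        print(f"PSNR on perceptually encoded luminance: {score:.1f} dB")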

  15. Weighing the impact (factor) of publishing in veterinary journals.

    PubMed

    Christopher, Mary M

    2015-06-01

    The journal in which you publish your research can have a major influence on the perceived value of your work and on your ability to reach certain audiences. The impact factor, a widely used metric of journal quality and prestige, has evolved into a benchmark of quality for institutions and graduate programs and, inappropriately, as a proxy for the quality of individual authors and articles, affecting tenure, promotion, and funding decisions. As a result, despite its many limitations, publishing decisions by authors often are based solely on a journal's impact factor. This can disadvantage journals in small disciplines, such as veterinary medicine, and limit the ability of authors to reach key audiences. In this article, factors that can influence the impact factor of a journal and its applicability, including precision, citation practices, article type, editorial policies, and size of the research community will be reviewed. The value and importance of veterinary journals such as the Journal of Veterinary Cardiology for reaching relevant audiences and for helping shape disciplinary specialties and influence clinical practice will also be discussed. Lastly, the efforts underway to develop alternative measures to assess the scientific quality of individual authors and articles, such as article-level metrics, as well as institutional measures of the economic and social impact of biomedical research will be considered. Judicious use of the impact factor and the implementation of new metrics for assessing the quality and societal relevance of veterinary research articles will benefit both authors and journals. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and to objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis tool kit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method to generate a modified set of PCA components as compared to the standard principal component analysis (PCA) with sparse loadings in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.

  17. Center to Advance Palliative Care palliative care clinical care and customer satisfaction metrics consensus recommendations.

    PubMed

    Weissman, David E; Morrison, R Sean; Meier, Diane E

    2010-02-01

    Data collection and analysis are vital for strategic planning, quality improvement, and demonstration of palliative care program impact to hospital administrators, private funders and policymakers. Since 2000, the Center to Advance Palliative Care (CAPC) has provided technical assistance to hospitals, health systems and hospices working to start, sustain, and grow nonhospice palliative care programs. CAPC convened a consensus panel in 2008 to develop recommendations for specific clinical and customer metrics that programs should track. The panel agreed on four key domains of clinical metrics and two domains of customer metrics. Clinical metrics include: daily assessment of physical/psychological/spiritual symptoms by a symptom assessment tool; establishment of patient-centered goals of care; support to patient/family caregivers; and management of transitions across care sites. For customer metrics, consensus was reached on two domains that should be tracked to assess satisfaction: patient/family satisfaction, and referring clinician satisfaction. In an effort to ensure access to reliably high-quality palliative care data throughout the nation, hospital palliative care programs are encouraged to collect and report outcomes for each of the metric domains described here.

  18. The role of complexity metrics in a multi-institutional dosimetry audit of VMAT

    PubMed Central

    Agnew, Christina E; Hussein, Mohammad; Tsang, Yatman; McWilliam, Alan; Hounsell, Alan R; Clark, Catharine H

    2016-01-01

    Objective: To demonstrate the benefit of complexity metrics such as the modulation complexity score (MCS) and monitor units (MUs) in multi-institutional audits of volumetric-modulated arc therapy (VMAT) delivery. Methods: 39 VMAT treatment plans were analysed using MCS and MU. A virtual phantom planning exercise was planned and independently measured using the PTW Octavius® phantom and seven29® 2D array (PTW-Freiburg GmbH, Freiburg, Germany). MCS and MU were compared with the median gamma index pass rates (2%/2 and 3%/3 mm) and plan quality. The treatment planning systems (TPS) were grouped by VMAT modelling being specifically designed for the linear accelerator manufacturer's own treatment delivery system (Type 1) or independent of vendor for VMAT delivery (Type 2). Differences in plan complexity (MCS and MU) between TPS types were compared. Results: For Varian® linear accelerators (Varian® Medical Systems, Inc., Palo Alto, CA), MCS and MU were significantly correlated with gamma pass rates. Type 2 TPS created poorer quality, more complex plans with significantly higher MUs and MCS than Type 1 TPS. Plan quality was significantly correlated with MU for Type 2 plans. A statistically significant correlation was observed between MU and MCS for all plans (R = −0.84, p < 0.01). Conclusion: MU and MCS have a role in assessing plan complexity in audits along with plan quality metrics. Plan complexity metrics give some indication of plan deliverability but should be analysed with plan quality. Advances in knowledge: Complexity metrics were investigated for a national rotational audit involving 34 institutions and they showed value. The metrics found that more complex plans were created for planning systems which were independent of vendor for VMAT delivery. PMID:26511276

  19. Investigation of Two Models to Set and Evaluate Quality Targets for HbA1c: Biological Variation and Sigma-metrics

    PubMed Central

    Weykamp, Cas; John, Garry; Gillery, Philippe; English, Emma; Ji, Linong; Lenters-Westra, Erna; Little, Randie R.; Roglic, Gojka; Sacks, David B.; Takei, Izumi

    2016-01-01

    Background A major objective of the IFCC Task Force on implementation of HbA1c standardization is to develop a model to define quality targets for HbA1c. Methods Two generic models, the Biological Variation and Sigma-metrics model, are investigated. Variables in the models were selected for HbA1c and data of EQA/PT programs were used to evaluate the suitability of the models to set and evaluate quality targets within and between laboratories. Results In the biological variation model 48% of individual laboratories and none of the 26 instrument groups met the minimum performance criterion. In the Sigma-metrics model, with a total allowable error (TAE) set at 5 mmol/mol (0.46% NGSP) 77% of the individual laboratories and 12 of 26 instrument groups met the 2 sigma criterion. Conclusion The Biological Variation and Sigma-metrics model were demonstrated to be suitable for setting and evaluating quality targets within and between laboratories. The Sigma-metrics model is more flexible as both the TAE and the risk of failure can be adjusted to requirements related to e.g. use for diagnosis/monitoring or requirements of (inter)national authorities. With the aim of reaching international consensus on advice regarding quality targets for HbA1c, the Task Force suggests the Sigma-metrics model as the model of choice with default values of 5 mmol/mol (0.46%) for TAE, and risk levels of 2 and 4 sigma for routine laboratories and laboratories performing clinical trials, respectively. These goals should serve as a starting point for discussion with international stakeholders in the field of diabetes. PMID:25737535
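
    As an illustration of the Sigma-metrics model, a small sketch of the sigma calculation on the mmol/mol scale using the suggested default TAE of 5 mmol/mol; the bias and imprecision values below are hypothetical, not taken from the EQA/PT data.

        def sigma_metric(tae, bias, sd):
            """Sigma-metric = (allowable total error - |bias|) / imprecision (SD),
            all expressed in the same units (here mmol/mol HbA1c)."""
            return (tae - abs(bias)) / sd

        # hypothetical laboratory performance at a medical decision level
        tae = 5.0    # default allowable total error suggested by the Task Force (mmol/mol)
        bias = 1.2   # observed bias vs. the reference value (mmol/mol), illustrative
        sd = 1.5     # observed within-laboratory SD (mmol/mol), illustrative

        sigma = sigma_metric(tae, bias, sd)
        print(f"sigma = {sigma:.2f}")
        print("meets 2-sigma routine criterion" if sigma >= 2 else "below 2-sigma criterion")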

  20. The role of complexity metrics in a multi-institutional dosimetry audit of VMAT.

    PubMed

    McGarry, Conor K; Agnew, Christina E; Hussein, Mohammad; Tsang, Yatman; McWilliam, Alan; Hounsell, Alan R; Clark, Catharine H

    2016-01-01

    To demonstrate the benefit of complexity metrics such as the modulation complexity score (MCS) and monitor units (MUs) in multi-institutional audits of volumetric-modulated arc therapy (VMAT) delivery. 39 VMAT treatment plans were analysed using MCS and MU. A virtual phantom planning exercise was planned and independently measured using the PTW Octavius(®) phantom and seven29(®) 2D array (PTW-Freiburg GmbH, Freiburg, Germany). MCS and MU were compared with the median gamma index pass rates (2%/2 and 3%/3 mm) and plan quality. The treatment planning systems (TPS) were grouped by VMAT modelling being specifically designed for the linear accelerator manufacturer's own treatment delivery system (Type 1) or independent of vendor for VMAT delivery (Type 2). Differences in plan complexity (MCS and MU) between TPS types were compared. For Varian(®) linear accelerators (Varian(®) Medical Systems, Inc., Palo Alto, CA), MCS and MU were significantly correlated with gamma pass rates. Type 2 TPS created poorer quality, more complex plans with significantly higher MUs and MCS than Type 1 TPS. Plan quality was significantly correlated with MU for Type 2 plans. A statistically significant correlation was observed between MU and MCS for all plans (R = -0.84, p < 0.01). MU and MCS have a role in assessing plan complexity in audits along with plan quality metrics. Plan complexity metrics give some indication of plan deliverability but should be analysed with plan quality. Complexity metrics were investigated for a national rotational audit involving 34 institutions and they showed value. The metrics found that more complex plans were created for planning systems which were independent of vendor for VMAT delivery.

  1. On the performance of metrics to predict quality in point cloud representations

    NASA Astrophysics Data System (ADS)

    Alexiou, Evangelos; Ebrahimi, Touradj

    2017-09-01

    Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.

  2. The role of metrics and measurements in a software intensive total quality management environment

    NASA Technical Reports Server (NTRS)

    Daniels, Charles B.

    1992-01-01

    Paramax Space Systems began its mission as a member of the Rockwell Space Operations Company (RSOC) team which was the successful bidder on a massive operations consolidation contract for the Mission Operations Directorate (MOD) at JSC. The contract awarded to the team was the Space Transportation System Operations Contract (STSOC). Our initial challenge was to accept responsibility for a very large, highly complex and fragmented collection of software from eleven different contractors and transform it into a coherent, operational baseline. Concurrently, we had to integrate a diverse group of people from eleven different companies into a single, cohesive team. Paramax executives recognized the absolute necessity to develop a business culture based on the concept of employee involvement to execute and improve the complex process of our new environment. Our executives clearly understood that management needed to set the example and lead the way to quality improvement. The total quality management policy and the metrics used in this endeavor are presented.

  3. Roles for specialty societies and vascular surgeons in accountable care organizations

    PubMed Central

    Goodney, Philip P.; Fisher, Elliott S.; Cambria, Richard P.

    2012-01-01

    With the passage of the Affordable Care Act, accountable care organizations (ACOs) represent a new paradigm in healthcare payment reform. Designed to limit growth in spending while preserving quality, these organizations aim to incentivize physicians to lower costs by returning a portion of the savings realized by cost-effective, evidence-based care back to the ACO. In this review, first, we will explore the development of ACOs within the context of prior attempts to control Medicare spending, such as the sustainable growth rate and managed care organizations. Second, we describe the evolution of ACOs, the demonstration projects that established their feasibility, and their current organizational structure. Third, because quality metrics are central to the use and implementation of ACOs, we describe current efforts to design, collect, and interpret quality metrics in vascular surgery. And fourth, because a “seat at the table” will be an important key to success for vascular surgeons in these efforts, we discuss how vascular surgeons can participate and lead efforts within ACOs. PMID:22370029

  4. Conceptual model of comprehensive research metrics for improved human health and environment.

    PubMed

    Engel-Cox, Jill A; Van Houten, Bennett; Phelps, Jerry; Rose, Shyanika W

    2008-05-01

    Federal, state, and private research agencies and organizations have faced increasing administrative and public demand for performance measurement. Historically, performance measurement predominantly consisted of near-term outputs measured through bibliometrics. The recent focus is on accountability for investment based on long-term outcomes. Developing measurable outcome-based metrics for research programs has been particularly challenging, because of difficulty linking research results to spatially and temporally distant outcomes. Our objective in this review is to build a logic model and associated metrics through which to measure the contribution of environmental health research programs to improvements in human health, the environment, and the economy. We used expert input and literature research on research impact assessment. With these sources, we developed a logic model that defines the components and linkages between extramural environmental health research grant programs and the outputs and outcomes related to health and social welfare, environmental quality and sustainability, economics, and quality of life. The logic model focuses on the environmental health research portfolio of the National Institute of Environmental Health Sciences (NIEHS) Division of Extramural Research and Training. The model delineates pathways for contributions by five types of institutional partners in the research process: NIEHS, other government (federal, state, and local) agencies, grantee institutions, business and industry, and community partners. The model is being applied to specific NIEHS research applications and the broader research community. We briefly discuss two examples and discuss the strengths and limits of outcome-based evaluation of research programs.

  5. Improvement of Reliability of Diffusion Tensor Metrics in Thigh Skeletal Muscles.

    PubMed

    Keller, Sarah; Chhabra, Avneesh; Ahmed, Shaheen; Kim, Anne C; Chia, Jonathan M; Yamamura, Jin; Wang, Zhiyue J

    2018-05-01

    Quantitative diffusion tensor imaging (DTI) of skeletal muscles is challenging due to the bias in DTI metrics, such as fractional anisotropy (FA) and mean diffusivity (MD), related to insufficient signal-to-noise ratio (SNR). This study compares the bias of DTI metrics in skeletal muscles via pixel-based and region-of-interest (ROI)-based analysis. DTI of the thigh muscles was conducted on a 3.0-T system in N = 11 volunteers using a fat-suppressed single-shot spin-echo echo planar imaging (SS SE-EPI) sequence with eight repetitions (number of signal averages (NSA) = 4 or 8 for each repeat). The SNR was calculated for different NSAs and estimated for the composite images combining all data (effective NSA = 48) as the standard reference. The bias of MD and FA derived by pixel-based and ROI-based quantification was compared at different NSAs. An "intra-ROI diffusion direction dispersion angle (IRDDDA)" was calculated to assess the uniformity of diffusion within the ROI. Using our standard reference image with NSA = 48, the ROI-based and pixel-based measurements agreed for FA and MD. Larger disagreements were observed for the pixel-based quantification at NSA = 4. MD was less sensitive than FA to the noise level. The IRDDDA decreased with higher NSA. At NSA = 4, ROI-based FA showed a lower average bias (0.9% vs. 37.4%) and narrower 95% limits of agreement compared to the pixel-based method. The ROI-based estimation of FA is less prone to bias than the pixel-based estimations when SNR is low. The IRDDDA can be applied as a quantitative quality measure to assess reliability of ROI-based DTI metrics. Copyright © 2018 Elsevier B.V. All rights reserved.
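
    For reference, MD and FA are computed from the eigenvalues of the diffusion tensor; the sketch below uses the standard formulas with illustrative eigenvalues (not values from this study).

        import numpy as np

        def dti_metrics(eigenvalues):
            """Mean diffusivity (MD) and fractional anisotropy (FA) from the three
            eigenvalues of a diffusion tensor (units: mm^2/s)."""
            lam = np.asarray(eigenvalues, dtype=float)
            md = lam.mean()
            fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
            return md, fa

        # illustrative eigenvalues for a muscle voxel (not taken from the study)
        md, fa = dti_metrics([2.1e-3, 1.6e-3, 1.4e-3])
        print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")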

  6. Simplified process model discovery based on role-oriented genetic mining.

    PubMed

    Zhao, Weidong; Liu, Xi; Dai, Weihui

    2014-01-01

    Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, the existing role-oriented process mining methods focus on correctness and integrity of roles while ignoring role complexity of the process model, which directly impacts understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine the simplified process model. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments, comparing with related studies, to show that the proposed method is more effective for streamlining the process.

  7. A lighting metric for quantitative evaluation of accent lighting systems

    NASA Astrophysics Data System (ADS)

    Acholo, Cyril O.; Connor, Kenneth A.; Radke, Richard J.

    2014-09-01

    Accent lighting is critical for artwork and sculpture lighting in museums, and subject lighting for stage, film, and television. The research problem of designing effective lighting in such settings has been revived recently with the rise of light-emitting-diode-based solid state lighting. In this work, we propose an easy-to-apply quantitative measure of the scene's visual quality as perceived by human viewers. We consider a well-accent-lit scene as one which maximizes the information about the scene (in an information-theoretic sense) available to the user. We propose a metric based on the entropy of the distribution of colors, which are extracted from an image of the scene from the viewer's perspective. We demonstrate that optimizing the metric as a function of illumination configuration (i.e., position, orientation, and spectral composition) results in natural, pleasing accent lighting. We use a photorealistic simulation tool to validate the functionality of our proposed approach, showing its successful application to two- and three-dimensional scenes.
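
    A minimal sketch of an entropy-of-color-distribution metric of the kind described above, assuming an 8-bit RGB image of the scene from the viewer's perspective; the quantization level and the placeholder image are assumptions, not the authors' implementation.

        import numpy as np

        def color_entropy(image_rgb, bins_per_channel=8):
            """Shannon entropy (bits) of the distribution of quantized RGB colors
            in an image with values in [0, 255] and shape (H, W, 3)."""
            quantized = (image_rgb // (256 // bins_per_channel)).astype(int)
            # map each pixel's (r, g, b) bin triple to a single histogram index
            codes = (quantized[..., 0] * bins_per_channel + quantized[..., 1]) \
                    * bins_per_channel + quantized[..., 2]
            counts = np.bincount(codes.ravel(), minlength=bins_per_channel ** 3)
            p = counts[counts > 0] / counts.sum()
            return float(-np.sum(p * np.log2(p)))

        # usage: score a rendered scene under a candidate lighting configuration;
        # a higher entropy indicates a richer color distribution reaching the viewer
        scene = np.random.randint(0, 256, size=(480, 640, 3))   # placeholder image
        print(f"color entropy: {color_entropy(scene):.2f} bits")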

  8. A comprehensive quality control workflow for paired tumor-normal NGS experiments.

    PubMed

    Schroeder, Christopher M; Hilke, Franz J; Löffler, Markus W; Bitzer, Michael; Lenz, Florian; Sturm, Marc

    2017-06-01

    Quality control (QC) is an important part of all NGS data analysis stages. Many available tools calculate QC metrics from different analysis steps of single sample experiments (raw reads, mapped reads and variant lists). Multi-sample experiments, such as sequencing of tumor-normal pairs, require additional QC metrics to ensure validity of results. These multi-sample QC metrics still lack standardization. We therefore suggest a new workflow for QC of DNA sequencing of tumor-normal pairs. With this workflow, well-known single-sample QC metrics and additional metrics specific for tumor-normal pairs can be calculated. The segmentation into different tools offers a high flexibility and allows reuse for other purposes. All tools produce qcML, a generic XML format for QC of -omics experiments. qcML uses quality metrics defined in an ontology, which was adapted for NGS. All QC tools are implemented in C++ and run under both Linux and Windows. Plotting requires python 2.7 and matplotlib. The software is available under the 'GNU General Public License version 2' as part of the ngs-bits project: https://github.com/imgag/ngs-bits. christopher.schroeder@med.uni-tuebingen.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  9. Toward automated assessment of health Web page quality using the DISCERN instrument.

    PubMed

    Allam, Ahmed; Schulz, Peter J; Krauthammer, Michael

    2017-05-01

    As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers. The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms. Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria. First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers. Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN classifiers. The code for the probabilistic consensus model is available at https://bitbucket.org/A_2/em_dawid/ . © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  10. NEW CATEGORICAL METRICS FOR AIR QUALITY MODEL EVALUATION

    EPA Science Inventory

    Traditional categorical metrics used in model evaluations are "clear-cut" measures in that the model's ability to predict an exceedance is defined by a fixed threshold concentration and the metrics are defined by observation-forecast sets that are paired both in space and time. T...

  11. Daily Management System of the Henry Ford Production System: QTIPS to Focus Continuous Improvements at the Level of the Work.

    PubMed

    Zarbo, Richard J; Varney, Ruan C; Copeland, Jacqueline R; D'Angelo, Rita; Sharma, Gaurav

    2015-07-01

    To support our Lean culture of continuous improvement, we implemented a daily management system designed so critical metrics of operational success were the focus of local teams to drive improvements. We innovated a standardized visual daily management board composed of metric categories of Quality, Time, Inventory, Productivity, and Safety (QTIPS); frequency trending; root cause analysis; corrective/preventive actions; and resulting process improvements. In 1 year (June 2013 to July 2014), eight laboratory sections at Henry Ford Hospital employed 64 unique daily metrics. Most metrics were assessed long-term (>6 months) and monitored process stability, while short-term metrics (1-6 months) were retired after successful targeted problem resolution. Daily monitoring resulted in 42 process improvements. Daily management is the key business accountability subsystem that enabled our culture of continuous improvement to function more efficiently at the managerial level in a visible manner by reviewing and acting based on data and root cause analysis. Copyright © by the American Society for Clinical Pathology.

  12. VHA mental health information system: applying health information technology to monitor and facilitate implementation of VHA Uniform Mental Health Services Handbook requirements.

    PubMed

    Trafton, Jodie A; Greenberg, Greg; Harris, Alex H S; Tavakoli, Sara; Kearney, Lisa; McCarthy, John; Blow, Fredric; Hoff, Rani; Schohn, Mary

    2013-03-01

    To describe the design and deployment of health information technology to support implementation of mental health services policy requirements in the Veterans Health Administration (VHA). Using administrative and self-report survey data, we developed and fielded metrics regarding implementation of the requirements delineated in the VHA Uniform Mental Health Services Handbook. Finalized metrics were incorporated into 2 external facilitation-based quality improvement programs led by the VHA Mental Health Operations. To support these programs, tailored site-specific reports were generated. Metric development required close collaboration between program evaluators, policy makers and clinical leadership, and consideration of policy language and intent. Electronic reports supporting different purposes required distinct formatting and presentation features, despite their having similar general goals and using the same metrics. Health information technology can facilitate mental health policy implementation but must be integrated into a process of consensus building and close collaboration with policy makers, evaluators, and practitioners.

  13. DOE JGI Quality Metrics; Approaches to Scaling and Improving Metagenome Assembly (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Copeland, Alex; Brown, C. Titus

    2011-10-13

    DOE JGI's Alex Copeland on "DOE JGI Quality Metrics" and Michigan State University's C. Titus Brown on "Approaches to Scaling and Improving Metagenome Assembly" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  14. 77 FR 28901 - Amended Certification Regarding Eligibility To Apply for Worker Adjustment Assistance; Lexis...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-16

    ... Elsevier, Quality & Metrics Department, Including Employees Located Throughout the United States Who Report to Miamisburg, OH; Lexis Nexis, a Subsidiary of Reed Elsevier, Quality & Metrics Department... Elsevier. The amended notice applicable to TA-W-80,205 and TA-W-80205A is hereby issued as follows: All...

  15. DOE JGI Quality Metrics; Approaches to Scaling and Improving Metagenome Assembly (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema

    Copeland, Alex; Brown, C. Titus

    2018-04-27

    DOE JGI's Alex Copeland on "DOE JGI Quality Metrics" and Michigan State University's C. Titus Brown on "Approaches to Scaling and Improving Metagenome Assembly" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  16. SU-F-T-231: Improving the Efficiency of a Radiotherapy Peer-Review System for Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, S; Basavatia, A; Garg, M

    Purpose: To improve the efficiency of a radiotherapy peer-review system using a commercially available software application for plan quality evaluation and documentation. Methods: A commercial application, FullAccess (Radialogica LLC, Version 1.4.4), was implemented in a Citrix platform for the peer-review process and patient documentation. This application can display images, isodose lines, and dose-volume histograms and create plan reports for the peer-review process. Dose metrics in the report can also be benchmarked for plan quality evaluation. Site-specific templates were generated based on departmental treatment planning policies and procedures for each disease site, which generally follow RTOG protocols as well as published prospective clinical trial data, including both conventional fractionation and hypo-fractionation schema. Once a plan is ready for review, the planner exports the plan to FullAccess, applies the site-specific template, and presents the report for plan review. The plan is still reviewed in the treatment planning system, as that is the legal record. Upon physician’s approval of a plan, the plan is packaged for peer review with the plan report and dose metrics are saved to the database. Results: The reports show dose metrics of PTVs and critical organs for the plans and also indicate whether or not the metrics are within tolerance. Graphical results with green, yellow, and red lights display whether planning objectives have been met. In addition, benchmarking statistics are collected to see where the current plan falls compared to all historical plans on each metric. All physicians in peer review can easily verify constraints by these reports. Conclusion: We have demonstrated the improvement in a radiotherapy peer-review system, which allows physicians to easily verify planning constraints for different disease sites and fractionation schema, allows for standardization in the clinic to ensure that departmental policies are maintained, and builds a comprehensive database for potential clinical outcome evaluation.

  17. Steganalysis for Audio Data

    DTIC Science & Technology

    2006-03-31

    from existing image steganography and steganalysis techniques, the overall objective of Task (b) is to design and implement audio steganography in...general design of the VoIP steganography algorithm is based on known LSB hiding techniques (used for example in StegHide (http...system. Nasir Memon et al. described a steganalyzer based on image quality metrics [AMS03]. Basically, the main idea to detect steganography by

  18. Setting quality and safety priorities in a target-rich environment: an academic medical center's challenge.

    PubMed

    Mort, Elizabeth A; Demehin, Akinluwa A; Marple, Keith B; McCullough, Kathryn Y; Meyer, Gregg S

    2013-08-01

    Hospitals are continually challenged to provide safer and higher-quality patient care despite resource constraints. With an ever-increasing range of quality and safety targets at the national, state, and local levels, prioritization is crucial in effective institutional quality goal setting and resource allocation. Organizational goal-setting theory is a performance improvement methodology with strong results across many industries. The authors describe a structured goal-setting process they have established at Massachusetts General Hospital for setting annual institutional quality and safety goals. Begun in 2008, this process has been conducted on an annual basis. Quality and safety data are gathered from many sources, both internal and external to the hospital. These data are collated and classified, and multiple approaches are used to identify the most pressing quality issues facing the institution. The conclusions are subject to stringent internal review, and then the top quality goals of the institution are chosen. Specific tactical initiatives and executive owners are assigned to each goal, and metrics are selected to track performance. A reporting tool based on these tactics and metrics is used to deliver progress updates to senior hospital leadership. The hospital has experienced excellent results and strong organizational buy-in using this effective, low-cost, and replicable goal-setting process. It has led to improvements in structural, process, and outcomes aspects of quality.

  19. Indicators and metrics for the assessment of climate engineering

    NASA Astrophysics Data System (ADS)

    Oschlies, A.; Held, H.; Keller, D.; Keller, K.; Mengis, N.; Quaas, M.; Rickels, W.; Schmidt, H.

    2017-01-01

    Selecting appropriate indicators is essential to aggregate the information provided by climate model outputs into a manageable set of relevant metrics on which assessments of climate engineering (CE) can be based. From all the variables potentially available from climate models, indicators need to be selected that are able to inform scientists and society on the development of the Earth system under CE, as well as on possible impacts and side effects of various ways of deploying CE or not. However, the indicators used so far have been largely identical to those used in climate change assessments and do not visibly reflect the fact that indicators for assessing CE (and thus the metrics composed of these indicators) may be different from those used to assess global warming. Until now, there has been little dedicated effort to identifying specific indicators and metrics for assessing CE. We here propose that such an effort should be facilitated by a more decision-oriented approach and an iterative procedure in close interaction between academia, decision makers, and stakeholders. Specifically, synergies and trade-offs between social objectives reflected by individual indicators, as well as decision-relevant uncertainties should be considered in the development of metrics, so that society can take informed decisions about climate policy measures under the impression of the options available, their likely effects and side effects, and the quality of the underlying knowledge base.

  20. Tools for monitoring system suitability in LC MS/MS centric proteomic experiments.

    PubMed

    Bereman, Michael S

    2015-03-01

    With advances in liquid chromatography coupled to tandem mass spectrometry technologies combined with the continued goals of biomarker discovery, clinical applications of established biomarkers, and integrating large multiomic datasets (i.e. "big data"), there remains an urgent need for robust tools to assess instrument performance (i.e. system suitability) in proteomic workflows. To this end, several freely available tools have been introduced that monitor a number of peptide identification (ID) and/or peptide ID free metrics. Peptide ID metrics include numbers of proteins, peptides, or peptide spectral matches identified from a complex mixture. Peptide ID free metrics include retention time reproducibility, full width half maximum, ion injection times, and integrated peptide intensities. The main driving force in the development of these tools is to monitor both intra- and interexperiment performance variability and to identify sources of variation. The purpose of this review is to summarize and evaluate these tools based on versatility, automation, vendor neutrality, metrics monitored, and visualization capabilities. In addition, the implementation of a robust system suitability workflow is discussed in terms of metrics, type of standard, and frequency of evaluation along with the obstacles to overcome prior to incorporating a more proactive approach to overall quality control in liquid chromatography coupled to tandem mass spectrometry based proteomic workflows. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Can Technology Improve the Quality of Colonoscopy?

    PubMed

    Thirumurthi, Selvi; Ross, William A; Raju, Gottumukkala S

    2016-07-01

    In order for screening colonoscopy to be an effective tool in reducing colon cancer incidence, exams must be performed in a high-quality manner. Quality metrics have been presented by gastroenterology societies and now include higher adenoma detection rate targets than in the past. In many cases, the quality of colonoscopy can often be improved with simple low-cost interventions such as improved procedure technique, implementing split-dose bowel prep, and monitoring individuals' performances. Emerging technology has expanded our field of view and image quality during colonoscopy. We will critically review several technological advances in the context of quality metrics and discuss if technology can really improve the quality of colonoscopy.

  2. Correlation of the clinical and physical image quality in chest radiography for average adults with a computed radiography imaging system.

    PubMed

    Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R

    2013-07-01

    The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson's correlation coefficient. Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography.
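
    A small sketch of the physical metric and correlation analysis described above: CNR from target and background regions of a phantom image, followed by Pearson's correlation against visual grading scores. The ROI definition and the numbers below are illustrative, not the study's data.

        import numpy as np
        from scipy import stats

        def contrast_to_noise_ratio(roi_target, roi_background):
            """CNR = |mean(target) - mean(background)| / SD(background)."""
            return abs(np.mean(roi_target) - np.mean(roi_background)) / np.std(roi_background)

        # illustrative per-tube-voltage results (not the study's data)
        cnr = np.array([6.1, 5.4, 4.9, 4.3, 3.8, 3.5])     # phantom CNR, falling with kV
        vgas = np.array([3.4, 3.1, 2.9, 2.6, 2.4, 2.3])    # mean visual grading score

        r, p = stats.pearsonr(cnr, vgas)
        print(f"Pearson r = {r:.2f}, p = {p:.3f}")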

  3. Agile Software Development in Defense Acquisition: A Mission Assurance Perspective

    DTIC Science & Technology

    2012-03-23

    based information retrieval system, we might say that this program works like a hive of bees, going out for pollen and bringing it back to the hive...developers. Six Sigma® is registered in the U.S. Patent and Trademark Office by Motorola. Major Areas in a Typical Software...requirements - Capturing and evaluating quality metrics, identifying common problem areas. Despite its positive impact on quality, pair programming

  4. Supporting the analysis of ontology evolution processes through the combination of static and dynamic scaling functions in OQuaRE.

    PubMed

    Duque-Ramos, Astrid; Quesada-Martínez, Manuel; Iniesta-Moreno, Miguela; Fernández-Breis, Jesualdo Tomás; Stevens, Robert

    2016-10-17

    The biomedical community has now developed a significant number of ontologies. The curation of biomedical ontologies is a complex task and biomedical ontologies evolve rapidly, so new versions are regularly and frequently published in ontology repositories. This results in a high number of ontology versions over a short time span. Given this level of activity, ontology designers need to be supported in the effective management of the evolution of biomedical ontologies as the different changes may affect the engineering and quality of the ontology. This is why there is a need for methods that contribute to the analysis of the effects of changes and evolution of ontologies. In this paper we approach this issue from the ontology quality perspective. In previous work we have developed an ontology evaluation framework based on quantitative metrics, called OQuaRE. Here, OQuaRE is used as a core component in a method that enables the analysis of the different versions of biomedical ontologies using the quality dimensions included in OQuaRE. Moreover, we describe and use two scales for evaluating the changes between the versions of a given ontology. The first one is the static scale used in OQuaRE and the second one is a new, dynamic scale, based on the observed values of the quality metrics of a corpus defined by all the versions of a given ontology (life-cycle). In this work we explain how OQuaRE can be adapted for understanding the evolution of ontologies. Its use has been illustrated with the ontology of bioinformatics operations, types of data, formats, and topics (EDAM). The two scales included in OQuaRE provide complementary information about the evolution of the ontologies. The application of the static scale, which is the original OQuaRE scale, to the versions of the EDAM ontology reveals a design based on good ontological engineering principles. The application of the dynamic scale has enabled a more detailed analysis of the evolution of the ontology, measured through differences between versions. The statistics of change based on the OQuaRE quality scores make it possible to identify key versions where some changes in the engineering of the ontology triggered a change from the OQuaRE quality perspective. In the case of EDAM, this study allowed us to identify the fifth version of the ontology as the one with the largest impact on the quality metrics when comparative analyses between pairs of consecutive versions are performed.

  5. Interpolation of Water Quality Along Stream Networks from Synoptic Data

    NASA Astrophysics Data System (ADS)

    Lyon, S. W.; Seibert, J.; Lembo, A. J.; Walter, M. T.; Gburek, W. J.; Thongs, D.; Schneiderman, E.; Steenhuis, T. S.

    2005-12-01

    Effective catchment management requires water quality monitoring that identifies major pollutant sources and transport and transformation processes. While traditional monitoring schemes involve regular sampling at fixed locations in the stream, there is interest in synoptic or `snapshot' sampling to quantify water quality throughout a catchment. This type of sampling enables insights into biogeochemical behavior throughout a stream network at low flow conditions. Since baseflow concentrations are temporally persistent, they are indicative of the health of the ecosystems. A major problem with snapshot sampling is the lack of analytical techniques to represent the spatially distributed data in a manner that is 1) easily understood, 2) representative of the stream network, and 3) capable of being used to develop land management scenarios. This study presents a kriging application using the landscape composition of the contributing area along a stream network to define a new distance metric. This allows for locations that are more `similar' to stay spatially close together while less similar locations `move' further apart. We analyze a snapshot sampling campaign consisting of 125 manually collected grab samples during a summer recession flow period in the Townbrook Research Watershed. The watershed is located in the Catskill region of New York State and represents the mixed forest-agriculture land uses of the region. Our initial analysis indicated that stream nutrients (nitrogen and phosphorus) and chemical (major cations and anions) concentrations are controlled by the composition of landscape characteristics (landuse classes and soil types) surrounding the stream. Based on these relationships, an intuitively defined distance metric is developed by combining the traditional distance between observations and the relative difference in composition of contributing area. This metric is used to interpolate between the sampling locations with traditional geostatistic techniques (semivariograms and ordinary kriging). The resulting interpolations provide continuous stream nutrient and chemical concentrations with reduced kriging RMSE (i.e., the interpolation fits the actual data better) compared to interpolations performed without path restriction to the stream channel (i.e., the current default for most geostatistical packages) or performed with an in-channel, Euclidean distance metric (i.e., `as the fish swims' distance). In addition to being quantifiably better, the new metric also produces maps of stream concentrations that match expected continuous stream concentrations based on expert knowledge of the watershed. This analysis and its resulting stream concentration maps provide a representation of spatially distributed synoptic data that can be used to quantify water quality for more effective catchment management that focuses on pollutant sources and transport and transformation processes.
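
    A minimal sketch of the idea of blending spatial separation with contributing-area composition into a single distance for use in variogram fitting and kriging; the weighting scheme and the composition vectors below are assumptions, not the metric actually fitted in this study.

        import numpy as np

        def combined_distance(xy_a, xy_b, comp_a, comp_b, alpha=0.5, scale_m=1000.0):
            """Blend spatial separation with landscape-composition dissimilarity.

            xy_*   : (x, y) coordinates of two sampling points, in metres
            comp_* : fractions of land-use/soil classes in each contributing area
                     (vectors that sum to 1)
            alpha  : weight given to composition vs. spatial distance (assumed)
            """
            spatial = np.linalg.norm(np.asarray(xy_a) - np.asarray(xy_b)) / scale_m
            composition = 0.5 * np.sum(np.abs(np.asarray(comp_a) - np.asarray(comp_b)))
            return (1.0 - alpha) * spatial + alpha * composition

        # two hypothetical sampling sites: nearby, but draining very different landscapes
        d = combined_distance((5300.0, 2100.0), (5600.0, 2400.0),
                              (0.80, 0.15, 0.05),    # mostly forest
                              (0.20, 0.70, 0.10))    # mostly agriculture
        print(f"combined distance: {d:.3f}")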

  6. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery. Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.

  7. Six Sigma Quality Management System and Design of Risk-based Statistical Quality Control.

    PubMed

    Westgard, James O; Westgard, Sten A

    2017-03-01

    Six sigma concepts provide a quality management system (QMS) with many useful tools for managing quality in medical laboratories. This Six Sigma QMS is driven by the quality required for the intended use of a test. The most useful form for this quality requirement is the allowable total error. Calculation of a sigma-metric provides the best predictor of risk for an analytical examination process, as well as a design parameter for selecting the statistical quality control (SQC) procedure necessary to detect medically important errors. Simple point estimates of sigma at medical decision concentrations are sufficient for laboratory applications. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Relevance of motion-related assessment metrics in laparoscopic surgery.

    PubMed

    Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J

    2013-06-01

    Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation processes of basic psychomotor laparoscopic skills and their correlation with the different abilities sought to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight on the relevance of the results shown in this study.
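
    A small sketch of typical motion-related metrics of this kind (task time, path length, average speed, and a jerk-based smoothness measure) computed from a sampled 3D instrument-tip trajectory; the formulas are common choices and not necessarily the exact TrEndo definitions.

        import numpy as np

        def motion_metrics(positions_mm, sample_rate_hz=30.0):
            """Basic motion analysis metrics for a tracked instrument tip.

            positions_mm : array of shape (T, 3), tip position per sample in mm.
            Returns task time (s), path length (mm), average speed (mm/s) and a
            jerk-based motion-smoothness value (lower magnitude = smoother).
            """
            pos = np.asarray(positions_mm, dtype=float)
            dt = 1.0 / sample_rate_hz
            task_time = (len(pos) - 1) * dt
            steps = np.diff(pos, axis=0)
            path_length = np.sum(np.linalg.norm(steps, axis=1))
            avg_speed = path_length / task_time
            jerk = np.diff(pos, n=3, axis=0) / dt ** 3           # third derivative
            smoothness = np.sqrt(np.mean(np.sum(jerk ** 2, axis=1)))
            return task_time, path_length, avg_speed, smoothness

        # usage with a synthetic trajectory (placeholder for real tracking data)
        t = np.linspace(0, 20, 600)
        trajectory = np.column_stack([40 * np.sin(t), 40 * np.cos(t), 5 * t])
        print(motion_metrics(trajectory))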

  9. Toward an ozone standard to protect vegetation based on effective dose: A review of deposition resistances and a possible metric

    Treesearch

    W. J. Massman

    2004-01-01

    Present air quality standards to protect vegetation from ozone are based on measured concentrations (i.e., exposure) rather than on plant uptake rates (or dose). Some familiar cumulative exposure-based indices include SUM06, AOT40, and W126. However, plant injury is more closely related to dose, or more appropriately to effective dose, than to exposure. This study...
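
    For reference, the cumulative exposure indices mentioned (SUM06, AOT40, W126) can be computed from hourly ozone concentrations as sketched below; the handling of daylight hours and accumulation periods is simplified, and the input values are illustrative.

        import numpy as np

        def ozone_exposure_indices(hourly_ppb):
            """SUM06 (ppm-h), AOT40 (ppb-h) and W126 (ppm-h) from hourly ozone.

            hourly_ppb : hourly ozone concentrations in ppb, assumed here to
            already be restricted to the daylight hours the indices require.
            """
            c_ppb = np.asarray(hourly_ppb, dtype=float)
            c_ppm = c_ppb / 1000.0
            sum06 = np.sum(c_ppm[c_ppm >= 0.06])                         # sum of hours >= 60 ppb
            aot40 = np.sum(np.maximum(c_ppb - 40.0, 0.0))                # excess over 40 ppb
            w126 = np.sum(c_ppm / (1.0 + 4403.0 * np.exp(-126.0 * c_ppm)))  # sigmoidal weighting
            return sum06, aot40, w126

        # illustrative week of daylight-hour values (ppb)
        hourly = np.random.uniform(20, 80, size=7 * 12)
        sum06, aot40, w126 = ozone_exposure_indices(hourly)
        print(f"SUM06={sum06:.1f} ppm-h, AOT40={aot40:.0f} ppb-h, W126={w126:.1f} ppm-h")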

  10. Algorithm-enabled exploration of image-quality potential of cone-beam CT in image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Pearson, Erik; Pelizzari, Charles; Al-Hallaq, Hania; Sidky, Emil Y.; Bian, Junguo; Pan, Xiaochuan

    2015-06-01

    A kilo-voltage (kV) cone-beam computed tomography (CBCT) unit mounted onto a linear accelerator treatment system, often referred to as an on-board imager (OBI), plays an increasingly important role in image-guided radiation therapy. While the FDK algorithm is currently used for reconstructing images from clinical OBI data, optimization-based reconstruction has also been investigated for OBI CBCT. An optimization-based reconstruction involves numerous parameters, which can significantly impact reconstruction properties (or utility). The success of an optimization-based reconstruction for a particular class of practical applications thus relies strongly on appropriate selection of parameter values. In this work, we focus on tailoring constrained-TV-minimization-based reconstruction, an optimization-based reconstruction previously shown to have some potential for CBCT imaging conditions of practical interest, to OBI imaging through appropriate selection of parameter values. In particular, for real phantom and patient data collected with OBI CBCT, we first devise utility metrics specific to OBI-quality-assurance tasks and then apply them to guiding the selection of parameter values in constrained-TV-minimization-based reconstruction. The study results show that the reconstructions improve on the clinical FDK reconstruction in both visual and quantitative assessments in terms of the devised utility metrics.
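
    As background for the optimization-based reconstruction discussed here, a small sketch of the isotropic total-variation seminorm that constrained-TV-minimization reconstructions minimize; this shows only the objective term, not the reconstruction algorithm or the parameter-selection procedure of the study.

        import numpy as np

        def total_variation(image):
            """Isotropic total-variation seminorm of a 2D image: the sum over pixels
            of the magnitude of the discrete spatial gradient."""
            dx = np.diff(image, axis=1)[:-1, :]   # horizontal differences
            dy = np.diff(image, axis=0)[:, :-1]   # vertical differences
            return float(np.sum(np.sqrt(dx ** 2 + dy ** 2)))

        # in a constrained-TV reconstruction, TV(image) is minimised subject to a
        # data-fidelity constraint ||A @ image.ravel() - measured_data|| <= epsilon,
        # where A is the system (projection) matrix; epsilon is one of the parameters
        # whose value must be tailored to the imaging task.
        phantom = np.zeros((128, 128))
        phantom[32:96, 32:96] = 1.0
        print(f"TV of piecewise-constant phantom: {total_variation(phantom):.1f}")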

  11. Photochemical model evaluation of the ground-level ozone impacts on ambient air quality and vegetation health in the Alberta oil sands region: Using present and future emission scenarios

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, Krish; Cho, Sunny; Morris, Ralph; Spink, David; Jung, Jaegun; Pauls, Ron; Duffett, Katherine

    2016-09-01

    One of the potential environmental issues associated with oil sands development is increased ozone formation resulting from NOX and volatile organic compound emissions from bitumen extraction, processing, and upgrading. To manage this issue in the Athabasca Oil Sands Region (AOSR) in northeast Alberta, a regional multi-stakeholder group, the Cumulative Environmental Management Association (CEMA), developed an Ozone Management Framework that includes a modelling-based assessment component. In this paper, we describe how the Community Multi-scale Air Quality (CMAQ) model was applied to assess potential ground-level ozone formation and impacts on ambient air quality and vegetation health for three different ozone precursor cases in the AOSR. Statistical analysis methods were applied, and the CMAQ performance results met the U.S. EPA model performance goal at all sites. The modelled 4th highest daily maximum 8-h average ozone concentrations in the base and two future year scenarios did not exceed the Canada-wide standard of 65 ppb or the newer Canadian Ambient Air Quality Standards of 63 ppb in 2015 and 62 ppb in 2020. Modelled maximum 1-h ozone concentrations in the study were well below the Alberta Ambient Air Quality Objective of 82 ppb in all three cases. Several ozone vegetation exposure metrics were also evaluated to investigate the potential impact of ground-level ozone on vegetation. The chronic 3-month SUM60 exposure metric is within the CEMA baseline range (0-2000 ppb-hr) everywhere in the AOSR. The AOT40 ozone exposure metric predicted by CMAQ did not exceed the United Nations Economic Commission for Europe (UN/ECE) threshold of concern of 3000 ppb-hr in any of the cases but is just below the threshold in the high-end future emissions scenario. In all three emission scenarios, the CMAQ-predicted W126 ozone exposure metric is within the CEMA baseline threshold of 4000 ppb-hr. This study illustrates the use of photochemical modelling of the impact of an industry (oil sands) on ground-level ozone as an air quality management tool in the AOSR. It allows an evaluation of the relationships between the pollutants emitted to the atmosphere and potential ground-level ozone concentrations throughout the AOSR, thereby extending the spatial coverage of the results beyond the monitoring network and also allowing an assessment of the potential impacts of possible future emission cases.
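
    The SUM60, AOT40, and W126 indices referred to above are all cumulative functions of hourly ozone concentrations. The minimal sketch below computes them from a daytime hourly series; the series is synthetic, and the thresholds and W126 weighting constants are the standard definitions of these indices, not values taken from this study.

        import numpy as np

        def ozone_exposure_metrics(hourly_ppb):
            # SUM60: cumulative sum of hourly values >= 60 ppb (ppb-hr)
            # AOT40: cumulative sum of exceedances above 40 ppb (ppb-hr)
            # W126 : sigmoidally weighted cumulative sum emphasizing high values (ppb-hr)
            c = np.asarray(hourly_ppb, dtype=float)
            sum60 = c[c >= 60.0].sum()
            aot40 = np.clip(c - 40.0, 0.0, None).sum()
            w126 = np.sum(c / (1.0 + 4403.0 * np.exp(-0.126 * c)))
            return sum60, aot40, w126

        # Synthetic 3-month daytime series (92 days x 12 daylight hours), values in ppb
        rng = np.random.default_rng(0)
        ozone = rng.gamma(shape=8.0, scale=5.0, size=92 * 12)
        print(ozone_exposure_metrics(ozone))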

  12. Automated selection of the optimal cardiac phase for single-beat coronary CT angiography reconstruction.

    PubMed

    Stassi, D; Dutta, S; Ma, H; Soderman, A; Pazzani, D; Gros, E; Okerlund, D; Schmidt, T G

    2016-01-01

    Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three readers using a five point Likert scale. There was no statistically significant difference between inter-reader and reader-algorithm agreement for either MAD or CCC metrics (p > 0.1). The algorithm phase was within 2% of the consensus phase in 15/21 of cases. The average absolute difference between consensus and algorithm best phases was 2.29% ± 2.47%, with a maximum difference of 8%. Average image quality scores for the algorithm chosen best phase were 4.01 ± 0.65 overall, 3.33 ± 1.27 for right coronary artery (RCA), 4.50 ± 0.35 for left anterior descending (LAD) artery, and 4.50 ± 0.35 for left circumflex artery (LCX). Average image quality scores for the consensus best phase were 4.11 ± 0.54 overall, 3.44 ± 1.03 for RCA, 4.39 ± 0.39 for LAD, and 4.50 ± 0.18 for LCX. There was no statistically significant difference (p > 0.1) between the image quality scores of the algorithm phase and the consensus phase. The proposed algorithm was statistically equivalent to a reader in selecting an optimal cardiac phase for CCTA exams. When reader and algorithm phases differed by >2%, image quality as rated by blinded readers was statistically equivalent. By detecting the optimal phase for CCTA reconstruction, the proposed algorithm is expected to improve coronary artery visualization in CCTA exams.

  13. Metric-driven harm: an exploration of unintended consequences of performance measurement.

    PubMed

    Rambur, Betty; Vallett, Carol; Cohen, Judith A; Tarule, Jill Mattuck

    2013-11-01

    Performance measurement is an increasingly common element of the US health care system. Although performance metrics are typically intended as proxies for high-quality outcomes, there has been little systematic investigation of their potential negative unintended consequences, including metric-driven harm. This case study details an incident of post-surgical metric-driven harm and offers Smith's 1995 work and a patient-centered, context-sensitive metric model for potential adoption by nurse researchers and clinicians. Implications for further research are discussed. © 2013.

  14. Variability in Lotic Communities in Three Contrasting Stream Environments in the Santa Ana River Basin, California, 1999-2001

    USGS Publications Warehouse

    Burton, Carmen A.

    2008-01-01

    Biotic communities and environmental conditions can be highly variable between natural ecosystems. The variability of natural assemblages should be considered in the interpretation of any ecological study when samples are either spatially or temporally distributed. Little is known about biotic variability in the Santa Ana River Basin. In this report, the lotic community and habitat assessment data from ecological studies done as part of the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) program are used for a preliminary assessment of variability in the Santa Ana Basin. Habitat was assessed, and benthic algae, benthic macroinvertebrate, and fish samples were collected at four sites during 1999-2001. Three of these sites were sampled all three years. One of these sites is located in the San Bernardino Mountains, and the other two sites are located in the alluvial basin. Analysis of variance determined that the three sites with multiyear data were significantly different for 41 benthic algae metrics and 65 macroinvertebrate metrics and fish communities. Coefficients of variation (CVs) were calculated for the habitat measurements, metrics of benthic algae, and macroinvertebrate data as measures of variability. Annual variability of habitat data was generally greater at the mountain site than at the basin sites. The mountain site had higher CVs for water temperature, depth, velocity, canopy angle, streambed substrate, and most water-quality variables. In general, CVs of most benthic algae metrics calculated from the richest-targeted habitat (RTH) samples were greater at the mountain site. In contrast, CVs of most benthic algae metrics calculated from depositional-targeted habitat (DTH) samples were lower at the mountain site. In general, CVs of macroinvertebrate metrics calculated from qualitative multihabitat (QMH) samples were lower at the mountain site. In contrast, CVs of many metrics calculated from RTH samples were greater at the mountain site than at one of the basin sites. Fish communities were more variable at the basin sites because more species were present at these sites. Annual variability of benthic algae metrics was related to annual variability in habitat variables. The CVs of benthic algae metrics related to the most CVs of habitat variables included QMH taxon richness, the RTH percentage richness, RTH abundance of tolerant taxa, RTH percentage richness of halophilic diatoms, RTH percentage abundance of sestonic diatoms, DTH percentage richness of nitrogen heterotrophic diatoms, and DTH pollution tolerance index. The CVs of macroinvertebrate metrics related to the most CVs of habitat variables included the RTH trichoptera, RTH EPT, RTH scraper richness, RTH nonchironomid dipteran abundance (in percent), and RTH EPA (U.S. Environmental Protection Agency) tolerance, which is based on abundance. Many of the CVs of habitat variables related to CVs of macroinvertebrate metrics were the same habitat variables that were related to the CVs of benthic algae metrics. On the basis of these results, annual variability may have a role in the relationship of benthic algae and macroinvertebrates assemblages with habitat and water quality in the Santa Ana Basin. This report provides valuable baseline data on the variability of biological communities in the Santa Ana Basin.

  15. A COMPARISON OF VECTOR AND RASTER GIS METHODS FOR CALCULATING LANDSCAPE METRICS USED IN ENVIRONMENTAL ASSESSMENTS

    EPA Science Inventory

    GIS-based measurements that combine native raster and native vector data are commonly used to assess environmental quality. Most of these measurements can be calculated using either raster or vector data formats and processing methods. Raster processes are more commonly used beca...

  16. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    NASA Astrophysics Data System (ADS)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.

  17. Quality of care received and patient-reported regret in prostate cancer: Analysis of a population-based prospective cohort.

    PubMed

    Holmes, Jordan A; Bensen, Jeannette T; Mohler, James L; Song, Lixin; Mishel, Merle H; Chen, Ronald C

    2017-01-01

    Meeting quality of care standards in oncology is recognized as important by physicians, professional organizations, and payers. Data from a population-based cohort of patients with prostate cancer were used to examine whether receipt of care was consistent with published consensus metrics and whether receiving high-quality care was associated with less patient-reported treatment decisional regret. Patients with incident prostate cancer were enrolled in collaboration with the North Carolina Central Cancer Registry, with an oversampling of minority patients. Medical record abstraction was used to determine whether participants received high-quality care based on 5 standards: 1) discussion of all treatment options; 2) complete workup (prostate-specific antigen, Gleason grade, and clinical stage); 3) low-risk participants did not undergo a bone scan; 4) high-risk participants treated with radiotherapy (RT) received androgen deprivation therapy; and 5) participants treated with RT received conformal or intensity-modulated RT. Treatment decisional regret was assessed using a validated instrument. A total of 804 participants were analyzed. Overall, 66% of African American and 73% of white participants received care that met all standards (P = .03); this racial difference was confirmed by multivariable analysis. Care that included "discussion of all treatment options" was found to be associated with less patient-reported regret on univariable analysis (P = .03) and multivariable analysis (odds ratio, 0.59; 95% confidence interval, 0.37-0.95). The majority of participants received high-quality care, but racial disparity existed. Participants who discussed all treatment options appeared to have less treatment decisional regret. To the authors' knowledge, this is the first study to demonstrate an association between a quality of care metric and patient-reported outcome. Cancer 2017;138-143. © 2016 American Cancer Society.

  18. SU-C-BRA-03: An Automated and Quick Contour Error Detection for Auto Segmentation in Online Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Ates, O; Li, X

    Purpose: To develop a tool that can quickly and automatically assess the quality of contours generated by auto-segmentation during online adaptive replanning. Methods: Because of the strict time requirements of online replanning and the lack of 'ground truth' contours in daily images, our method starts by assessing image registration accuracy, focusing on the surface of the organ in question. Several metrics tightly related to registration accuracy, including Jacobian maps, contour-shell deformation, and voxel-based root mean square (RMS) analysis, were computed. To identify correct contours, additional metrics and an adaptive decision tree are introduced. As a proof of principle, tests were performed with CT sets: planning and daily CTs acquired using a CT-on-rails during routine CT-guided RT delivery for 20 prostate cancer patients. The contours generated on the daily CTs using an auto-segmentation tool (ADMIRE, Elekta; MIM), based on deformable image registration of the planning CT and daily CT, were tested. Results: The deformed contours of the 20 patients, comprising a total of 60 structures, were manually checked as baselines; 49% of the contours were incorrect. To evaluate the quality of local deformation, the Jacobian determinant on the contours (1.047±0.045) was analyzed. In the analysis of the deformed rectum contour shells, a higher contour-error detection rate (0.41) was obtained compared with 0.32 for the manual check. All automated detections took less than 5 seconds. Conclusion: The proposed method can effectively detect contour errors at both micro and macro scales by evaluating multiple deformable registration metrics in a parallel computing process. Future work will focus on improving practicability and optimizing the calculation algorithms and metric selection.

  19. Air pollution exposure prediction approaches used in air pollution epidemiology studies.

    PubMed

    Özkaynak, Halûk; Baxter, Lisa K; Dionisio, Kathie L; Burke, Janet

    2013-01-01

    Epidemiological studies of the health effects of outdoor air pollution have traditionally relied upon surrogates of personal exposures, most commonly ambient concentration measurements from central-site monitors. However, this approach may introduce exposure prediction errors and misclassification of exposures for pollutants that are spatially heterogeneous, such as those associated with traffic emissions (e.g., carbon monoxide, elemental carbon, nitrogen oxides, and particulate matter). We review alternative air quality and human exposure metrics applied in recent air pollution health effect studies discussed during the International Society of Exposure Science 2011 conference in Baltimore, MD. Symposium presenters considered various alternative exposure metrics, including: central site or interpolated monitoring data, regional pollution levels predicted using the national scale Community Multiscale Air Quality model or from measurements combined with local-scale (AERMOD) air quality models, hybrid models that include satellite data, statistically blended modeling and measurement data, concentrations adjusted by home infiltration rates, and population-based human exposure model (Stochastic Human Exposure and Dose Simulation, and Air Pollutants Exposure models) predictions. These alternative exposure metrics were applied in epidemiological applications to health outcomes, including daily mortality and respiratory hospital admissions, daily hospital emergency department visits, daily myocardial infarctions, and daily adverse birth outcomes. This paper summarizes the research projects presented during the symposium, with full details of the work presented in individual papers in this journal issue.

  20. Analytical performance evaluation of a high-volume hematology laboratory utilizing sigma metrics as standard of excellence.

    PubMed

    Shaikh, M S; Moiz, B

    2016-04-01

    Around two-thirds of important clinical decisions about the management of patients are based on laboratory test results. Clinical laboratories are required to adopt quality control (QC) measures to ensure the provision of accurate and precise results. Six sigma is a statistical tool that provides an opportunity to assess performance at the highest level of excellence. The purpose of this study was to assess the performance of our hematological parameters on the sigma scale in order to identify gaps, and hence areas for improvement, in patient care. The twelve analytes included in the study were hemoglobin (Hb), hematocrit (Hct), red blood cell count (RBC), mean corpuscular volume (MCV), red cell distribution width (RDW), total leukocyte count (TLC) with percentages of neutrophils (Neutr%) and lymphocytes (Lymph%), platelet count (Plt), mean platelet volume (MPV), prothrombin time (PT), and fibrinogen (Fbg). Internal quality control data and external quality assurance survey results were used to calculate sigma metrics for each analyte. An acceptable sigma value of ≥3 was obtained for the majority of the analytes included in the analysis. MCV, Plt, and Fbg achieved values of <3 for the level 1 (low abnormal) control. PT performed poorly on both level 1 and level 2 controls, with sigma values of <3. Despite acceptable conventional QC tools, application of sigma metrics can identify analytical deficits and hence prospects for improvement in clinical laboratories. © 2016 John Wiley & Sons Ltd.
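
    The sigma metric underlying this kind of analysis is computed from the allowable total error, the observed bias, and the observed imprecision; a minimal worked sketch follows, with illustrative TEa, bias, and CV values rather than data from this study.

        def sigma_metric(tea_pct, bias_pct, cv_pct):
            # Six sigma metric: (TEa - |bias|) / CV, with all terms expressed in percent
            return (tea_pct - abs(bias_pct)) / cv_pct

        # Illustrative example for one hemoglobin control level (assumed values)
        tea = 7.0    # allowable total error (%), as set by an EQA provider
        bias = 1.2   # observed bias (%) versus the peer-group mean
        cv = 1.5     # observed imprecision (%) from internal QC data
        print(round(sigma_metric(tea, bias, cv), 2))  # -> 3.87, i.e. acceptable (>= 3)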

  1. Methods of Measurement the Quality Metrics in a Printing System

    NASA Astrophysics Data System (ADS)

    Varepo, L. G.; Brazhnikov, A. Yu; Nagornova, I. V.; Novoselskaya, O. A.

    2018-04-01

    One of the main criteria for choosing an ink as a component of a printing system is the scumming ability of the ink. The realization of an algorithm for estimating quality metrics in a printing system is shown. Histograms of ink rate for various printing systems are presented. A quantitative estimate of the emulsification stability of offset inks is given.

  2. Comparison of Online Survey Recruitment Platforms for Hard-to-Reach Pregnant Smoking Populations: Feasibility Study.

    PubMed

    Ibarra, Jose Luis; Agas, Jessica Marie; Lee, Melissa; Pan, Julia Lily; Buttenheim, Alison Meredith

    2018-04-16

    Recruiting hard-to-reach populations for health research is challenging. Web-based platforms offer one way to recruit specific samples for research purposes, but little is known about the feasibility of online recruitment and the representativeness and comparability of samples recruited through different Web-based platforms. The objectives of this study were to determine the feasibility of recruiting a hard-to-reach population (pregnant smokers) using 4 different Web-based platforms and to compare participants recruited through each platform. A screener and survey were distributed online through Qualtrics Panel, Soapbox Sample, Reddit, and Amazon Mechanical Turk (mTurk). Descriptive statistics were used to summarize results of each recruitment platform, including eligibility yield, quality yield, income, race, age, and gestational age. Of the 3847 participants screened for eligibility across all 4 Web-based platforms, 535 were eligible and 308 completed the survey. Amazon mTurk yielded the fewest completed responses (n=9), 100% (9/9) of which passed several quality metrics verifying pregnancy and smoking status. Qualtrics Panel yielded 14 completed responses, 86% (12/14) of which passed the quality screening. Soapbox Sample produced 107 completed surveys, 67% (72/107) of which were found to be quality responses. Advertising through Reddit produced the largest number of completed surveys (n=178), but only 29.2% (52/178) of those surveys passed the quality metrics. We found significant differences in eligibility yield, quality yield, age, number of previous pregnancies, age of smoking initiation, current smokers, race, education, and income (P<.001). Although each platform successfully recruited pregnant smokers, results varied in quality, cost, and percentage of complete responses. Moving forward, investigators should pay careful attention to the percentage yield and cost of online recruitment platforms to maximize internal and external validity. ©Jose Luis Ibarra, Jessica Marie Agas, Melissa Lee, Julia Lily Pan, Alison Meredith Buttenheim. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 16.04.2018.

  3. Pragmatic quality metrics for evolutionary software development models

    NASA Technical Reports Server (NTRS)

    Royce, Walker

    1990-01-01

    Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real world, Ada project.

  4. Averaged ratio between complementary profiles for evaluating shape distortions of map projections and spherical hierarchical tessellations

    NASA Astrophysics Data System (ADS)

    Yan, Jin; Song, Xiao; Gong, Guanghong

    2016-02-01

    We describe a metric named averaged ratio between complementary profiles to represent the distortion of map projections, and the shape regularity of spherical cells derived from map projections or non-map-projection methods. The properties and statistical characteristics of our metric are investigated. Our metric (1) is a variable of numerical equivalence to both scale component and angular deformation component of Tissot indicatrix, and avoids the invalidation when using Tissot indicatrix and derived differential calculus for evaluating non-map-projection based tessellations where mathematical formulae do not exist (e.g., direct spherical subdivisions), (2) exhibits simplicity (neither differential nor integral calculus) and uniformity in the form of calculations, (3) requires low computational cost, while maintaining high correlation with the results of differential calculus, (4) is a quasi-invariant under rotations, and (5) reflects the distortions of map projections, distortion of spherical cells, and the associated distortions of texels. As an indicator of quantitative evaluation, we investigated typical spherical tessellation methods, some variants of tessellation methods, and map projections. The tessellation methods we evaluated are based on map projections or direct spherical subdivisions. The evaluation involves commonly used Platonic polyhedrons, Catalan polyhedrons, etc. Quantitative analyses based on our metric of shape regularity and an essential metric of area uniformity implied that (1) Uniform Spherical Grids and its variant show good qualities in both area uniformity and shape regularity, and (2) Crusta, Unicube map, and a variant of Unicube map exhibit fairly acceptable degrees of area uniformity and shape regularity.

  5. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    PubMed

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  6. Organ quality metrics are a poor predictor of costs and resource utilization in deceased donor kidney transplantation.

    PubMed

    Stahl, Christopher C; Wima, Koffi; Hanseman, Dennis J; Hoehn, Richard S; Ertel, Audrey; Midura, Emily F; Hohmann, Samuel F; Paquette, Ian M; Shah, Shimul A; Abbott, Daniel E

    2015-12-01

    The desire to provide cost-effective care has led to an investigation of the costs of therapy for end-stage renal disease. Organ quality metrics are one way to attempt to stratify kidney transplants, although the ability of these metrics to predict costs and resource use is undetermined. The Scientific Registry of Transplant Recipients database was linked to the University HealthSystem Consortium Database to identify adult deceased donor kidney transplant recipients from 2009 to 2012. Patients were divided into cohorts by kidney criteria (standard vs expanded) or kidney donor profile index (KDPI) score (<85 vs 85+). Length of stay, 30-day readmission, discharge disposition, and delayed graft function were used as indicators of resource use. Cost was defined as reimbursement based on Medicare cost/charge ratios and included the costs of readmission when applicable. More than 19,500 patients populated the final dataset. Lower-quality kidneys (expanded criteria donor or KDPI 85+) were more likely to be transplanted in older (both P < .001) and diabetic recipients (both P < .001). After multivariable analysis controlling for recipient characteristics, we found that expanded criteria donor transplants were not associated with increased costs compared with standard criteria donor transplants (risk ratio [RR] 0.97, 95% confidence interval [CI] 0.93-1.00, P = .07). KDPI 85+ was associated with slightly lower costs than KDPI <85 transplants (RR 0.95, 95% CI 0.91-0.99, P = .02). When KDPI was considered as a continuous variable, the association was maintained (RR 0.9993, 95% CI 0.999-0.9998, P = .01). Organ quality metrics are less influential predictors of short-term costs than recipient factors. Future studies should focus on recipient characteristics as a way to discern high versus low cost transplantation procedures. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. The Nutrient Balance Concept: A New Quality Metric for Composite Meals and Diets

    PubMed Central

    Fern, Edward B; Watzke, Heribert; Barclay, Denis V.; Roulin, Anne; Drewnowski, Adam

    2015-01-01

    Background Combinations of foods that provide suitable levels of nutrients and energy are required for optimum health. Currently, however, it is difficult to define numerically what are ‘suitable levels’. Objective To develop new metrics based on energy considerations—the Nutrient Balance Concept (NBC)—for assessing overall nutrition quality when combining foods and meals. Method The NBC was developed using the USDA Food Composition Database (Release 27) and illustrated with their MyPlate 7-day sample menus for a 2000 calorie food pattern. The NBC concept is centered on three specific metrics for a given food, meal or diet—a Qualifying Index (QI), a Disqualifying Index (DI) and a Nutrient Balance (NB). The QI and DI were determined, respectively, from the content of 27 essential nutrients and 6 nutrients associated with negative health outcomes. The third metric, the Nutrient Balance (NB), was derived from the Qualifying Index (QI) and provided key information on the relative content of qualifying nutrients in the food. Because the Qualifying and Disqualifying Indices (QI and DI) were standardized to energy content, both become constants for a given food/meal/diet and a particular consumer age group, making it possible to develop algorithms for predicting nutrition quality when combining different foods. Results Combining different foods into composite meals and daily diets led to improved nutrition quality as seen by QI values closer to unity (indicating nutrient density was better equilibrated with energy density), DI values below 1.0 (denoting an acceptable level of consumption of disqualifying nutrients) and increased NB values (signifying complementarity of foods and better provision of qualifying nutrients). Conclusion The Nutrient Balance Concept (NBC) represents a new approach to nutrient profiling and the first step in the progression from the nutrient evaluation of individual foods to that of multiple foods in the context of meals and total diets. PMID:26176770
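
    As described above, the QI and DI normalise nutrient content to a fixed energy basis against reference values, and the NB summarises the relative provision of qualifying nutrients. The sketch below illustrates that arithmetic with a hypothetical meal, a handful of nutrients, and assumed reference values; it is not the authors' 27-nutrient implementation.

        def qualifying_index(amounts, references, energy_kcal, basis_kcal=2000.0):
            # QI: mean ratio of qualifying-nutrient content (scaled to basis_kcal) to its reference intake
            scale = basis_kcal / energy_kcal
            ratios = [amounts[n] * scale / references[n] for n in references]
            return sum(ratios) / len(ratios)

        def disqualifying_index(amounts, limits, energy_kcal, basis_kcal=2000.0):
            # DI: mean ratio of disqualifying-nutrient content (scaled to basis_kcal) to its maximal reference value
            scale = basis_kcal / energy_kcal
            ratios = [amounts[n] * scale / limits[n] for n in limits]
            return sum(ratios) / len(ratios)

        def nutrient_balance(amounts, references, energy_kcal, basis_kcal=2000.0):
            # NB: mean of per-nutrient qualifying ratios capped at 1.0, expressed as a percentage
            scale = basis_kcal / energy_kcal
            capped = [min(amounts[n] * scale / references[n], 1.0) for n in references]
            return 100.0 * sum(capped) / len(capped)

        # Hypothetical 650 kcal meal with three qualifying and two disqualifying nutrients
        meal = {"protein_g": 30, "fibre_g": 8, "vitamin_c_mg": 45, "sodium_mg": 900, "sat_fat_g": 9}
        refs = {"protein_g": 50, "fibre_g": 28, "vitamin_c_mg": 90}   # assumed daily reference intakes
        lims = {"sodium_mg": 2300, "sat_fat_g": 20}                   # assumed maximal reference values
        print(qualifying_index(meal, refs, 650), disqualifying_index(meal, lims, 650), nutrient_balance(meal, refs, 650))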

  8. A Metric and Workflow for Quality Control in the Analysis of Heterogeneity in Phenotypic Profiles and Screens

    PubMed Central

    Gough, Albert; Shun, Tongying; Taylor, D. Lansing; Schurdak, Mark

    2016-01-01

    Heterogeneity is well recognized as a common property of cellular systems that impacts biomedical research and the development of therapeutics and diagnostics. Several studies have shown that analysis of heterogeneity gives insight into mechanisms of action of perturbagens, can be used to predict optimal combination therapies, and can quantify heterogeneity in tumors, where heterogeneity is believed to be associated with adaptation and resistance. Cytometry methods including high content screening (HCS), high throughput microscopy, flow cytometry, mass spec imaging, and digital pathology capture cell-level data for populations of cells. However, it is often assumed that the population response is normally distributed and therefore that the average adequately describes the results. A deeper understanding of the results of the measurements, and more effective comparison of perturbagen effects, requires analysis that takes into account the distribution of the measurements, i.e., the heterogeneity. However, the reproducibility of heterogeneous data collected on different days, and in different plates/slides, has not previously been evaluated. Here we show that conventional assay quality metrics alone are not adequate for quality control of the heterogeneity in the data. To address this need, we demonstrate the use of the Kolmogorov-Smirnov statistic as a metric for monitoring the reproducibility of heterogeneity in an SAR screen and describe a workflow for quality control in heterogeneity analysis. One major challenge in high throughput biology is the evaluation and interpretation of heterogeneity in thousands of samples, such as compounds in a cell-based screen. In this study we also demonstrate that three previously reported heterogeneity indices capture the shapes of the distributions and provide a means to filter and browse big data sets of cellular distributions in order to compare and identify distributions of interest. These metrics and methods are presented as a workflow for analysis of heterogeneity in large-scale biology projects. PMID:26476369
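
    A minimal sketch of the two-sample Kolmogorov-Smirnov comparison described above, applied to two synthetic replicate distributions standing in for cell-level measurements of the same sample on different days (not screen data):

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(1)
        # Cell-level intensity measurements for the same compound on two plates/days
        plate_a = rng.lognormal(mean=1.0, sigma=0.40, size=2000)
        plate_b = rng.lognormal(mean=1.0, sigma=0.42, size=2000)

        ks_stat, p_value = ks_2samp(plate_a, plate_b)
        print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3g}")

        # A small KS statistic indicates the heterogeneous distributions are
        # reproducible; a large value flags a plate/day effect for follow-up QC.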

  9. The StreamCat Dataset: Accumulated Attributes for NHDPlusV2 Catchments (Version 2.1) for the Conterminous United States: Base Flow Index

    EPA Pesticide Factsheets

    This dataset represents the base flow index values within individual, local NHDPlusV2 catchments and upstream, contributing watersheds. Attributes of the landscape layer were calculated for every local NHDPlusV2 catchment and accumulated to provide watershed-level metrics. (See Supplementary Info for Glossary of Terms) The base-flow index (BFI) grid for the conterminous United States was developed to estimate (1) BFI values for ungaged streams, and (2) ground-water recharge throughout the conterminous United States (see Source_Information). Estimates of BFI values at ungaged streams and BFI-based ground-water recharge estimates are useful for interpreting relations between land use and water quality in surface and ground water. The bfi (%) was summarized by local catchment and by watershed to produce local catchment-level and watershed-level metrics as a continuous data type (see Data Structure and Attribute Information for a description).

  10. Health impact metrics for air pollution management strategies

    PubMed Central

    Martenies, Sheena E.; Wilkins, Donele; Batterman, Stuart A.

    2015-01-01

    Health impact assessments (HIAs) inform policy and decision making by providing information regarding future health concerns, and quantitative HIAs now are being used for local and urban-scale projects. HIA results can be expressed using a variety of metrics that differ in meaningful ways, and guidance is lacking with respect to best practices for the development and use of HIA metrics. This study reviews HIA metrics pertaining to air quality management and presents evaluative criteria for their selection and use. These are illustrated in a case study where PM2.5 concentrations are lowered from 10 to 8 µg/m3 in an urban area of 1.8 million people. Health impact functions are used to estimate the number of premature deaths, unscheduled hospitalizations and other morbidity outcomes. The most common metric in recent quantitative HIAs has been the number of cases of adverse outcomes avoided. Other metrics include time-based measures, e.g., disability-adjusted life years (DALYs), monetized impacts, functional-unit based measures, e.g., benefits per ton of emissions reduced, and other economic indicators, e.g., cost-benefit ratios. These metrics are evaluated by considering their comprehensiveness, the spatial and temporal resolution of the analysis, how equity considerations are facilitated, and the analysis and presentation of uncertainty. In the case study, the greatest number of avoided cases occurs for low severity morbidity outcomes, e.g., asthma exacerbations (n=28,000) and minor-restricted activity days (n=37,000); while DALYs and monetized impacts are driven by the severity, duration and value assigned to a relatively low number of premature deaths (n=190 to 230 per year). The selection of appropriate metrics depends on the problem context and boundaries, the severity of impacts, and community values regarding health. The number of avoided cases provides an estimate of the number of people affected, and monetized impacts facilitate additional economic analyses useful to policy analysis. DALYs are commonly used as an aggregate measure of health impacts and can be used to compare impacts across studies. Benefits per ton metrics may be appropriate when changes in emissions rates can be estimated. To address community concerns and HIA objectives, a combination of metrics is suggested. PMID:26372694
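
    Health impact functions of the kind aggregated by these metrics typically combine a baseline incidence rate, an exposed population, and a concentration-response coefficient; a minimal log-linear sketch follows, with illustrative parameter values rather than the inputs of the case study.

        import math

        def avoided_cases(baseline_rate, population, beta, delta_c):
            # Avoided cases per year for a log-linear concentration-response function:
            #   baseline_rate : baseline incidence (cases per person per year)
            #   population    : exposed population
            #   beta          : concentration-response coefficient per ug/m3
            #   delta_c       : reduction in PM2.5 concentration (ug/m3)
            return baseline_rate * population * (1.0 - math.exp(-beta * delta_c))

        # Illustrative (assumed) values for all-cause mortality in a large urban area
        print(round(avoided_cases(baseline_rate=0.008, population=1_800_000, beta=0.0058, delta_c=2.0), 1))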

  11. Evaluation of the performance of a micromethod for measuring urinary iodine by using six sigma quality metrics.

    PubMed

    Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud

    2013-09-01

    The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc)

  12. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    NASA Astrophysics Data System (ADS)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment Canada, is a coupled land-surface and hydrologic model. Results will demonstrate the conclusions a modeller might make regarding the value of additional watershed spatial discretization under both an aggregated (single-objective) and multi-objective model comparison framework.
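
    The Nash-Sutcliffe coefficient and bias mentioned above are simple functions of the observed and simulated series; a minimal sketch follows (the arrays are placeholders, not Wolf Creek data).

        import numpy as np

        def nash_sutcliffe(obs, sim):
            # NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 is a perfect fit
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def percent_bias(obs, sim):
            # Percent bias of the simulated volume relative to the observed volume
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 100.0 * (sim.sum() - obs.sum()) / obs.sum()

        observed = np.array([12.0, 15.0, 40.0, 35.0, 20.0, 14.0])   # e.g. daily streamflow (m3/s)
        simulated = np.array([10.0, 17.0, 36.0, 38.0, 18.0, 15.0])
        print(nash_sutcliffe(observed, simulated), percent_bias(observed, simulated))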

  13. Perceptual full-reference quality assessment of stereoscopic images by considering binocular visual characteristics.

    PubMed

    Shao, Feng; Lin, Weisi; Gu, Shanbo; Jiang, Gangyi; Srikanthan, Thambipillai

    2013-05-01

    Perceptual quality assessment is a challenging issue in 3D signal processing research. It is important to study 3D signal directly instead of studying simple extension of the 2D metrics directly to the 3D case as in some previous studies. In this paper, we propose a new perceptual full-reference quality assessment metric of stereoscopic images by considering the binocular visual characteristics. The major technical contribution of this paper is that the binocular perception and combination properties are considered in quality assessment. To be more specific, we first perform left-right consistency checks and compare matching error between the corresponding pixels in binocular disparity calculation, and classify the stereoscopic images into non-corresponding, binocular fusion, and binocular suppression regions. Also, local phase and local amplitude maps are extracted from the original and distorted stereoscopic images as features in quality assessment. Then, each region is evaluated independently by considering its binocular perception property, and all evaluation results are integrated into an overall score. Besides, a binocular just noticeable difference model is used to reflect the visual sensitivity for the binocular fusion and suppression regions. Experimental results show that compared with the relevant existing metrics, the proposed metric can achieve higher consistency with subjective assessment of stereoscopic images.

  14. Using self-organizing maps to develop ambient air quality classifications: a time series example

    PubMed Central

    2014-01-01

    Background Development of exposure metrics that capture features of the multipollutant environment are needed to investigate health effects of pollutant mixtures. This is a complex problem that requires development of new methodologies. Objective Present a self-organizing map (SOM) framework for creating ambient air quality classifications that group days with similar multipollutant profiles. Methods Eight years of day-level data from Atlanta, GA, for ten ambient air pollutants collected at a central monitor location were classified using SOM into a set of day types based on their day-level multipollutant profiles. We present strategies for using SOM to develop a multipollutant metric of air quality and compare results with more traditional techniques. Results Our analysis found that 16 types of days reasonably describe the day-level multipollutant combinations that appear most frequently in our data. Multipollutant day types ranged from conditions when all pollutants measured low to days exhibiting relatively high concentrations for either primary or secondary pollutants or both. The temporal nature of class assignments indicated substantial heterogeneity in day type frequency distributions (~1%-14%), relatively short-term durations (<2 day persistence), and long-term and seasonal trends. Meteorological summaries revealed strong day type weather dependencies and pollutant concentration summaries provided interesting scenarios for further investigation. Comparison with traditional methods found SOM produced similar classifications with added insight regarding between-class relationships. Conclusion We find SOM to be an attractive framework for developing ambient air quality classification because the approach eases interpretation of results by allowing users to visualize classifications on an organized map. The presented approach provides an appealing tool for developing multipollutant metrics of air quality that can be used to support multipollutant health studies. PMID:24990361
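
    A minimal sketch of classifying day-level multipollutant profiles with a self-organizing map. It assumes the third-party MiniSom package and a synthetic, standardised day-by-pollutant matrix in place of the Atlanta monitoring data; the 4 x 4 map size simply mirrors the 16 day types reported above.

        import numpy as np
        from minisom import MiniSom  # third-party package (assumed available)

        rng = np.random.default_rng(2)
        # Synthetic stand-in for 8 years of day-level data: 2920 days x 10 pollutants, standardised
        days = rng.normal(size=(2920, 10))

        # A 4 x 4 map allows up to 16 multipollutant "day types"
        som = MiniSom(4, 4, input_len=10, sigma=1.0, learning_rate=0.5, random_seed=0)
        som.train_random(days, 10000)

        # Assign each day to its best-matching node (its day type)
        day_types = [som.winner(d) for d in days]
        print(day_types[:5])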

  15. An adaptive block-based fusion method with LUE-SSIM for multi-focus images

    NASA Astrophysics Data System (ADS)

    Zheng, Jianing; Guo, Yongcai; Huang, Yukun

    2016-09-01

    Because of a lens's limited depth of field, digital cameras cannot acquire an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem, but block-based fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, an image quality metric, LUE-SSIM, is first proposed; it combines characteristics of the human visual system (HVS) with structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm that uses LUE-SSIM as the objective function then optimizes the block size used to construct the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus-blurred images. In addition, a multi-focus image fusion experiment was carried out to verify the proposed fusion method with visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and that it effectively preserves the undistorted-edge details in the in-focus regions of the source images.
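
    The structural similarity index that LUE-SSIM builds on can be computed with standard imaging libraries; the sketch below uses scikit-image's plain SSIM on synthetic images and does not reproduce the paper's HVS-based undistorted-edge weighting.

        import numpy as np
        from skimage.metrics import structural_similarity as ssim

        rng = np.random.default_rng(3)
        reference = rng.random((128, 128))
        degraded = reference + 0.05 * rng.standard_normal((128, 128))  # stand-in for a defocused block

        score, ssim_map = ssim(reference, degraded, data_range=1.0, full=True)
        print(round(score, 3))
        # In a block-based fusion scheme, per-block scores of this kind decide
        # which source image supplies each block of the fused result.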

  16. Toward determining melt pool quality metrics via coaxial monitoring in laser powder bed fusion.

    PubMed

    Fisher, Brian A; Lane, Brandon; Yeung, Ho; Beuth, Jack

    2018-01-01

    The current industry trend in metal additive manufacturing is toward greater real-time process monitoring capabilities during builds to ensure high-quality parts. While the hardware implementations that allow for real-time monitoring of the melt pool have advanced significantly, the knowledge required to correlate the generated data to useful metrics of interest is still lacking. This research presents promising results that aim to bridge this knowledge gap by determining a novel means to correlate easily obtainable sensor data (thermal emission) to key melt pool size metrics (e.g., melt pool cross-sectional area).

  17. Do Clinical Standards for Diabetes Care Address Excess Risk for Hypoglycemia in Vulnerable Patients? A Systematic Review

    PubMed Central

    Berkowitz, Seth A; Aragon, Katherine; Hines, Jonas; Seligman, Hilary; Lee, Sei; Sarkar, Urmimala

    2013-01-01

    Objective To determine whether diabetes clinical standards consider increased hypoglycemia risk in vulnerable patients. Data Sources MEDLINE, the National Guidelines Clearinghouse, the National Quality Measures Clearinghouse, and supplemental sources. Study Design Systematic review of clinical standards (guidelines, quality metrics, or pay-for-performance programs) for glycemic control in adult diabetes patients. The primary outcome was discussion of increased risk for hypoglycemia in vulnerable populations. Data Collection/Extraction Methods Manuscripts identified were abstracted by two independent reviewers using prespecified inclusion/exclusion criteria and a standardized abstraction form. Principal Findings We screened 1,166 titles, and reviewed 220 manuscripts in full text. Forty-four guidelines, 17 quality metrics, and 8 pay-for-performance programs were included. Five (11 percent) guidelines and no quality metrics or pay-for-performance programs met the primary outcome. Conclusions Clinical standards do not substantively incorporate evidence about increased risk for hypoglycemia in vulnerable populations. PMID:23445498

  18. Do clinical standards for diabetes care address excess risk for hypoglycemia in vulnerable patients? A systematic review.

    PubMed

    Berkowitz, Seth A; Aragon, Katherine; Hines, Jonas; Seligman, Hilary; Lee, Sei; Sarkar, Urmimala

    2013-08-01

    To determine whether diabetes clinical standards consider increased hypoglycemia risk in vulnerable patients. MEDLINE, the National Guidelines Clearinghouse, the National Quality Measures Clearinghouse, and supplemental sources. Systematic review of clinical standards (guidelines, quality metrics, or pay-for-performance programs) for glycemic control in adult diabetes patients. The primary outcome was discussion of increased risk for hypoglycemia in vulnerable populations. Manuscripts identified were abstracted by two independent reviewers using prespecified inclusion/exclusion criteria and a standardized abstraction form. We screened 1,166 titles, and reviewed 220 manuscripts in full text. Forty-four guidelines, 17 quality metrics, and 8 pay-for-performance programs were included. Five (11 percent) guidelines and no quality metrics or pay-for-performance programs met the primary outcome. Clinical standards do not substantively incorporate evidence about increased risk for hypoglycemia in vulnerable populations. © Health Research and Educational Trust.

  19. Quality Assurance Assessment of Diagnostic and Radiation Therapy–Simulation CT Image Registration for Head and Neck Radiation Therapy: Anatomic Region of Interest–based Comparison of Rigid and Deformable Algorithms

    PubMed Central

    Mohamed, Abdallah S. R.; Ruangskul, Manee-Naad; Awan, Musaddiq J.; Baron, Charles A.; Kalpathy-Cramer, Jayashree; Castillo, Richard; Castillo, Edward; Guerrero, Thomas M.; Kocak-Uzel, Esengul; Yang, Jinzhong; Court, Laurence E.; Kantor, Michael E.; Gunn, G. Brandon; Colen, Rivka R.; Frank, Steven J.; Garden, Adam S.; Rosenthal, David I.

    2015-01-01

    Purpose To develop a quality assurance (QA) workflow by using a robust, curated, manually segmented anatomic region-of-interest (ROI) library as a benchmark for quantitative assessment of different image registration techniques used for head and neck radiation therapy–simulation computed tomography (CT) with diagnostic CT coregistration. Materials and Methods Radiation therapy–simulation CT images and diagnostic CT images in 20 patients with head and neck squamous cell carcinoma treated with curative-intent intensity-modulated radiation therapy between August 2011 and May 2012 were retrospectively retrieved with institutional review board approval. Sixty-eight reference anatomic ROIs with gross tumor and nodal targets were then manually contoured on images from each examination. Diagnostic CT images were registered with simulation CT images rigidly and by using four deformable image registration (DIR) algorithms: atlas based, B-spline, demons, and optical flow. The resultant deformed ROIs were compared with manually contoured reference ROIs by using similarity coefficient metrics (ie, Dice similarity coefficient) and surface distance metrics (ie, 95% maximum Hausdorff distance). The nonparametric Steel test with control was used to compare different DIR algorithms with rigid image registration (RIR) by using the post hoc Wilcoxon signed-rank test for stratified metric comparison. Results A total of 2720 anatomic and 50 tumor and nodal ROIs were delineated. All DIR algorithms showed improved performance over RIR for anatomic and target ROI conformance, as shown for most comparison metrics (Steel test, P < .008 after Bonferroni correction). The performance of different algorithms varied substantially with stratification by specific anatomic structures or category and simulation CT section thickness. Conclusion Development of a formal ROI-based QA workflow for registration assessment demonstrated improved performance with DIR techniques over RIR. After QA, DIR implementation should be the standard for head and neck diagnostic CT and simulation CT alignment, especially for target delineation. © RSNA, 2014 Online supplemental material is available for this article. PMID:25380454
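
    A minimal sketch of the Dice similarity coefficient used above to score agreement between deformed and reference ROIs (the binary masks are synthetic; the surface-distance metrics such as the 95% Hausdorff distance would additionally require the contour point sets).

        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            # Dice similarity coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)
            a = np.asarray(mask_a, bool)
            b = np.asarray(mask_b, bool)
            intersection = np.logical_and(a, b).sum()
            return 2.0 * intersection / (a.sum() + b.sum())

        # Synthetic example: reference ROI vs. a deformed contour shifted by two voxels
        ref = np.zeros((64, 64), bool)
        ref[20:40, 20:40] = True
        deformed = np.zeros((64, 64), bool)
        deformed[22:42, 20:40] = True
        print(round(dice_coefficient(ref, deformed), 3))  # -> 0.9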

  20. SU-C-9A-02: Structured Noise Index as An Automated Quality Control for Nuclear Medicine: A Two Year Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, J; Christianson, O; Samei, E

    Purpose: Flood-field uniformity evaluation is an essential element in the assessment of nuclear medicine (NM) gamma cameras. It serves as the central element of the quality control (QC) program, acquired and analyzed on a daily basis prior to clinical imaging. Uniformity images are traditionally analyzed using pixel value-based metrics which often fail to capture subtle structure and patterns caused by changes in gamma camera performance, requiring additional visual inspection which is subjective and time demanding. The goal of this project was to develop and implement a robust QC metrology for NM that is effective in identifying non-uniformity issues, reporting issues in a timely manner for efficient correction prior to clinical involvement, all incorporated into an automated effortless workflow, and to characterize the program over a two year period. Methods: A new quantitative uniformity analysis metric was developed based on 2D noise power spectrum metrology and confirmed based on expert observer visual analysis. The metric, termed Structured Noise Index (SNI), was then integrated into an automated program to analyze, archive, and report on daily NM QC uniformity images. The effectiveness of the program was evaluated over a period of 2 years. Results: The SNI metric successfully identified visually apparent non-uniformities overlooked by the pixel value-based analysis methods. Implementation of the program has resulted in non-uniformity identification in about 12% of daily flood images. In addition, due to the vigilance of staff response, the percentage of days exceeding the trigger value shows a decline over time. Conclusion: The SNI provides a robust quantification of the NM performance of gamma camera uniformity. It operates seamlessly across a fleet of multiple camera models. The automated process provides effective workflow within the NM spectra between physicist, technologist, and clinical engineer. The reliability of this process has made it the preferred platform for NM uniformity analysis.
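
    The SNI described above is built on a 2D noise power spectrum (NPS) of the daily flood-field image; the sketch below shows only that NPS step on a synthetic flood image, not the full metric, which additionally applies a visual-response filter and separates structured from quantum noise.

        import numpy as np

        def noise_power_spectrum_2d(flood, pixel_size_mm=1.0):
            # 2D NPS of a uniformity image: |FFT(I - mean(I))|^2 * (dx * dy) / (Nx * Ny)
            flood = np.asarray(flood, float)
            residual = flood - flood.mean()
            nx, ny = flood.shape
            nps = np.abs(np.fft.fftshift(np.fft.fft2(residual))) ** 2
            return nps * (pixel_size_mm ** 2) / (nx * ny)

        rng = np.random.default_rng(4)
        # Synthetic flood image: Poisson quantum noise plus a faint low-frequency structure
        y, x = np.mgrid[0:256, 0:256]
        structure = 1.0 + 0.02 * np.sin(2 * np.pi * x / 128.0)
        flood = rng.poisson(500 * structure).astype(float)
        nps = noise_power_spectrum_2d(flood, pixel_size_mm=2.2)
        print(nps.shape, nps.max())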

  1. Correlation of the clinical and physical image quality in chest radiography for average adults with a computed radiography imaging system

    PubMed Central

    Wood, T J; Beavis, A W; Saunderson, J R

    2013-01-01

    Objective: The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. Methods: The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson’s correlation coefficient. Results: Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Conclusion: Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. Advances in knowledge: A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography. PMID:23568362
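
    A minimal sketch of the contrast-to-noise ratio used here as a physical image quality metric; the image and region coordinates are synthetic placeholders rather than the chest-phantom measurements of the study.

        import numpy as np

        def cnr(image, signal_roi, background_roi):
            # Contrast-to-noise ratio: |mean(signal) - mean(background)| / std(background),
            # where each ROI is a (row_slice, col_slice) pair
            signal = image[signal_roi]
            background = image[background_roi]
            return abs(signal.mean() - background.mean()) / background.std()

        rng = np.random.default_rng(5)
        phantom = rng.normal(100.0, 5.0, size=(256, 256))   # uniform background
        phantom[100:140, 100:140] += 12.0                    # low-contrast insert
        print(round(cnr(phantom, (slice(100, 140), slice(100, 140)),
                        (slice(10, 50), slice(10, 50))), 2))  # roughly 2.4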

  2. Related Critical Psychometric Issues and Their Resolutions during Development of PE Metrics

    ERIC Educational Resources Information Center

    Fox, Connie; Zhu, Weimo; Park, Youngsik; Fisette, Jennifer L.; Graber, Kim C.; Dyson, Ben; Avery, Marybell; Franck, Marian; Placek, Judith H.; Rink, Judy; Raynes, De

    2011-01-01

    In addition to validity and reliability evidence, other psychometric qualities of the PE Metrics assessments needed to be examined. This article describes how those critical psychometric issues were addressed during the PE Metrics assessment bank construction. Specifically, issues included (a) number of items or assessments needed, (b) training…

  3. Anastomotic leak after colorectal resection: A population-based study of risk factors and hospital variation.

    PubMed

    Nikolian, Vahagn C; Kamdar, Neil S; Regenbogen, Scott E; Morris, Arden M; Byrn, John C; Suwanabol, Pasithorn A; Campbell, Darrell A; Hendren, Samantha

    2017-06-01

    Anastomotic leak is a major source of morbidity in colorectal operations and has become an area of interest in performance metrics. It is unclear whether anastomotic leak is associated primarily with surgeons' technical performance or explained better by patient characteristics and institutional factors. We sought to establish if anastomotic leak could serve as a valid quality metric in colorectal operations by evaluating provider variation after adjusting for patient factors. We performed a retrospective cohort study of colorectal resection patients in the Michigan Surgical Quality Collaborative. Clinically relevant patient and operative factors were tested for association with anastomotic leak. Hierarchical logistic regression was used to derive risk-adjusted rates of anastomotic leak. Of 9,192 colorectal resections, 244 (2.7%) had a documented anastomotic leak. The incidence of anastomotic leak was 3.0% for patients with pelvic anastomoses and 2.5% for those with intra-abdominal anastomoses. Multivariable analysis showed that a greater operative duration, male sex, body mass index >30 kg/m², tobacco use, chronic immunosuppressive medications, thrombocytosis (platelet count >400 × 10⁹/L), and urgent/emergency operations were independently associated with anastomotic leak (C-statistic = 0.75). After accounting for patient and procedural risk factors, 5 hospitals had a significantly greater incidence of postoperative anastomotic leak. This population-based study shows that risk factors for anastomotic leak include male sex, obesity, tobacco use, immunosuppression, thrombocytosis, greater operative duration, and urgent/emergency operation; models including these factors predict most of the variation in anastomotic leak rates. This study suggests that anastomotic leak can serve as a valid metric that can identify opportunities for quality improvement. Copyright © 2017 Elsevier Inc. All rights reserved.
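
    The risk adjustment described above can be illustrated with a simplified, non-hierarchical sketch: fit a patient-level logistic regression and compare each hospital's observed leak count with its expected count (the sum of predicted probabilities). The column names and synthetic data are assumptions for illustration only; the study itself used hierarchical logistic regression on registry data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        np.random.seed(0)
        df = pd.DataFrame({
            "leak": np.random.binomial(1, 0.03, 2000),
            "male": np.random.binomial(1, 0.5, 2000),
            "bmi_gt30": np.random.binomial(1, 0.3, 2000),
            "urgent": np.random.binomial(1, 0.2, 2000),
            "hospital": np.random.choice(list("ABCDE"), 2000),
        })
        model = smf.logit("leak ~ male + bmi_gt30 + urgent", data=df).fit(disp=0)
        df["expected"] = model.predict(df)

        summary = df.groupby("hospital")[["leak", "expected"]].sum()
        summary["oe_ratio"] = summary["leak"] / summary["expected"]
        print(summary)   # observed-to-expected ratio >1 suggests excess leaks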

  4. A fish-based index of biotic integrity to assess intermittent headwater streams in Wisconsin, USA.

    PubMed

    Lyons, John

    2006-11-01

    I developed a fish-based index of biotic integrity (IBI) to assess environmental quality in intermittent headwater streams in Wisconsin, USA. Backpack electrofishing and habitat surveys were conducted four times on 102 small (watershed area 1.7-41.5 km²), cool or warmwater (maximum daily mean water temperature ≥ 22 °C), headwater streams in spring and late summer/fall 2000 and 2001. Despite seasonal and annual changes in stream flow and habitat volume, there were few significant temporal trends in fish attributes. Analysis of 36 least-impacted streams indicated that fish were too scarce to calculate an IBI at stations with watershed areas less than 4 km² or at stations with watershed areas from 4-10 km² if stream gradient exceeded 10 m/km (1% slope). For streams with sufficient fish, potential fish attributes (metrics) were not related to watershed size or gradient. Seven metrics distinguished among streams with low, agricultural, and urban human impacts: numbers of native, minnow (Cyprinidae), headwater-specialist, and intolerant (to environmental degradation) species; catches of all fish excluding species tolerant of environmental degradation and of brook stickleback (Culaea inconstans) per 100 m stream length; and percentage of total individuals with deformities, eroded fins, lesions, or tumors. These metrics were used in the final IBI, which ranged from 0 (worst) to 100 (best). The IBI accurately assessed the environmental quality of 16 randomly chosen streams not used in index development. Temporal variation in IBI scores in the absence of changes in environmental quality was not related to season, year, or type of human impact and was similar in magnitude to variation reported for other IBIs.
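
    A multimetric index of this kind is typically assembled by scaling each metric against reference expectations and rescaling the sum to 0-100. The sketch below shows that mechanic only; the metric names, reference ranges, and scoring rules are placeholders, not the Wisconsin IBI's actual criteria.

        # Hypothetical (worst, best) expectations per metric; a reversed pair
        # means lower values are better (e.g., percent anomalies).
        REFERENCE_RANGES = {
            "native_species": (0, 10),
            "intolerant_species": (0, 4),
            "pct_anomalies": (10, 0),
        }

        def score_metric(value, worst, best):
            span = best - worst
            scaled = (value - worst) / span if span else 0.0
            return 10 * min(max(scaled, 0.0), 1.0)   # clamp to a 0-10 sub-score

        def ibi_score(sample):
            scores = [score_metric(sample[name], *bounds)
                      for name, bounds in REFERENCE_RANGES.items()]
            return 100 * sum(scores) / (10 * len(scores))   # rescale to 0-100

        print(ibi_score({"native_species": 7, "intolerant_species": 2, "pct_anomalies": 3}))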

  5. Benefits of utilizing CellProfiler as a characterization tool for U-10Mo nuclear fuel

    DOE PAGES

    Collette, R.; Douglas, J.; Patterson, L.; ...

    2015-05-01

    Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person-to-person or sample-to-sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium-molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to ‘pass’ or ‘fail’ an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which employs a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on the illumination, focus, and scratch fraction of the image, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries.
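
    A rough stand-in for the three-parameter quality gate described above is sketched below: estimate an illumination gradient from a heavily smoothed copy of the image, use the variance of the Laplacian as a focus score, and treat the fraction of strong edge pixels as a scratch proxy. The thresholds and formulas are assumptions, not the CellProfiler pipeline's actual modules.

        import numpy as np
        from scipy import ndimage

        def quality_check(image, grad_limit=0.15, focus_min=50.0, scratch_limit=0.02):
            img = image.astype(float)
            smooth = ndimage.gaussian_filter(img, sigma=img.shape[0] / 8)
            illumination_gradient = (smooth.max() - smooth.min()) / smooth.mean()
            focus = ndimage.laplace(img).var()                  # variance of Laplacian
            edges = np.abs(ndimage.sobel(img)) > 4 * img.std()  # crude scratch proxy
            scratch_fraction = edges.mean()
            passed = (illumination_gradient < grad_limit
                      and focus > focus_min
                      and scratch_fraction < scratch_limit)
            return passed, {"illumination_gradient": illumination_gradient,
                            "focus": focus, "scratch_fraction": scratch_fraction}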

  6. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)

    PubMed Central

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the U.S. National Cancer Institute (NCI) convened the “International Workshop on Proteomic Data Quality Metrics” in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: (1) an evolving list of comprehensive quality metrics and (2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22053864

  7. Recommendations for Mass Spectrometry Data Quality Metrics for Open Access Data (Corollary to the Amsterdam Principles)*

    PubMed Central

    Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry

    2011-01-01

    Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the “International Workshop on Proteomic Data Quality Metrics” in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22052993

  8. Tools for observational gait analysis in patients with stroke: a systematic review.

    PubMed

    Ferrarello, Francesco; Bianchi, Valeria Anna Maria; Baccini, Marco; Rubbieri, Gaia; Mossello, Enrico; Cavallini, Maria Chiara; Marchionni, Niccolò; Di Bari, Mauro

    2013-12-01

    Stroke severely affects walking ability, and assessment of gait kinematics is important in defining diagnosis, planning treatment, and evaluating interventions in stroke rehabilitation. Although observational gait analysis is the most common approach to evaluate gait kinematics, tools useful for this purpose have received little attention in the scientific literature and have not been thoroughly reviewed. The aims of this systematic review were to identify tools proposed to conduct observational gait analysis in adults with a stroke, to summarize evidence concerning their quality, and to assess their implementation in rehabilitation research and clinical practice. An extensive search was performed of original articles reporting on visual/observational tools developed to investigate gait kinematics in adults with a stroke. Two reviewers independently selected studies, extracted data, assessed quality of the included studies, and scored the metric properties and clinical utility of each tool. Rigor in reporting metric properties and dissemination of the tools also was evaluated. Five tools were identified, not all of which had been tested adequately for their metric properties. Evaluation of content validity was partially satisfactory. Reliability was poorly investigated in all but one tool. Concurrent validity and sensitivity to change were shown for 3 and 2 tools, respectively. Overall, adequate levels of quality were rarely reached. The dissemination of the tools was poor. Based on critical appraisal, the Gait Assessment and Intervention Tool shows a good level of quality, and its use in stroke rehabilitation is recommended. Rigorous studies are needed for the other tools in order to establish their usefulness.

  9. An algal model for predicting attainment of tiered biological criteria of Maine's streams and rivers

    USGS Publications Warehouse

    Danielson, Thomas J.; Loftin, Cyndy; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth; Courtemanch, David L.; Drummond, Francis; Davies, Susan

    2012-01-01

    State water-quality professionals developing new biological assessment methods often have difficulty relating assessment results to narrative criteria in water-quality standards. An alternative to selecting index thresholds arbitrarily is to include the Biological Condition Gradient (BCG) in the development of the assessment method. The BCG describes tiers of biological community condition to help identify and communicate the position of a water body along a gradient of water quality ranging from natural to degraded. Although originally developed for fish and macroinvertebrate communities of streams and rivers, the BCG is easily adapted to other habitats and taxonomic groups. We developed a discriminant analysis model with stream algal data to predict attainment of tiered aquatic-life uses in Maine's water-quality standards. We modified the BCG framework for Maine stream algae, related the BCG tiers to Maine's tiered aquatic-life uses, and identified appropriate algal metrics for describing BCG tiers. Using a modified Delphi method, 5 aquatic biologists independently evaluated algal community metrics for 230 samples from streams and rivers across the state and assigned a BCG tier (1–6) and Maine water quality class (AA/A, B, C, nonattainment of any class) to each sample. We used minimally disturbed reference sites to approximate natural conditions (Tier 1). Biologist class assignments were unanimous for 53% of samples, and 42% of samples differed by 1 class. The biologists debated and developed consensus class assignments. A linear discriminant model built to replicate a priori class assignments correctly classified 95% of 150 samples in the model training set and 91% of 80 samples in the model validation set. Locally derived metrics based on BCG taxon tolerance groupings (e.g., sensitive, intermediate, tolerant) were more effective than were metrics developed in other regions. Adding the algal discriminant model to Maine's existing macroinvertebrate discriminant model will broaden detection of biological impairment and further diagnose sources of impairment. The algal discriminant model is specific to Maine, but our approach of explicitly tying an assessment tool to tiered aquatic-life goals is widely transferrable to other regions, taxonomic groups, and waterbody types.
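
    The modeling step described above can be mimicked with an off-the-shelf linear discriminant analysis: train on consensus class assignments, then check agreement on a held-out validation set. The metric values, class labels, and split sizes below are synthetic placeholders, not the Maine dataset.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        classes = np.array(["AA/A", "B", "C", "NA"])
        y = rng.choice(classes, size=230)                      # consensus assignments
        shift = {"AA/A": 2.0, "B": 1.0, "C": 0.0, "NA": -1.0}  # class-dependent signal
        X = rng.normal(size=(230, 5)) + np.array([shift[c] for c in y])[:, None]

        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=80, random_state=0)
        lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
        print("training accuracy:  ", lda.score(X_tr, y_tr))
        print("validation accuracy:", lda.score(X_va, y_va))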

  10. Bayesian performance metrics of binary sensors in homeland security applications

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Forrester, Thomas C.

    2008-04-01

    Bayesian performance metrics, based on such parameters as prior probability, probability of detection (or accuracy), false alarm rate, and positive predictive value, characterize the performance of binary sensors; i.e., sensors that have only a binary response: true target/false target. Such binary sensors, very common in Homeland Security, produce an alarm that can be true or false. They include: X-ray airport inspection, IED inspections, product quality control, cancer medical diagnosis, part of ATR, and many others. In this paper, we analyze direct and inverse conditional probabilities in the context of Bayesian inference and binary sensors, using X-ray luggage inspection statistical results as a guideline.
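
    The inverse conditional probability at the heart of this analysis is Bayes' rule: the positive predictive value of an alarm follows from the prior probability of a target, the probability of detection, and the false alarm rate. The numbers below are illustrative only.

        def positive_predictive_value(prior, p_detection, false_alarm_rate):
            # P(target | alarm) from the sensor's intrinsic rates via Bayes' rule.
            p_alarm = p_detection * prior + false_alarm_rate * (1 - prior)
            return p_detection * prior / p_alarm

        # With rare targets, even a good detector produces mostly false alarms.
        print(positive_predictive_value(prior=0.001, p_detection=0.95, false_alarm_rate=0.05))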

  11. Developments in Seismic Data Quality Assessment Using MUSTANG at the IRIS DMC

    NASA Astrophysics Data System (ADS)

    Sharer, G.; Keyson, L.; Templeton, M. E.; Weertman, B.; Smith, K.; Sweet, J. R.; Tape, C.; Casey, R. E.; Ahern, T.

    2017-12-01

    MUSTANG is the automated data quality metrics system at the IRIS Data Management Center (DMC), designed to help characterize data and metadata "goodness" across the IRIS data archive, which holds 450 TB of seismic and related earth science data spanning the past 40 years. It calculates 46 metrics ranging from sample statistics and miniSEED state-of-health flag counts to Power Spectral Densities (PSDs) and Probability Density Functions (PDFs). These quality measurements are easily and efficiently accessible to users through the use of web services, which allows users to make requests not only by station and time period but also to filter the results according to metric values that match a user's data requirements. Results are returned in a variety of formats, including XML, JSON, CSV, and text. In the case of PSDs and PDFs, results can also be retrieved as plot images. In addition, there are several user-friendly client tools available for exploring and visualizing MUSTANG metrics: LASSO, MUSTANG Databrowser, and MUSTANGular. Over the past year we have made significant improvements to MUSTANG. We have nearly complete coverage over our archive for broadband channels with sample rates of 20-200 sps. With this milestone achieved, we are now expanding to include higher sample rate, short-period, and strong-motion channels. Data availability metrics will soon be calculated when a request is made which guarantees that the information reflects the current state of the archive and also allows for more flexibility in content. For example, MUSTANG will be able to return a count of gaps for any arbitrary time period instead of being limited to 24 hour spans. We are also promoting the use of data quality metrics beyond the IRIS archive through our recent release of ISPAQ, a Python command-line application that calculates MUSTANG-style metrics for users' local miniSEED files or for any miniSEED data accessible through FDSN-compliant web services. Finally, we will explore how researchers are using MUSTANG in real-world situations to select data, improve station data quality, anticipate station outages and servicing, and characterize site noise and environmental conditions.

  12. Accounting for both local aquatic community composition and bioavailability in setting site-specific quality standards for zinc.

    PubMed

    Peters, Adam; Simpson, Peter; Moccia, Alessandra

    2014-01-01

    Recent years have seen considerable improvement in water quality standards (QS) for metals by taking account of the effect of local water chemistry conditions on their bioavailability. We describe preliminary efforts to further refine water quality standards, by taking account of the composition of the local ecological community (the ultimate protection objective) in addition to bioavailability. Relevance of QS to the local ecological community is critical as it is important to minimise instances where quality classification using QS does not reconcile with a quality classification based on an assessment of the composition of the local ecology (e.g. using benthic macroinvertebrate quality assessment metrics such as River InVertebrate Prediction and Classification System (RIVPACS)), particularly where ecology is assessed to be at good or better status, whilst chemical quality is determined to be failing relevant standards. The alternative approach outlined here describes a method to derive a site-specific species sensitivity distribution (SSD) based on the ecological community which is expected to be present at the site in the absence of anthropogenic pressures (reference conditions). The method combines a conventional laboratory ecotoxicity dataset normalised for bioavailability with field measurements of the response of benthic macroinvertebrate abundance to chemical exposure. Site-specific QSref are then derived from the 5%ile of this SSD. Using this method, site QSref have been derived for zinc in an area impacted by historic mining activities. Application of QSref can result in greater agreement between chemical and ecological metrics of environmental quality compared with the use of either conventional (QScon) or bioavailability-based QS (QSbio). In addition to zinc, the approach is likely to be applicable to other metals and possibly other types of chemical stressors (e.g. pesticides). However, the methodology for deriving site-specific targets requires additional development and validation before they can be robustly applied during surface water classification.
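
    A species sensitivity distribution of the kind referred to above is commonly summarized by its 5th percentile (HC5), which then anchors the quality standard. The sketch below fits a log-normal SSD to species-level effect concentrations; the toxicity values are placeholders, not the bioavailability-normalized zinc dataset from the study.

        import numpy as np
        from scipy import stats

        ec_ug_per_l = np.array([120.0, 85.0, 300.0, 45.0, 150.0, 60.0, 210.0, 95.0])
        mu = np.log(ec_ug_per_l).mean()
        sigma = np.log(ec_ug_per_l).std(ddof=1)
        hc5 = np.exp(stats.norm.ppf(0.05, loc=mu, scale=sigma))
        print(f"HC5 (5th percentile of the SSD): {hc5:.1f} ug/L")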

  13. Feasibility of Turing-Style Tests for Autonomous Aerial Vehicle "Intelligence"

    NASA Technical Reports Server (NTRS)

    Young, Larry A.

    2007-01-01

    A new approach is suggested to define and evaluate key metrics of autonomous aerial vehicle performance. This approach entails the conceptual definition of a "Turing Test" for UAVs. Such a "UAV Turing test" would be conducted by means of mission simulations and/or tailored flight demonstrations of vehicles under the guidance of their autonomous system software. These autonomous vehicle mission simulations and flight demonstrations would also have to be benchmarked against missions "flown" with pilots/human-operators in the loop. In turn, scoring criteria for such testing could be based upon both quantitative mission success metrics (unique to each mission) and analog "handling quality" metrics similar to the well-known Cooper-Harper pilot ratings used for manned aircraft. Autonomous aerial vehicles would be considered to have successfully passed this "UAV Turing Test" if the aggregate mission success metrics and handling qualities for the autonomous aerial vehicle matched or exceeded the equivalent metrics for missions conducted with pilots/human-operators in the loop. Alternatively, an independent, knowledgeable observer could provide the "UAV Turing Test" ratings of whether a vehicle is autonomous or "piloted." This observer ideally would, in the more sophisticated mission simulations, also have the enhanced capability of being able to override the scripted mission scenario and instigate failure modes and changes of flight profile/plans. If a majority of mission tasks are rated as "piloted" by the observer, when in reality the vehicle/simulation is fully- or semi-autonomously controlled, then the vehicle/simulation "passes" the "UAV Turing Test." In this regard, this second "UAV Turing Test" approach is more consistent with Turing's original "imitation game" proposal. The overall feasibility, and important considerations and limitations, of such an approach for judging/evaluating autonomous aerial vehicle "intelligence" will be discussed from a theoretical perspective.

  14. Strategic Agility: Using the Expeditionary Aerospace Force as a Framework for Assuring Strategic Relevancy in the USAF

    DTIC Science & Technology

    2014-06-01

    ...increases quality of life, which, in turn, leads to better retention metrics; better retention metrics translate into higher experience levels... the quality of life for Airmen, particularly two-parent military families assigned to different AEFs. Cognizant of an already high operations... a desire to achieve the highest quality of life for Airmen. Ryan settled on a 1:4 AEF dwell ratio to ensure Airmen were not away from home station...

  15. Getting started on metrics - Jet Propulsion Laboratory productivity and quality

    NASA Technical Reports Server (NTRS)

    Bush, M. W.

    1990-01-01

    A review is presented to describe the effort and difficulties of reconstructing fifteen years of JPL software history. In 1987 the collection and analysis of project data were started with the objective of creating laboratory-wide measures of quality and productivity for software development. As a result of this two-year Software Product Assurance metrics study, a rough measurement foundation for software productivity and software quality, and an order-of-magnitude quantitative baseline for software systems and subsystems are now available.

  16. Lessons for Broadening School Accountability under the Every Student Succeeds Act. Strategy Paper

    ERIC Educational Resources Information Center

    Schanzenbach, Diane Whitmore; Bauer, Lauren; Mumford, Megan

    2016-01-01

    A quality education that promotes learning among all students is a prerequisite for an economy that increases opportunity, prosperity, and growth. School accountability policies, in which school performance is evaluated based on identified metrics, have developed over the past few decades as a strategy central to assessing and achieving progress…

  17. Development of a multimetric index based on benthic macroinvertebrates for the assessment of urban stream health in Jinan City, China.

    PubMed

    Liu, Linfei; Xu, Zongxue; Yin, Xuwang; Li, Fulin; Dou, Tongwen

    2017-05-01

    Assessment of the health of urban streams is an important theoretical and practical topic, which is related to the impacts of physicochemical processes, hydrological modifications, and the biological community. However, previous assessments of urban water quality were predominantly conducted by measuring physical and chemical factors rather than by biological monitoring. The purpose of this study was to develop an urban stream multimetric index (USMI) based on benthic macroinvertebrates to assess the health of the aquatic ecosystem in Jinan City. Two hundred and eighty-eight samples were collected during two consecutive years (2014-2015) from 48 sites located within the city. Metrics related to benthic macroinvertebrate richness, diversity, composition and abundance, and functional feeding groups were selected using box-plots and the Kruskal-Wallis test. The final index derived from the selected metrics was divided into five river quality classes (excellent, good, moderate, poor, and bad). A validation procedure using box-plots and the non-parametric Mann-Whitney U test showed that the USMI was useful for assessing the health of urban streams.
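
    The metric-screening step described above can be illustrated with a Kruskal-Wallis test: a candidate metric is retained only if it separates site groups of differing condition. The values below are synthetic placeholders, not the Jinan City data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        reference_sites = rng.normal(20, 4, 30)   # e.g., taxa richness at reference sites
        impaired_sites = rng.normal(12, 4, 30)    # same metric at impaired sites

        h, p = stats.kruskal(reference_sites, impaired_sites)
        print(f"H = {h:.2f}, p = {p:.4f}")
        print("retain metric" if p < 0.05 else "discard metric")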

  18. Patient Protection and Affordable Care Act: Potential Effects on Physical Medicine and Rehabilitation

    PubMed Central

    Boninger, Joseph W.; Gans, Bruce M.; Chan, Leighton

    2012-01-01

    The objective was to review pertinent areas of the Patient Protection and Affordable Care Act (PPACA) to determine the PPACA’s impact on physical medicine and rehabilitation (PM&R). The law, and related newspaper and magazine articles, was reviewed. The ways in which provisions in the PPACA are being implemented by the Centers for Medicare and Medicaid Services and other government organizations were investigated. Additionally, recent court rulings on the PPACA were analyzed to assess the law’s chances of successful implementation. The PPACA contains a variety of reforms that, if implemented, will significantly impact the field of PM&R. Many PPACA reforms change how rehabilitative care is delivered by integrating different levels of care and creating uniform quality metrics to assess quality and efficiency. These quality metrics will ultimately be tied to new, performance-based payment systems. While the law contains ambitious initiatives that may, if unsuccessful or incorrectly implemented, negatively impact PM&R, it also has the potential to greatly improve the quality and efficiency of rehabilitative care. A proactive approach to the changes the PPACA will bring about is essential for the health of the field. PMID:22459177

  19. A New Look at Data Usage by Using Metadata Attributes as Indicators of Data Quality

    NASA Astrophysics Data System (ADS)

    Won, Y. I.; Wanchoo, L.; Behnke, J.

    2016-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) stores and distributes data from EOS satellites, as well as ancillary, airborne, in-situ, and socio-economic data. Twelve EOSDIS data centers support different scientific disciplines by providing products and services tailored to specific science communities. Although discipline-oriented, these data centers provide common data management functions of ingest, archive, and distribution, as well as documentation of their data and services on their websites. The Earth Science Data and Information System (ESDIS) Project collects metrics on these functions from the EOSDIS data centers on a daily basis through a tool called the ESDIS Metrics System (EMS). These metrics are used in this study. The implementation of the Earthdata Login - formerly known as the User Registration System (URS) - across the various NASA data centers provides the EMS additional information about users obtaining data products from EOSDIS data centers. These additional user attributes collected by the Earthdata Login, such as the user's primary area of study, can augment the understanding of data usage, which in turn can help the EOSDIS program better understand the users' needs. This study will review the key metrics (users, distributed volume, and files) in multiple ways to gain an understanding of the significance of the metadata. Characterizing the usability of data by key metadata elements, such as discipline and study area, will assist in understanding how the users have evolved over time. The data usage pattern based on version numbers may also provide some insight into the level of data quality. In addition, the data metrics by various services such as the Open-source Project for a Network Data Access Protocol (OPeNDAP), Web Map Service (WMS), Web Coverage Service (WCS), and subsets will address how these services have extended the usage of data. Overall, this study will present data and metadata usage through metrics analyses and will assist data centers in better supporting the needs of the users.

  20. ICU Director Data

    PubMed Central

    Ogbu, Ogbonna C.; Coopersmith, Craig M.

    2015-01-01

    Improving value within critical care remains a priority because it represents a significant portion of health-care spending, faces high rates of adverse events, and inconsistently delivers evidence-based practices. ICU directors are increasingly required to understand all aspects of the value provided by their units to inform local improvement efforts and relate effectively to external parties. A clear understanding of the overall process of measuring quality and value as well as the strengths, limitations, and potential application of individual metrics is critical to supporting this charge. In this review, we provide a conceptual framework for understanding value metrics, describe an approach to developing a value measurement program, and summarize common metrics to characterize ICU value. We first summarize how ICU value can be represented as a function of outcomes and costs. We expand this equation and relate it to both the classic structure-process-outcome framework for quality assessment and the Institute of Medicine’s six aims of health care. We then describe how ICU leaders can develop their own value measurement process by identifying target areas, selecting appropriate measures, acquiring the necessary data, analyzing the data, and disseminating the findings. Within this measurement process, we summarize common metrics that can be used to characterize ICU value. As health care, in general, and critical care, in particular, changes and data become more available, it is increasingly important for ICU leaders to understand how to effectively acquire, evaluate, and apply data to improve the value of care provided to patients. PMID:25846533

  1. Weighted-MSE based on saliency map for assessing video quality of H.264 video streams

    NASA Astrophysics Data System (ADS)

    Boujut, H.; Benois-Pineau, J.; Hadar, O.; Ahmed, T.; Bonnet, P.

    2011-01-01

    The human visual system is very complex and has been studied for many years, specifically for purposes of efficient encoding of visual content, e.g. video content from digital TV. There is physiological and psychological evidence indicating that viewers do not pay equal attention to all exposed visual information, but only focus on certain areas known as focus of attention (FOA) or saliency regions. In this work, we propose a novel saliency-based objective quality assessment metric for assessing the perceptual quality of decoded video sequences affected by transmission errors and packet losses. The proposed method weights the Mean Square Error (MSE), yielding a Weighted-MSE (WMSE), according to the calculated saliency map at each pixel. Our method was validated through subjective quality experiments.
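
    A minimal sketch of a saliency-weighted MSE is given below: the per-pixel squared error is weighted by a normalized saliency map so that errors in attended regions count more, and a weighted PSNR follows directly. This illustrates the idea only, not the authors' exact formulation or saliency model.

        import numpy as np

        def weighted_mse(reference, distorted, saliency):
            weights = saliency / saliency.sum()          # weights sum to 1
            err = reference.astype(float) - distorted.astype(float)
            return float((weights * err ** 2).sum())

        def weighted_psnr(reference, distorted, saliency, peak=255.0):
            return 10 * np.log10(peak ** 2 / weighted_mse(reference, distorted, saliency))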

  2. Comparative study of probability distribution distances to define a metric for the stability of multi-source biomedical research data.

    PubMed

    Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan Miguel

    2013-01-01

    Research biobanks are often composed of data from multiple sources. In some cases, these different subsets of data may present dissimilarities among their probability density functions (PDFs) due to spatial shifts. This may lead to wrong hypotheses when treating the data as a whole, and the overall quality of the data is diminished. With the purpose of developing a generic and comparable metric to assess the stability of multi-source datasets, we have studied the applicability and behaviour of several PDF distances over shifts under different conditions (such as uni- and multivariate data, different types of variables, and multi-modality) which may appear in real biomedical data. From the studied distances, we found information-theoretic distances and the Earth Mover's Distance to be the most practical for most conditions. We discuss the properties and usefulness of each distance according to the possible requirements of a general stability metric.
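
    Two of the distance families mentioned above can be compared on a pair of data sources with a few lines of SciPy: an information-theoretic distance computed on histogram estimates of the PDFs and the Earth Mover's (Wasserstein) distance computed on the samples directly. The data are synthetic placeholders.

        import numpy as np
        from scipy import stats
        from scipy.spatial.distance import jensenshannon

        rng = np.random.default_rng(2)
        source_a = rng.normal(0.0, 1.0, 1000)
        source_b = rng.normal(0.5, 1.2, 1000)     # shifted, rescaled second source

        bins = np.histogram_bin_edges(np.concatenate([source_a, source_b]), bins=50)
        p, _ = np.histogram(source_a, bins=bins, density=True)
        q, _ = np.histogram(source_b, bins=bins, density=True)

        print("Jensen-Shannon distance:", jensenshannon(p, q))
        print("Earth Mover's distance: ", stats.wasserstein_distance(source_a, source_b))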

  3. The carbohydrate-fat problem: can we construct a healthy diet based on dietary guidelines?

    PubMed

    Drewnowski, Adam

    2015-05-01

    The inclusion of nutrition economics in dietary guidance would help ensure that the Dietary Guidelines for Americans benefit equally all segments of the US population. The present review outlines some novel metrics of food affordability that assess nutrient density of foods and beverages in relation to cost. Socioeconomic disparities in diet quality in the United States are readily apparent. In general, groups of lower socioeconomic status consume cheaper, lower-quality diets and suffer from higher rates of noncommunicable diseases. Nutrient profiling models, initially developed to assess the nutrient density of foods, can be turned into econometric models that assess both calories and nutrients per reference amount and per unit cost. These novel metrics have been used to identify individual foods that were affordable, palatable, culturally acceptable, and nutrient rich. Not all nutrient-rich foods were expensive. In dietary surveys, both local and national, some high-quality diets were associated with relatively low cost. Those population subgroups that successfully adopted dietary guidelines at an unexpectedly low monetary cost were identified as "positive deviants." Constructing a healthy diet based on dietary guidelines can be done, provided that nutrient density of foods, their affordability, as well as taste and social norms are all taken into account. © 2015 American Society for Nutrition.

  4. Assessment of a novel multi-array normalization method based on spike-in control probes suitable for microRNA datasets with global decreases in expression.

    PubMed

    Sewer, Alain; Gubian, Sylvain; Kogel, Ulrike; Veljkovic, Emilija; Han, Wanjiang; Hengstermann, Arnd; Peitsch, Manuel C; Hoeng, Julia

    2014-05-17

    High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the "common reference design" and processed as "pseudo-single-channel". They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription-polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data suitable for reliable downstream analysis. The multi-array miRNA raw data normalization method was implemented in an R software package called ExiMiR and deposited in the Bioconductor repository.
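
    Two ingredients of the workflow described above can be sketched in a few lines: a between-array variability metric based on the coefficient of variation, and a spike-in rescaling that aligns arrays using control probes. This is a simplified illustration; the ExiMiR package's actual implementation differs in detail.

        import numpy as np

        def between_array_cv(expression):
            # expression: arrays x probes matrix; median per-probe CV across arrays.
            per_probe_cv = expression.std(axis=0, ddof=1) / expression.mean(axis=0)
            return float(np.median(per_probe_cv))

        def spikein_normalize(expression, spike_cols):
            # Scale each array so its mean spike-in intensity matches the grand mean.
            spike_level = expression[:, spike_cols].mean(axis=1)
            scale = spike_level.mean() / spike_level
            return expression * scale[:, None]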

  5. Assessment of a novel multi-array normalization method based on spike-in control probes suitable for microRNA datasets with global decreases in expression

    PubMed Central

    2014-01-01

    Background High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Results Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the “common reference design” and processed as “pseudo-single-channel”. They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription–polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. Conclusions Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data suitable for reliable downstream analysis. The multi-array miRNA raw data normalization method was implemented in an R software package called ExiMiR and deposited in the Bioconductor repository. PMID:24886675

  6. Geostatistical Prediction of Microbial Water Quality Throughout a Stream Network Using Meteorology, Land Cover, and Spatiotemporal Autocorrelation.

    PubMed

    Holcomb, David A; Messier, Kyle P; Serre, Marc L; Rowny, Jakob G; Stewart, Jill R

    2018-06-25

    Predictive modeling is promising as an inexpensive tool to assess water quality. We developed geostatistical predictive models of microbial water quality that empirically modeled spatiotemporal autocorrelation in measured fecal coliform (FC) bacteria concentrations to improve prediction. We compared five geostatistical models featuring different autocorrelation structures, fit to 676 observations from 19 locations in North Carolina's Jordan Lake watershed using meteorological and land cover predictor variables. Though stream distance metrics (with and without flow-weighting) failed to improve prediction over the Euclidean distance metric, incorporating temporal autocorrelation substantially improved prediction over the space-only models. We predicted FC throughout the stream network daily for one year, designating locations "impaired", "unimpaired", or "unassessed" if the probability of exceeding the state standard was ≥90%, ≤10%, or >10% but <90%, respectively. We could assign impairment status to more of the stream network on days any FC were measured, suggesting frequent sample-based monitoring remains necessary, though implementing spatiotemporal predictive models may reduce the number of concurrent sampling locations required to adequately assess water quality. Together, these results suggest that prioritizing sampling at different times and conditions using geographically sparse monitoring networks is adequate to build robust and informative geostatistical models of water quality impairment.
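
    The impairment-classification rule quoted above reduces to an exceedance probability and two cutoffs. Assuming a predictive mean and standard deviation on the log scale, that probability follows from the normal CDF, as sketched below; the numbers and the standard are placeholders, not the study's model output.

        import numpy as np
        from scipy.stats import norm

        def classify(pred_mean_log, pred_sd_log, standard=400.0):
            p_exceed = 1.0 - norm.cdf(np.log(standard), loc=pred_mean_log, scale=pred_sd_log)
            if p_exceed >= 0.90:
                return "impaired", p_exceed
            if p_exceed <= 0.10:
                return "unimpaired", p_exceed
            return "unassessed", p_exceed

        print(classify(pred_mean_log=np.log(350.0), pred_sd_log=0.4))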

  7. The Assignment of Scale to Object-Oriented Software Measures

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.; Weistroffer, H. Roland; Coppins, Richard J.

    1997-01-01

    In order to improve productivity (and quality), measurement of specific aspects of software has become imperative. As object oriented programming languages have become more widely used, metrics designed specifically for object-oriented software are required. Recently a large number of new metrics for object- oriented software has appeared in the literature. Unfortunately, many of these proposed metrics have not been validated to measure what they purport to measure. In this paper fifty (50) of these metrics are analyzed.

  8. Roles for specialty societies and vascular surgeons in accountable care organizations.

    PubMed

    Goodney, Philip P; Fisher, Elliott S; Cambria, Richard P

    2012-03-01

    With the passage of the Affordable Care Act, accountable care organizations (ACOs) represent a new paradigm in healthcare payment reform. Designed to limit growth in spending while preserving quality, these organizations aim to incent physicians to lower costs by returning a portion of the savings realized by cost-effective, evidence-based care back to the ACO. In this review, first, we will explore the development of ACOs within the context of prior attempts to control Medicare spending, such as the sustainable growth rate and managed care organizations. Second, we describe the evolution of ACOs, the demonstration projects that established their feasibility, and their current organizational structure. Third, because quality metrics are central to the use and implementation of ACOs, we describe current efforts to design, collect, and interpret quality metrics in vascular surgery. And fourth, because a "seat at the table" will be an important key to success for vascular surgeons in these efforts, we discuss how vascular surgeons can participate and lead efforts within ACOs. Copyright © 2012 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.

  9. A Comparison of Evaluation Metrics for Biomedical Journals, Articles, and Websites in Terms of Sensitivity to Topic

    PubMed Central

    Fu, Lawrence D.; Aphinyanaphongs, Yindalon; Wang, Lily; Aliferis, Constantin F.

    2011-01-01

    Evaluating the biomedical literature and health-related websites for quality are challenging information retrieval tasks. Current commonly used methods include impact factor for journals, PubMed’s clinical query filters and machine learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering the topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods for a variety of topics. Impact factor, clinical query filters, and PageRank vary widely across different topics while a topic-specific impact factor and machine learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average but struggle when used on a number of narrower topics. Topic adjusted metrics and other topic robust methods have an advantage in such situations. Users of traditional topic-sensitive metrics should be aware of their limitations. PMID:21419864

  10. Perceptual color difference metric including a CSF based on the perception threshold

    NASA Astrophysics Data System (ADS)

    Rosselli, Vincent; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2008-01-01

    The study of the Human Visual System (HVS) is of great interest for quantifying the quality of a picture, predicting which information will be perceived in it, and applying adapted tools. The Contrast Sensitivity Function (CSF) is one of the major ways to integrate HVS properties into an imaging system. It characterizes the sensitivity of the visual system to spatial and temporal frequencies and predicts the behavior for the three channels. Common constructions of the CSF have been performed by estimating the detection threshold beyond which it is possible to perceive a stimulus. In this work, we developed a novel approach for spatio-chromatic construction based on matching experiments to estimate the perception threshold. It consists of matching the contrast of a test stimulus with that of a reference one. The obtained results are quite different in comparison with the standard approaches, as the chromatic CSFs exhibit band-pass rather than low-pass behavior. The obtained model has been integrated into a perceptual color difference metric inspired by the s-CIELAB. The metric is then evaluated with both objective and subjective procedures.

  11. John F. Kennedy Space Center, Safety, Reliability, Maintainability and Quality Assurance, Survey and Audit Program

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This document is the product of the KSC Survey and Audit Working Group composed of civil service and contractor Safety, Reliability, and Quality Assurance (SR&QA) personnel. The program described herein provides standardized terminology, uniformity of survey and audit operations, and emphasizes process assessments rather than a program based solely on compliance. The program establishes minimum training requirements, adopts an auditor certification methodology, and includes survey and audit metrics for the audited organizations as well as the auditing organization.

  12. Application of furniture images selection based on neural network

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Gao, Wenwen; Wang, Ying

    2018-05-01

    In the construction of a furniture image database of 2 million images, to address the problem of low database quality, a combination of a CNN and a metric learning algorithm is proposed, which makes it possible to quickly and accurately remove duplicate and irrelevant samples from the furniture image database. This addresses the problems that existing image screening methods are complex, insufficiently accurate, and time-consuming. After the data quality is improved, the deep learning algorithm achieves excellent image matching performance in actual furniture retrieval applications.

  13. Image Quality Assessment Based on Local Linear Information and Distortion-Specific Compensation.

    PubMed

    Wang, Hanli; Fu, Jie; Lin, Weisi; Hu, Sudeng; Kuo, C-C Jay; Zuo, Lingxuan

    2016-12-14

    Image Quality Assessment (IQA) is a fundamental yet constantly developing task for computer vision and image processing. Most IQA evaluation mechanisms are based on the pertinence of subjective and objective estimation. Each image distortion type has its own property correlated with human perception. However, this intrinsic property may not be fully exploited by existing IQA methods. In this paper, we make two main contributions to the IQA field. First, a novel IQA method is developed based on a local linear model that examines the distortion between the reference and the distorted images for better alignment with human visual experience. Second, a distortion-specific compensation strategy is proposed to offset the negative effect on IQA modeling caused by different image distortion types. These score offsets are learned from several known distortion types. Furthermore, for an image with an unknown distortion type, a Convolutional Neural Network (CNN) based method is proposed to compute the score offset automatically. Finally, an integrated IQA metric is proposed by combining the aforementioned two ideas. Extensive experiments are performed to verify the proposed IQA metric, which demonstrate that the local linear model is useful in human perception modeling, especially for individual image distortion, and the overall IQA method outperforms several state-of-the-art IQA approaches.

  14. Responses of macroinvertebrate community metrics to a wastewater discharge in the Upper Blue River of Kansas and Missouri, USA

    USGS Publications Warehouse

    Poulton, Barry C.; Graham, Jennifer L.; Rasmussen, Teresa J.; Stone, Mandy L.

    2015-01-01

    The Blue River Main wastewater treatment facility (WWTF) discharges into the upper Blue River (725 km²) and was recently upgraded to implement biological nutrient removal. We measured biotic condition upstream and downstream of the discharge using the macroinvertebrate protocol developed for Kansas streams. We examined responses of 34 metrics to determine the best indicators for discriminating site differences and for predicting biological condition. Significant differences between sites upstream and downstream of the discharge were identified for 15 metrics in April and 12 metrics in August. Upstream biotic condition scores were significantly greater than scores at both downstream sites in April (p = 0.02), and in August the most downstream site was classified as non-biologically supporting. Thirteen EPT taxa (Ephemeroptera, Plecoptera, Trichoptera) considered intolerant of degraded stream quality were absent at one or both downstream sites. Increases in tolerance metrics and filtering macroinvertebrates, and a decline in the ratio of scrapers to filterers, all indicated effects of increased nutrient enrichment. Stepwise regressions identified several significant models containing a suite of metrics with low redundancy (R² = 0.90-0.99). Based on the rapid decline in biological condition downstream of the discharge, the level of nutrient removal resulting from the facility upgrade (10%-20%) was not enough to mitigate negative effects on macroinvertebrate communities.

  15. Assessing the link between coastal urbanization and the quality of nekton habitat in mangrove tidal tributaries

    USGS Publications Warehouse

    Krebs, Justin M.; Bell, Susan S.; McIvor, Carole C.

    2014-01-01

    To assess the potential influence of coastal development on habitat quality for estuarine nekton, we characterized body condition and reproduction for common nekton from tidal tributaries classified as undeveloped, industrial, urban or man-made (i.e., mosquito-control ditches). We then evaluated these metrics of nekton performance, along with several abundance-based metrics and community structure from a companion paper (Krebs et al. 2013) to determine which metrics best reflected variation in land-use and in-stream habitat among tributaries. Body condition was not significantly different among undeveloped, industrial, and man-made tidal tributaries for six of nine taxa; however, three of those taxa were in significantly better condition in urban compared to undeveloped tributaries. Palaemonetes shrimp were the only taxon in significantly poorer condition in urban tributaries. For Poecilia latipinna, there was no difference in body condition (length–weight) between undeveloped and urban tributaries, but energetic condition was significantly better in urban tributaries. Reproductive output was reduced for both P. latipinna (i.e., fecundity) and grass shrimp (i.e., very low densities, few ovigerous females) in urban tributaries; however a tradeoff between fecundity and offspring size confounded meaningful interpretation of reproduction among land-use classes for P. latipinna. Reproductive allotment by P. latipinna did not differ significantly among land-use classes. Canonical correspondence analysis differentiated urban and non-urban tributaries based on greater impervious surface, less natural mangrove shoreline, higher frequency of hypoxia and lower, more variable salinities in urban tributaries. These characteristics explained 36 % of the variation in nekton performance, including high densities of poeciliid fishes, greater energetic condition of sailfin mollies, and low densities of several common nekton and economically important taxa from urban tributaries. While variation among tributaries in our study can be largely explained by impervious surface beyond the shorelines of the tributary, variation in nekton metrics among non-urban tributaries was better explained by habitat factors within the tributary and along the shorelines. Our results support the paradigm that urban development in coastal areas has the potential to alter habitat quality in small tidal tributaries as reflected by variation in nekton performance among tributaries from representative land-use classes.

  16. Health workforce metrics pre- and post-2015: a stimulus to public policy and planning.

    PubMed

    Pozo-Martin, Francisco; Nove, Andrea; Lopes, Sofia Castro; Campbell, James; Buchan, James; Dussault, Gilles; Kunjumen, Teena; Cometto, Giorgio; Siyam, Amani

    2017-02-15

    Evidence-based health workforce policies are essential to ensure the provision of high-quality health services and to support the attainment of universal health coverage (UHC). This paper describes the main characteristics of available health workforce data for 74 of the 75 countries identified under the 'Countdown to 2015' initiative as accounting for more than 95% of the world's maternal, newborn and child deaths. It also discusses best practices in the development of health workforce metrics post-2015. Using available health workforce data from the Global Health Workforce Statistics database from the Global Health Observatory, we generated descriptive statistics to explore the current status, recent trends in the number of skilled health professionals (SHPs: physicians, nurses, midwives) per 10 000 population, and future requirements to achieve adequate levels of health care in the 74 countries. A rapid literature review was conducted to obtain an overview of the types of methods and the types of data sources used in human resources for health (HRH) studies. There are large intercountry and interregional differences in the density of SHPs to progress towards UHC in Countdown countries: a median of 10.2 per 10 000 population with range 1.6 to 142 per 10 000. Substantial efforts have been made in some countries to increase the availability of SHPs as shown by a positive average exponential growth rate (AEGR) in SHPs in 51% of Countdown countries for which there are data. Many of these countries will require large investments to achieve levels of workforce availability commensurate with UHC and the health-related sustainable development goals (SDGs). The availability, quality and comparability of global health workforce metrics remain limited. Most published workforce studies are descriptive, but more sophisticated needs-based workforce planning methods are being developed. There is a need for high-quality, comprehensive, interoperable sources of HRH data to support all policies towards UHC and the health-related SDGs. The recent WHO-led initiative of supporting countries in the development of National Health Workforce Accounts is a very promising move towards purposive health workforce metrics post-2015. Such data will allow more countries to apply the latest methods for health workforce planning.
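
    The average exponential growth rate (AEGR) used above to summarize workforce trends is the constant yearly rate that carries a density from its initial to its final value. A minimal sketch, with placeholder numbers rather than country data:

        import math

        def aegr(initial_density, final_density, years):
            return math.log(final_density / initial_density) / years

        density_2005, density_2015 = 8.0, 11.5      # SHPs per 10,000 population
        print(f"AEGR = {aegr(density_2005, density_2015, years=10):.3%} per year")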

  17. Requirement Metrics for Risk Identification

    NASA Technical Reports Server (NTRS)

    Hammer, Theodore; Huffman, Lenore; Wilson, William; Rosenberg, Linda; Hyatt, Lawrence

    1996-01-01

    The Software Assurance Technology Center (SATC) is part of the Office of Mission Assurance of the Goddard Space Flight Center (GSFC). The SATC's mission is to assist National Aeronautics and Space Administration (NASA) projects in improving the quality of the software they acquire or develop. The SATC's efforts are currently focused on the development and use of metric methodologies and tools that identify and assess risks associated with software performance and scheduled delivery. This starts at the requirements phase, where the SATC, in conjunction with software projects at GSFC and other NASA centers, is working to identify tools and metric methodologies to assist project managers in identifying and mitigating risks. This paper discusses requirement metrics currently being used at NASA in a collaborative effort between the SATC and the Quality Assurance Office at GSFC to utilize the information available through the application of requirements management tools.

  18. Radiology metrics for safe use and regulatory compliance with CT imaging

    NASA Astrophysics Data System (ADS)

    Paden, Robert; Pavlicek, William

    2018-03-01

    The MACRA Act creates a Merit-Based Payment System, with monitoring of patient exposure from CT providing one possible quality metric for meeting merit requirements. Quality metrics are also required by The Joint Commission, ACR, and CMS, as facilities are tasked to review CT irradiation events outside of expected ranges, review protocols for appropriateness, and validate parameters for low-dose lung cancer screening. In order to efficiently collect and analyze irradiation events and associated DICOM tags, all clinical CT devices were connected via DICOM to a parser which extracted dose-related information for storage in a database. Dose data from every exam are compared to the appropriate external standard for the exam type. AAPM-recommended CTDIvol values for head and torso, adult and pediatric, and coronary and perfusion exams are used for this study. CT doses outside the expected range were automatically formatted into a report for analysis and review documentation. CT technologist textual content, i.e., the reason for proceeding with an irradiation above the recommended threshold, is captured for inclusion in the follow-up reviews by physics staff. The use of a knowledge-based approach in labeling individual protocol and device settings is a practical solution resulting in efficient analysis and review. Manual methods would require approximately 150 person-hours for our facility, exclusive of travel time and independent of device availability. Use of this informatics tool yields a time savings of approximately 89%, including the low-dose CT comparison review and the low-dose lung cancer screening requirements set forth by CMS.
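
    As a rough illustration of the workflow described above (dose data parsed from DICOM into a database, compared against external reference values, and flagged for physics review), the following sketch checks CTDIvol values against per-exam-class thresholds. The record fields, threshold numbers, and function names are illustrative assumptions, not the authors' implementation or the actual AAPM reference values.

```python
# Minimal sketch of a dose-monitoring check, assuming each irradiation event has
# already been parsed from DICOM into a dict with an exam class and CTDIvol (mGy).
# The reference values below are placeholders, not the AAPM-recommended numbers.

REFERENCE_CTDIVOL = {          # hypothetical upper thresholds by exam class (mGy)
    "adult_head": 75.0,
    "adult_torso": 25.0,
    "pediatric_head": 35.0,
    "pediatric_torso": 15.0,
}

def flag_outliers(events):
    """Return events whose CTDIvol exceeds the reference for their exam class."""
    flagged = []
    for ev in events:
        limit = REFERENCE_CTDIVOL.get(ev["exam_class"])
        if limit is not None and ev["ctdi_vol"] > limit:
            flagged.append({**ev, "limit": limit, "reason": ev.get("tech_comment", "")})
    return flagged

if __name__ == "__main__":
    sample = [
        {"exam_class": "adult_head", "ctdi_vol": 82.1, "tech_comment": "large patient"},
        {"exam_class": "adult_torso", "ctdi_vol": 14.3},
    ]
    for row in flag_outliers(sample):
        print(f"{row['exam_class']}: {row['ctdi_vol']} mGy > {row['limit']} mGy ({row['reason']})")
```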

  19. Development of a mission-based funding model for undergraduate medical education: incorporation of quality.

    PubMed

    Stagnaro-Green, Alex; Roe, David; Soto-Greene, Maria; Joffe, Russell

    2008-01-01

    Increasing financial pressures, along with a desire to realign resources with institutional priorities, have resulted in the adoption of mission-based funding (MBF) at many medical schools. The lack of inclusion of quality and the time and expense of developing and implementing mission-based funding are major deficiencies in the models reported to date. In academic year 2002-2003, New Jersey Medical School developed a model that included both quantity and quality in the education metric and that was departmentally based. Eighty percent of the undergraduate medical education allocation was based on the quantity of undergraduate medical education taught by the department ($7.35 million), and 20% ($1.89 million) was allocated based on the quality of the education delivered. Quality determinations were made by the educational leadership based on student evaluations and departmental compliance with educational administrative requirements. Evolution of the model has included the development of a faculty oversight committee and the integration of peer evaluation in the determination of educational quality. Six departments had a documented increase in quality over time, and one department had a transient decrease in quality. The MBF model has been well accepted by chairs, educational leaders, and faculty and has been instrumental in enhancing the stature of education at our institution.

  20. An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks

    PubMed Central

    Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an adaptive handover prediction (AHP) scheme for seamless mobility based wireless networks. That is, the AHP scheme incorporates fuzzy logic with AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, mobile node relative direction towards the access points in the vicinity, and access point load, are collected and considered inputs of the fuzzy decision making system in order to select the best preferable AP around WLANs. The obtained handover decision which is based on the calculated quality cost using fuzzy inference system is also based on adaptable coefficients instead of fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of fuzzy inference system, which are collected from available WLANs are obtained adaptively. Accordingly, they are applied as statistical information to adjust or adapt the coefficients of membership functions. In addition, we propose an adjustable weight vector concept for input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently after knowing RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representatives of the prediction approaches. PMID:25574490

  1. An adaptive handover prediction scheme for seamless mobility based wireless networks.

    PubMed

    Sadiq, Ali Safa; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an adaptive handover prediction (AHP) scheme for seamless mobility based wireless networks. That is, the AHP scheme incorporates fuzzy logic with AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, mobile node relative direction towards the access points in the vicinity, and access point load, are collected and considered inputs of the fuzzy decision making system in order to select the best preferable AP around WLANs. The obtained handover decision which is based on the calculated quality cost using fuzzy inference system is also based on adaptable coefficients instead of fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of fuzzy inference system, which are collected from available WLANs are obtained adaptively. Accordingly, they are applied as statistical information to adjust or adapt the coefficients of membership functions. In addition, we propose an adjustable weight vector concept for input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently after knowing RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representatives of the prediction approaches.
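
    The adaptive idea described above (normalizing each selection metric by statistics gathered from the candidate access points, then combining the metrics with adjustable weights) can be sketched roughly as follows. This simplification replaces the fuzzy inference system with z-score normalization and a weighted sum; all field names, weights, and sample values are assumptions for illustration only.

```python
import statistics

def rank_access_points(candidates, weights):
    """Score candidate APs from normalized RSS, direction alignment, and load.

    candidates: list of dicts with 'name', 'rss' (dBm), 'direction' (cosine of the
    angle between the mobile node heading and the AP bearing, in [-1, 1]), and
    'load' (0-1). Higher load should lower the score, so its weight is applied to
    (1 - load). This is a simplification of the fuzzy decision system described
    in the abstract, not a reimplementation of it.
    """
    def zscores(values):
        mu, sigma = statistics.mean(values), statistics.pstdev(values) or 1.0
        return [(v - mu) / sigma for v in values]

    rss_z = zscores([c["rss"] for c in candidates])
    dir_z = zscores([c["direction"] for c in candidates])
    load_z = zscores([1.0 - c["load"] for c in candidates])

    scored = []
    for c, r, d, l in zip(candidates, rss_z, dir_z, load_z):
        score = weights["rss"] * r + weights["direction"] * d + weights["load"] * l
        scored.append((score, c["name"]))
    return sorted(scored, reverse=True)

aps = [
    {"name": "AP-1", "rss": -62, "direction": 0.9, "load": 0.7},
    {"name": "AP-2", "rss": -55, "direction": 0.2, "load": 0.4},
    {"name": "AP-3", "rss": -70, "direction": 0.8, "load": 0.1},
]
print(rank_access_points(aps, {"rss": 0.5, "direction": 0.3, "load": 0.2}))
```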

  2. Development of a clinician reputation metric to identify appropriate problem-medication pairs in a crowdsourced knowledge base.

    PubMed

    McCoy, Allison B; Wright, Adam; Rogith, Deevakar; Fathiamini, Safa; Ottenbacher, Allison J; Sittig, Dean F

    2014-04-01

    Correlation of data within electronic health records is necessary for implementation of various clinical decision support functions, including patient summarization. A key type of correlation is linking medications to clinical problems; while some databases of problem-medication links are available, they are not robust and depend on problems and medications being encoded in particular terminologies. Crowdsourcing represents one approach to generating robust knowledge bases across a variety of terminologies, but more sophisticated approaches are necessary to improve accuracy and reduce manual data review requirements. We sought to develop and evaluate a clinician reputation metric to facilitate the identification of appropriate problem-medication pairs through crowdsourcing without requiring extensive manual review. We retrieved medications from our clinical data warehouse that had been prescribed and manually linked to one or more problems by clinicians during e-prescribing between June 1, 2010 and May 31, 2011. We identified measures likely to be associated with the percentage of accurate problem-medication links made by clinicians. Using logistic regression, we created a metric for identifying clinicians who had made greater than or equal to 95% appropriate links. We evaluated the accuracy of the approach by comparing links made by those physicians identified as having appropriate links to a previously manually validated subset of problem-medication pairs. Of 867 clinicians who asserted a total of 237,748 problem-medication links during the study period, 125 had a reputation metric that predicted the percentage of appropriate links greater than or equal to 95%. These clinicians asserted a total of 2464 linked problem-medication pairs (983 distinct pairs). Compared to a previously validated set of problem-medication pairs, the reputation metric achieved a specificity of 99.5% and marginally improved the sensitivity of previously described knowledge bases. A reputation metric may be a valuable measure for identifying high quality clinician-entered, crowdsourced data. Copyright © 2013 Elsevier Inc. All rights reserved.
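
    A minimal sketch of the kind of reputation-metric workflow described above: fit a logistic regression that predicts whether a clinician's asserted links are at least 95% appropriate, then retain links from clinicians whose predicted probability is high. The features, synthetic data, and probability cutoff below are illustrative assumptions, not the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_clinicians = 200

# Hypothetical per-clinician features: total links asserted, distinct problems
# linked, and fraction of links matching an existing curated knowledge base.
X = np.column_stack([
    rng.poisson(250, n_clinicians),
    rng.poisson(40, n_clinicians),
    rng.uniform(0.5, 1.0, n_clinicians),
])
# Label from a manually reviewed sample: 1 if >= 95% of reviewed links were appropriate.
y = (X[:, 2] + rng.normal(0, 0.05, n_clinicians) > 0.85).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
prob_high_quality = model.predict_proba(X)[:, 1]
trusted = prob_high_quality >= 0.5          # illustrative cutoff
print(f"{trusted.sum()} of {n_clinicians} clinicians pass the reputation threshold")
```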

  3. Development of a clinician reputation metric to identify appropriate problem-medication pairs in a crowdsourced knowledge base

    PubMed Central

    McCoy, Allison B.; Wright, Adam; Rogith, Deevakar; Fathiamini, Safa; Ottenbacher, Allison J.; Sittig, Dean F.

    2014-01-01

    Background Correlation of data within electronic health records is necessary for implementation of various clinical decision support functions, including patient summarization. A key type of correlation is linking medications to clinical problems; while some databases of problem-medication links are available, they are not robust and depend on problems and medications being encoded in particular terminologies. Crowdsourcing represents one approach to generating robust knowledge bases across a variety of terminologies, but more sophisticated approaches are necessary to improve accuracy and reduce manual data review requirements. Objective We sought to develop and evaluate a clinician reputation metric to facilitate the identification of appropriate problem-medication pairs through crowdsourcing without requiring extensive manual review. Approach We retrieved medications from our clinical data warehouse that had been prescribed and manually linked to one or more problems by clinicians during e-prescribing between June 1, 2010 and May 31, 2011. We identified measures likely to be associated with the percentage of accurate problem-medication links made by clinicians. Using logistic regression, we created a metric for identifying clinicians who had made greater than or equal to 95% appropriate links. We evaluated the accuracy of the approach by comparing links made by those physicians identified as having appropriate links to a previously manually validated subset of problem-medication pairs. Results Of 867 clinicians who asserted a total of 237,748 problem-medication links during the study period, 125 had a reputation metric that predicted the percentage of appropriate links greater than or equal to 95%. These clinicians asserted a total of 2464 linked problem-medication pairs (983 distinct pairs). Compared to a previously validated set of problem-medication pairs, the reputation metric achieved a specificity of 99.5% and marginally improved the sensitivity of previously described knowledge bases. Conclusion A reputation metric may be a valuable measure for identifying high quality clinician-entered, crowdsourced data. PMID:24321170

  4. Autocalibrating motion-corrected wave-encoding for highly accelerated free-breathing abdominal MRI.

    PubMed

    Chen, Feiyu; Zhang, Tao; Cheng, Joseph Y; Shi, Xinwei; Pauly, John M; Vasanawala, Shreyas S

    2017-11-01

    To develop a motion-robust wave-encoding technique for highly accelerated free-breathing abdominal MRI. A comprehensive 3D wave-encoding-based method was developed to enable fast free-breathing abdominal imaging: (a) auto-calibration for wave-encoding was designed to avoid extra scan for coil sensitivity measurement; (b) intrinsic butterfly navigators were used to track respiratory motion; (c) variable-density sampling was included to enable compressed sensing; (d) golden-angle radial-Cartesian hybrid view-ordering was incorporated to improve motion robustness; and (e) localized rigid motion correction was combined with parallel imaging compressed sensing reconstruction to reconstruct the highly accelerated wave-encoded datasets. The proposed method was tested on six subjects and image quality was compared with standard accelerated Cartesian acquisition both with and without respiratory triggering. Inverse gradient entropy and normalized gradient squared metrics were calculated, testing whether image quality was improved using paired t-tests. For respiratory-triggered scans, wave-encoding significantly reduced residual aliasing and blurring compared with standard Cartesian acquisition (metrics suggesting P < 0.05). For non-respiratory-triggered scans, the proposed method yielded significantly better motion correction compared with standard motion-corrected Cartesian acquisition (metrics suggesting P < 0.01). The proposed methods can reduce motion artifacts and improve overall image quality of highly accelerated free-breathing abdominal MRI. Magn Reson Med 78:1757-1766, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
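
    The two image-quality metrics named above (inverse gradient entropy and normalized gradient squared) are not defined here, so the sketch below uses common formulations of gradient-based sharpness measures as stand-ins; these definitions are assumptions and may differ from the authors' exact metrics.

```python
import numpy as np

def gradient_metrics(image):
    """Compute two simple gradient-based sharpness metrics for a 2D image.

    Common formulations are used: gradient entropy over normalized gradient
    magnitudes, and the normalized gradient squared. Lower entropy / higher NGS
    generally indicate sharper (less motion-blurred) images.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    total = mag.sum()
    if total == 0:
        return {"gradient_entropy": 0.0, "normalized_gradient_squared": 0.0}
    p = mag / total
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    ngs = np.sum(p ** 2)
    return {"gradient_entropy": float(entropy), "normalized_gradient_squared": float(ngs)}

sharp = np.zeros((64, 64)); sharp[20:44, 20:44] = 1.0                  # crisp edges
blurred = np.convolve(sharp.ravel(), np.ones(9) / 9, mode="same").reshape(64, 64)
print(gradient_metrics(sharp), gradient_metrics(blurred))
```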

  5. Decomposition-based transfer distance metric learning for image classification.

    PubMed

    Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao

    2014-09-01

    Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. To this end, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.
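
    A toy sketch of the decomposition idea, assuming base metrics are built from eigenvectors of the source metrics and the combination weights are fit with an off-the-shelf sparse (Lasso) regression against a handful of target-task pairwise constraints; this is a simplification of the paper's optimization, and all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
d = 10

def random_psd(dim):
    A = rng.normal(size=(dim, dim))
    return A @ A.T / dim

# Base metrics: rank-one matrices u u^T from eigenvectors of two source metrics.
bases = []
for M_src in (random_psd(d), random_psd(d)):
    _, vecs = np.linalg.eigh(M_src)
    bases.extend(np.outer(v, v) for v in vecs.T)

# Limited side information on the target task: pairs labeled similar (target
# squared distance 0) or dissimilar (target squared distance 1).
pairs = [(rng.normal(size=d), rng.normal(size=d)) for _ in range(60)]
targets = rng.integers(0, 2, size=len(pairs)).astype(float)

# Each base metric contributes linearly to the squared distance of a pair.
A = np.array([[(x - y) @ B @ (x - y) for B in bases] for x, y in pairs])

fit = Lasso(alpha=0.05, positive=True, max_iter=10000).fit(A, targets)
w = fit.coef_
M_target = sum(w_k * B for w_k, B in zip(w, bases))   # learned target metric
print(f"{np.count_nonzero(w)} of {len(bases)} base metrics selected")
```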

  6. Evaluation of ride quality prediction methods for operational military helicopters

    NASA Technical Reports Server (NTRS)

    Leatherwood, J. D.; Clevenson, S. A.; Hollenbaugh, D. D.

    1984-01-01

    The results of a simulator study conducted to compare and validate various ride quality prediction methods for use in assessing passenger/crew ride comfort within helicopters are presented. Included are results quantifying 35 helicopter pilots' discomfort responses to helicopter interior noise and vibration typical of routine flights, assessment of various ride quality metrics including the NASA ride comfort model, and examination of possible criteria approaches. Results of the study indicated that crew discomfort results from a complex interaction between vibration and interior noise. Overall measures such as weighted or unweighted root-mean-square acceleration level and A-weighted noise level were not good predictors of discomfort. Accurate prediction required a metric incorporating the interactive effects of both noise and vibration. The best metric for predicting crew comfort to the combined noise and vibration environment was the NASA discomfort index.

  7. Universal health coverage in Rwanda: dream or reality.

    PubMed

    Nyandekwe, Médard; Nzayirambaho, Manassé; Baptiste Kakoma, Jean

    2014-01-01

    Universal Health Coverage (UHC) has long been a global concern and is even more so today. While a number of publications are almost unanimous that Rwanda is not far from UHC, very few have focused on its financial sustainability and on its extreme external financial dependency. The objectives of this study are: (i) to assess Rwanda's UHC, based mainly on Community-Based Health Insurance (CBHI), from 2000 to 2012; and (ii) to inform policy makers about observed gaps for a better way forward. A retrospective (2000-2012) SWOT analysis was applied to six metrics as key indicators of UHC achievement related to the WHO definition, i.e., (i) health insurance and access to care, (ii) equity, (iii) package of services, (iv) rights-based approach, (v) quality of health care, and (vi) financial-risk protection; a seventh metric, (vii) CBHI self-financing capacity (SFC), was added by the authors. With the first metric at 96.15% overall health insurance coverage and 1.07 visits per capita per year versus the 1 visit recommended by WHO, the second at 24.8% of indigent people subsidized versus 24.1% living in extreme poverty, the third, fourth, and fifth metrics performing excellently, the sixth at 10.80% versus the ≤40% acceptable limit for catastrophic health spending, and lastly the CBHI SFC, i.e., proper cost recovery, estimated at 82.55% in 2011/2012, Rwanda's UHC achievements are objectively convincing. Rwanda's UHC is not a dream but a reality if we consider all the convincing results from the seven metrics.

  8. Auralization of NASA N+2 Aircraft Concepts from System Noise Predictions

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Burley, Casey L.; Thomas, Russel H.

    2016-01-01

    Auralization of aircraft flyover noise provides an auditory experience that complements integrated metrics obtained from system noise predictions. Recent efforts have focused on auralization methods development, specifically the process by which source noise information obtained from semi-empirical models, computational aeroacoustic analyses, and wind tunnel and flight test data, are used for simulated flyover noise at a receiver on the ground. The primary focus of this work, however, is to develop full vehicle auralizations in order to explore the distinguishing features of NASA's N+2 aircraft vis-à-vis current fleet reference vehicles for single-aisle and large twin-aisle classes. Some features can be seen in metric time histories associated with aircraft noise certification, e.g., tone-corrected perceived noise level used in the calculation of effective perceived noise level. Other features can be observed in sound quality metrics, e.g., loudness, sharpness, roughness, fluctuation strength and tone-to-noise ratio. A psychoacoustic annoyance model is employed to establish the relationship between sound quality metrics and noise certification metrics. Finally, the auralizations will serve as the basis for a separate psychoacoustic study aimed at assessing how well aircraft noise certification metrics predict human annoyance for these advanced vehicle concepts.

  9. Analytical approaches used in stream benthic macroinvertebrate biomonitoring programs of State agencies in the United States

    USGS Publications Warehouse

    Carter, James L.; Resh, Vincent H.

    2013-01-01

    Biomonitoring programs based on benthic macroinvertebrates are well-established worldwide. Their value, however, depends on the appropriateness of the analytical techniques used. All United States State benthic macroinvertebrate biomonitoring programs were surveyed regarding the purposes of their programs, quality-assurance and quality-control procedures used, habitat and water-chemistry data collected, treatment of macroinvertebrate data prior to analysis, statistical methods used, and data-storage considerations. State regulatory mandates (59 percent of programs), biotic index development (17 percent), and Federal requirements (15 percent) were the most frequently reported purposes of State programs, with the specific tasks of satisfying the requirements for 305b/303d reports (89 percent), establishment and monitoring of total maximum daily loads, and developing biocriteria being the purposes most often mentioned. Most states establish reference sites (81 percent), but classify them using State-specific methods. The most often used technique for determining the appropriateness of a reference site was Best Professional Judgment (86 percent of these states). Macroinvertebrate samples are almost always collected by using a D-frame net, and duplicate samples are collected from approximately 10 percent of sites for quality assurance and quality control purposes. Most programs have macroinvertebrate samples processed by contractors (53 percent) and have identifications confirmed by a second taxonomist (85 percent). All States collect habitat data, with most using the Rapid Bioassessment Protocol visual-assessment approach, which requires ~1 h/site. Dissolved oxygen, pH, and conductivity are measured in more than 90 percent of programs. Wide variation exists in which taxa are excluded from analyses and the level of taxonomic resolution used. Species traits, such as functional feeding groups, are commonly used (96 percent), as are tolerance values for organic pollution (87 percent). Less often used are tolerance values for metals (28 percent). Benthic data are infrequently modified (34 percent) prior to analysis. Fixed-count subsampling is used widely (83 percent), with the number of organisms sorted ranging from 100 to 600 specimens. Most programs include a step during sample processing to acquire rare taxa (79 percent). Programs calculate from 2 to more than 100 different metrics (mean 20), and most formulate a multimetric index (87 percent). Eleven of the 112 metrics reported represent 50 percent of all metrics considered to be useful, and most of these are based on richness or percent composition. Biotic indices and tolerance metrics are most often used in the eastern U.S., and functional and habitat-type metrics are most often used in the western U.S. Sixty-nine percent of programs analyze their data in-house, typically performing correlations and regressions, and few use any form of data transformation (34 percent). Fifty-one percent of the programs use multivariate analyses, typically non-metric multi-dimensional scaling. All programs have electronic data storage. Most programs use the Integrated Taxonomic Information System (75 percent) for nomenclature and to update historical data (78 percent). State procedures represent a diversity of biomonitoring approaches, which likely compromises comparability among programs.
A national-state consensus is needed for: (1) developing methods for the identification of reference conditions and reference sites, (2) standardization in determining and reporting species richness, (3) testing and documenting both the theoretical and mechanistic basis of often-used metrics, (4) development of properly replicated point-source study designs, and (5) curation of benthic macroinvertebrate data, including reference and voucher collections, for successful evaluation of future environmental changes.
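
    A small sketch of how a multimetric index of the kind mentioned above might be assembled: score each metric against the distribution observed at reference sites and sum the scores. The 5/3/1 scoring and quartile cut points are a common convention used here only as an assumption; real programs calibrate scoring regionally.

```python
import numpy as np

def score_metric(value, ref_values, higher_is_better=True):
    """Score one metric as 5/3/1 against reference-site quartiles (illustrative)."""
    q25, q50, q75 = np.percentile(ref_values, [25, 50, 75])
    if higher_is_better:                 # e.g., EPT taxa richness
        if value >= q50:
            return 5
        return 3 if value >= q25 else 1
    else:                                # e.g., percent pollution-tolerant taxa
        if value <= q50:
            return 5
        return 3 if value <= q75 else 1

def multimetric_index(site_metrics, reference, directions):
    """Sum per-metric scores for one site into a single index value."""
    return sum(
        score_metric(site_metrics[m], reference[m], directions[m])
        for m in site_metrics
    )

reference = {"ept_richness": [12, 15, 18, 20, 22], "pct_tolerant": [5, 8, 10, 12, 20]}
directions = {"ept_richness": True, "pct_tolerant": False}
site = {"ept_richness": 9, "pct_tolerant": 25}
print(multimetric_index(site, reference, directions))   # low total suggests a degraded site
```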

  10. Two-stage atlas subset selection in multi-atlas based image segmentation.

    PubMed

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas-based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas-based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
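
    The two-stage selection idea can be sketched as below, with registration omitted and simple image correlation standing in for the preliminary and refined relevance metrics; the subset sizes and the correlation-based metrics are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images of equal shape."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def downsample(img, factor=4):
    return img[::factor, ::factor]

def select_atlases(target, atlases, augmented_size=20, fusion_size=5):
    # Stage 1: cheap preliminary relevance metric on low-resolution images.
    prelim = sorted(atlases, key=lambda a: ncc(downsample(target), downsample(a)),
                    reverse=True)[:augmented_size]
    # Stage 2: refined relevance metric (here: full-resolution NCC, which would
    # normally follow full-fledged registration) on the augmented subset only.
    return sorted(prelim, key=lambda a: ncc(target, a), reverse=True)[:fusion_size]

rng = np.random.default_rng(2)
target = rng.normal(size=(64, 64))
atlases = [target + rng.normal(scale=s, size=(64, 64)) for s in np.linspace(0.1, 2.0, 40)]
fusion_set = select_atlases(target, atlases)
print(len(fusion_set), "atlases retained for label fusion")
```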

  11. Software quality: Process or people

    NASA Technical Reports Server (NTRS)

    Palmer, Regina; Labaugh, Modenna

    1993-01-01

    This paper will present data related to software development processes and personnel involvement from the perspective of software quality assurance. We examine eight years of data collected from six projects. Data collected varied by project but usually included defect and fault density, with limited use of code metrics, schedule adherence, and budget growth information. The data are a blend of AFSCP 800-14 and suggested productivity measures in Software Metrics: A Practitioner's Guide to Improved Product Development. A software quality assurance database tool, SQUID, was used to store and tabulate the data.

  12. Evaluating Core Quality for a Mars Sample Return Mission

    NASA Technical Reports Server (NTRS)

    Weiss, D. K.; Budney, C.; Shiraishi, L.; Klein, K.

    2012-01-01

    Sample return missions, including the proposed Mars Sample Return (MSR) mission, propose to collect core samples from scientifically valuable sites on Mars. These core samples would undergo extreme forces during the drilling process, and during the reentry process if the EEV (Earth Entry Vehicle) performed a hard landing on Earth. Because of the foreseen damage to the stratigraphy of the cores, it is important to evaluate each core for rock quality. However, because no core sample return mission has yet been conducted to another planetary body, it remains unclear how to assess the cores for rock quality. In this report, we describe the development of a metric designed to quantitatively assess the mechanical quality of any rock cores returned from Mars (or other planetary bodies). We report on the process by which we tested the metric on core samples of Mars analogue materials, and the effectiveness of the core assessment metric (CAM) in assessing rock core quality before and after the cores were subjected to shock loading (g-forces representative of an EEV landing).

  13. Preliminary comparison of landscape pattern-normalized difference vegetation index (NDVI) relationships to central plains stream conditions

    USGS Publications Warehouse

    Griffith, J.A.; Martinko, E.A.; Whistler, J.L.; Price, K.P.

    2002-01-01

    We explored relationships of water quality parameters with landscape pattern metrics (LPMs), land use-land cover (LULC) proportions, and the advanced very high resolution radiometer (AVHRR) normalized difference vegetation index (NDVI) or NDVI-derived metrics. Stream sites (271) in Nebraska, Kansas, and Missouri were sampled for water quality parameters, the index of biotic integrity, and a habitat index in either 1994 or 1995. Although a combination of LPMs (interspersion and juxtaposition index, patch density, and percent forest) within Ozark Highlands watersheds explained >60% of the variation in levels of nitrite-nitrate nitrogen and conductivity, in most cases the LPMs were not significantly correlated with the stream data. Several problems using landscape pattern metrics were noted: small watersheds having only one or two patches, collinearity with LULC data, and counterintuitive or inconsistent results that resulted from basic differences in land use-land cover patterns among ecoregions or from other factors determining water quality. The amount of variation explained in water quality parameters using multiple regression models that combined LULC and LPMs was generally lower than that from NDVI or vegetation phenology metrics derived from time-series NDVI data. A comparison of LPMs and NDVI indicated that NDVI had greater promise for monitoring landscapes for stream conditions within the study area.
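
    For readers unfamiliar with the vegetation index, the sketch below computes NDVI from red and near-infrared reflectance and relates a watershed-mean NDVI value to a water-quality variable with a simple least-squares fit; the data and the regression setup are synthetic illustrations, not the study's analysis.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), masked where the denominator is zero."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    denom = nir + red
    return np.where(denom != 0, (nir - red) / np.where(denom == 0, 1, denom), np.nan)

print(ndvi([0.45, 0.50], [0.10, 0.30]))          # example pixel values

rng = np.random.default_rng(3)
watershed_ndvi = rng.uniform(0.2, 0.8, size=50)                # mean NDVI per watershed
nitrate = 4.0 - 3.5 * watershed_ndvi + rng.normal(0, 0.3, 50)  # hypothetical response

slope, intercept = np.polyfit(watershed_ndvi, nitrate, 1)
r = np.corrcoef(watershed_ndvi, nitrate)[0, 1]
print(f"nitrate ~ {intercept:.2f} + {slope:.2f} * NDVI, r^2 = {r**2:.2f}")
```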

  14. Preliminary comparison of landscape pattern-normalized difference vegetation index (NDVI) relationships to Central Plains stream conditions.

    PubMed

    Griffith, Jerry A; Martinko, Edward A; Whistler, Jerry L; Price, Kevin P

    2002-01-01

    We explored relationships of water quality parameters with landscape pattern metrics (LPMs), land use-land cover (LULC) proportions, and the advanced very high resolution radiometer (AVHRR) normalized difference vegetation index (NDVI) or NDVI-derived metrics. Stream sites (271) in Nebraska, Kansas, and Missouri were sampled for water quality parameters, the index of biotic integrity, and a habitat index in either 1994 or 1995. Although a combination of LPMs (interspersion and juxtaposition index, patch density, and percent forest) within Ozark Highlands watersheds explained >60% of the variation in levels of nitrite-nitrate nitrogen and conductivity, in most cases the LPMs were not significantly correlated with the stream data. Several problems using landscape pattern metrics were noted: small watersheds having only one or two patches, collinearity with LULC data, and counterintuitive or inconsistent results that resulted from basic differences in land use-land cover patterns among ecoregions or from other factors determining water quality. The amount of variation explained in water quality parameters using multiple regression models that combined LULC and LPMs was generally lower than that from NDVI or vegetation phenology metrics derived from time-series NDVI data. A comparison of LPMs and NDVI indicated that NDVI had greater promise for monitoring landscapes for stream conditions within the study area.

  15. Reuse Metrics for Object Oriented Software

    NASA Technical Reports Server (NTRS)

    Bieman, James M.

    1998-01-01

    One way to increase the quality of software products and the productivity of software development is to reuse existing software components when building new software systems. In order to monitor improvements in reuse, the level of reuse must be measured. In this NASA supported project we (1) derived a suite of metrics which quantify reuse attributes for object oriented, object based, and procedural software, (2) designed prototype tools to take these measurements in Ada, C++, Java, and C software, (3) evaluated the reuse in available software, (4) analyzed the relationship between coupling, cohesion, inheritance, and reuse, (5) collected object oriented software systems for our empirical analyses, and (6) developed quantitative criteria and methods for restructuring software to improve reusability.

  16. Health impact metrics for air pollution management strategies.

    PubMed

    Martenies, Sheena E; Wilkins, Donele; Batterman, Stuart A

    2015-12-01

    Health impact assessments (HIAs) inform policy and decision making by providing information regarding future health concerns, and quantitative HIAs now are being used for local and urban-scale projects. HIA results can be expressed using a variety of metrics that differ in meaningful ways, and guidance is lacking with respect to best practices for the development and use of HIA metrics. This study reviews HIA metrics pertaining to air quality management and presents evaluative criteria for their selection and use. These are illustrated in a case study where PM2.5 concentrations are lowered from 10 to 8 μg/m³ in an urban area of 1.8 million people. Health impact functions are used to estimate the number of premature deaths, unscheduled hospitalizations and other morbidity outcomes. The most common metric in recent quantitative HIAs has been the number of cases of adverse outcomes avoided. Other metrics include time-based measures, e.g., disability-adjusted life years (DALYs), monetized impacts, functional-unit-based measures, e.g., benefits per ton of emissions reduced, and other economic indicators, e.g., cost-benefit ratios. These metrics are evaluated by considering their comprehensiveness, the spatial and temporal resolution of the analysis, how equity considerations are facilitated, and the analysis and presentation of uncertainty. In the case study, the greatest number of avoided cases occurs for low-severity morbidity outcomes, e.g., asthma exacerbations (n=28,000) and minor restricted-activity days (n=37,000), while DALYs and monetized impacts are driven by the severity, duration and value assigned to a relatively low number of premature deaths (n=190 to 230 per year). The selection of appropriate metrics depends on the problem context and boundaries, the severity of impacts, and community values regarding health. The number of avoided cases provides an estimate of the number of people affected, and monetized impacts facilitate additional economic analyses useful to policy analysis. DALYs are commonly used as an aggregate measure of health impacts and can be used to compare impacts across studies. Benefits per ton metrics may be appropriate when changes in emissions rates can be estimated. To address community concerns and HIA objectives, a combination of metrics is suggested. Copyright © 2015 Elsevier Ltd. All rights reserved.
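
    A common form of health impact function used in air-quality HIAs (for example in tools such as BenMAP) is the log-linear relationship: avoided cases = baseline rate * (1 - exp(-beta * dC)) * population. The sketch below applies that form with placeholder coefficient, baseline-rate, and valuation figures; these are assumptions, not the values used in the case study.

```python
import math

def avoided_cases(beta, delta_conc, baseline_rate, population):
    """Cases avoided for a concentration reduction delta_conc (ug/m3)."""
    return baseline_rate * (1.0 - math.exp(-beta * delta_conc)) * population

population = 1_800_000
delta_pm25 = 2.0                       # reduction from 10 to 8 ug/m3
beta_mortality = 0.006                 # hypothetical coefficient per ug/m3
baseline_mortality_rate = 0.008        # hypothetical annual deaths per person

deaths_avoided = avoided_cases(beta_mortality, delta_pm25, baseline_mortality_rate, population)
monetized = deaths_avoided * 9_000_000   # hypothetical value of a statistical life, USD
print(f"{deaths_avoided:.0f} premature deaths avoided per year, ~${monetized / 1e9:.1f}B")
```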

  17. Estimation of Noise Properties for TV-regularized Image Reconstruction in Computed Tomography

    PubMed Central

    Sánchez, Adrian A.

    2016-01-01

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR. PMID:26308968

  18. Estimation of noise properties for TV-regularized image reconstruction in computed tomography.

    PubMed

    Sánchez, Adrian A

    2015-09-21

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.

  19. Estimation of noise properties for TV-regularized image reconstruction in computed tomography

    NASA Astrophysics Data System (ADS)

    Sánchez, Adrian A.

    2015-09-01

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128× 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.

  20. Value redefined for inflammatory bowel disease patients: a choice-based conjoint analysis of patients' preferences.

    PubMed

    van Deen, Welmoed K; Nguyen, Dominic; Duran, Natalie E; Kane, Ellen; van Oijen, Martijn G H; Hommes, Daniel W

    2017-02-01

    Value-based healthcare is an upcoming field. The core idea is to evaluate care based on achieved outcomes divided by the costs. Unfortunately, the optimal way to evaluate outcomes is ill-defined. In this study, we aim to develop a single, preference-based outcome metric that can be used to quantify overall health value in inflammatory bowel disease (IBD). IBD patients filled out a choice-based conjoint (CBC) questionnaire in which patients chose preferable outcome scenarios with different levels of disease control (DC), quality of life (QoL), and productivity (Pr). A CBC analysis was performed to estimate the relative value of DC, QoL, and Pr. A patient-centered composite score was developed that was weighted based on the stated preferences. We included 210 IBD patients. Large differences in stated preferences were observed. Increases from low to intermediate outcome levels were valued more than increases from intermediate to high outcome levels. Overall, QoL was more important to patients than DC or Pr. Individual outcome scores were calculated based on the stated preferences. This score was significantly different from a score not weighted based on patient preferences in patients with active disease. We showed the feasibility of creating a single outcome metric in IBD that incorporates patients' values using a CBC. Because this metric changes significantly when weighted according to patients' values, we propose that success in healthcare should be measured accordingly.
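
    One way such a preference-weighted composite can work is sketched below: each outcome domain level maps to a part-worth utility estimated from the conjoint analysis, and a patient's score is the normalized sum. The part-worth values and scaling here are made-up assumptions used only to illustrate the mechanics, including the pattern that low-to-intermediate gains are valued more than intermediate-to-high gains.

```python
PART_WORTHS = {
    # Hypothetical part-worth utilities by domain and level (not the study's estimates).
    "quality_of_life": {"low": 0.0, "intermediate": 0.55, "high": 0.75},
    "disease_control": {"low": 0.0, "intermediate": 0.40, "high": 0.55},
    "productivity":    {"low": 0.0, "intermediate": 0.30, "high": 0.40},
}

def composite_score(levels):
    """Map a patient's outcome levels to a 0-100 preference-weighted score."""
    total = sum(PART_WORTHS[domain][levels[domain]] for domain in PART_WORTHS)
    best = sum(max(v.values()) for v in PART_WORTHS.values())
    return 100.0 * total / best

print(composite_score({"quality_of_life": "high",
                       "disease_control": "intermediate",
                       "productivity": "low"}))
```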

  1. Family Medicine Panel Size with Care Teams: Impact on Quality.

    PubMed

    Angstman, Kurt B; Horn, Jennifer L; Bernard, Matthew E; Kresin, Molly M; Klavetter, Eric W; Maxson, Julie; Willis, Floyd B; Grover, Michael L; Bryan, Michael J; Thacher, Tom D

    2016-01-01

    The demand for comprehensive primary health care continues to expand. The development of team-based practice allows for improved capacity within a collective, collaborative environment. Our aim was to determine the relationship between panel size and access, quality, patient satisfaction, and cost in a large family medicine group practice using a team-based care model. Data were retrospectively collected from 36 family physicians and included total panel size of patients, percentage of time spent on patient care, cost of care, access metrics, diabetic quality metrics, patient satisfaction surveys, and patient care complexity scores. We used linear regression analysis to assess the relationship between adjusted physician panel size, panel complexity, and outcomes. Time to the third available appointment (P < .01) and diabetic quality (P = .03) were negatively affected by increased panel size. Patient satisfaction, cost, and percentage fill rate were not affected by panel size. A physician-adjusted panel size larger than the current mean (2959 patients) was associated with a greater likelihood of poor-quality rankings (≤25th percentile) compared with those with a less-than-average panel size (odds ratio [OR], 7.61; 95% confidence interval [CI], 1.13-51.46). Increased panel size was associated with a longer time to the third available appointment (OR, 10.9; 95% CI, 1.36-87.26) compared with physicians with panel sizes smaller than the mean. We demonstrated a negative impact of larger panel size on diabetic quality results and available appointment access. Evaluation of family medicine practice parameters while controlling for panel size and patient complexity may help determine the optimal panel size for a practice. © Copyright 2016 by the American Board of Family Medicine.

  2. Surveying ourselves: examining the use of a web-based approach for a physician survey.

    PubMed

    Matteson, Kristen A; Anderson, Britta L; Pinto, Stephanie B; Lopes, Vrishali; Schulkin, Jay; Clark, Melissa A

    2011-12-01

    A survey was distributed, using a sequential mixed-mode approach, to a national sample of obstetrician-gynecologists. Responses to the web-based mode and the on-paper mode were compared to determine whether there were systematic differences between respondents. Only two differences in respondents between the two modes were identified. University-based physicians were more likely to complete the web-based mode than private practice physicians. Mail respondents reported a greater volume of endometrial ablations compared to online respondents. The web-based mode had better data quality than the paper-based mailed mode in terms of fewer missing and inappropriate responses. Together, these findings suggest that, although a few differences were identified, the web-based survey mode attained adequate representativeness and improved data quality. Given the metrics examined for this study, exclusive use of web-based data collection may be appropriate for physician surveys with a minimal reduction in sample coverage and without a reduction in data quality.

  3. Development of Quality Metrics in Ambulatory Pediatric Cardiology.

    PubMed

    Chowdhury, Devyani; Gurvitz, Michelle; Marelli, Ariane; Anderson, Jeffrey; Baker-Smith, Carissa; Diab, Karim A; Edwards, Thomas C; Hougen, Tom; Jedeikin, Roy; Johnson, Jonathan N; Karpawich, Peter; Lai, Wyman; Lu, Jimmy C; Mitchell, Stephanie; Newburger, Jane W; Penny, Daniel J; Portman, Michael A; Satou, Gary; Teitel, David; Villafane, Juan; Williams, Roberta; Jenkins, Kathy

    2017-02-07

    The American College of Cardiology Adult Congenital and Pediatric Cardiology (ACPC) Section had attempted to create quality metrics (QM) for ambulatory pediatric practice, but limited evidence made the process difficult. The ACPC sought to develop QMs for ambulatory pediatric cardiology practice. Five areas of interest were identified, and QMs were developed in a 2-step review process. In the first step, an expert panel, using the modified RAND-UCLA methodology, rated each QM for feasibility and validity. The second step sought input from ACPC Section members; final approval was by a vote of the ACPC Council. Work groups proposed a total of 44 QMs. Thirty-one metrics passed the RAND process and, after the open comment period, the ACPC council approved 18 metrics. The project resulted in successful development of QMs in ambulatory pediatric cardiology for a range of ambulatory domains. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  4. International assessment on quality and content of internet information on osteoarthritis.

    PubMed

    Varady, N H; Dee, E C; Katz, J N

    2018-05-23

    Osteoarthritis is one of the leading causes of global disability. Numerous studies have assessed the quality and content of online health information; however, how information content varies between multiple countries remains unknown. The primary objective of this study was to examine how the quality and content of online health information on osteoarthritis compares on an international scale. Internet searches for the equivalent of "knee osteoarthritis treatment" were performed in ten countries around the world. For each country, the first ten websites were evaluated using a custom scoring form examining: website type; quality and reliability using the DISCERN and Health-on-the-Net (HON) frameworks; and treatment content based on three international osteoarthritis treatment guidelines. Consistency of search results between countries speaking the same language was also assessed. Significant differences in all scoring metrics existed between countries speaking different languages. Western countries scored higher than more eastern countries, there were no differences between the United States and Mexico in any of the scoring metrics, and HON certified websites were of higher quality and reliability. Searches in different countries speaking the same language had at least 70% overlap. The quality of online health information on knee osteoarthritis varies significantly between countries speaking different languages. Differential access to quality, accurate, and safe health information online may represent a novel but important health inequality. Future efforts are needed to translate online health resources into additional languages. In the interim, patients may seek websites that display the HON seal. Copyright © 2018. Published by Elsevier Ltd.

  5. Full-reference quality assessment of stereoscopic images by learning binocular receptive field properties.

    PubMed

    Shao, Feng; Li, Kemeng; Lin, Weisi; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2015-10-01

    Quality assessment of 3D images encounters more challenges than its 2D counterparts. Directly applying 2D image quality metrics is not the solution. In this paper, we propose a new full-reference quality assessment for stereoscopic images by learning binocular receptive field properties to be more in line with human visual perception. To be more specific, in the training phase, we learn a multiscale dictionary from the training database, so that the latent structure of images can be represented as a set of basis vectors. In the quality estimation phase, we compute sparse feature similarity index based on the estimated sparse coefficient vectors by considering their phase difference and amplitude difference, and compute global luminance similarity index by considering luminance changes. The final quality score is obtained by incorporating binocular combination based on sparse energy and sparse complexity. Experimental results on five public 3D image quality assessment databases demonstrate that in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment.

  6. Underwater video enhancement using multi-camera super-resolution

    NASA Astrophysics Data System (ADS)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

    Image spatial resolution is critical in several fields, such as medical, communications, satellite, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that enhances the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
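
    For reference, the two objective metrics named above can be computed as in the sketch below; PSNR follows the standard definition, while the SSIM shown is a single global (unwindowed) computation with the usual constants, which is a simplification of the windowed SSIM index.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test frame."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(ref, test, max_val=255.0):
    """Global (unwindowed) SSIM; a simplification of the standard windowed index."""
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(4)
frame = rng.integers(0, 256, size=(120, 160)).astype(float)
noisy = np.clip(frame + rng.normal(0, 10, frame.shape), 0, 255)
print(f"PSNR = {psnr(frame, noisy):.1f} dB, SSIM ~ {global_ssim(frame, noisy):.3f}")
```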

  7. Evaluation of ride quality prediction methods for helicopter interior noise and vibration environments

    NASA Technical Reports Server (NTRS)

    Leatherwood, J. D.; Clevenson, S. A.; Hollenbaugh, D. D.

    1984-01-01

    The results of a simulator study conducted to compare and validate various ride quality prediction methods for use in assessing passenger/crew ride comfort within helicopters are presented. Included are results quantifying 35 helicopter pilots' discomfort responses to helicopter interior noise and vibration typical of routine flights, assessment of various ride quality metrics including the NASA ride comfort model, and examination of possible criteria approaches. Results of the study indicated that crew discomfort results from a complex interaction between vibration and interior noise. Overall measures such as weighted or unweighted root-mean-square acceleration level and A-weighted noise level were not good predictors of discomfort. Accurate prediction required a metric incorporating the interactive effects of both noise and vibration. The best metric for predicting crew comfort to the combined noise and vibration environment was the NASA discomfort index.

  8. New Quality Metrics for Web Search Results

    NASA Astrophysics Data System (ADS)

    Metaxas, Panagiotis Takis; Ivanova, Lilia; Mustafaraj, Eni

    Web search results enjoy an increasing importance in our daily lives. But what can be said about their quality, especially when querying a controversial issue? The traditional information retrieval metrics of precision and recall do not provide much insight in the case of web information retrieval. In this paper we examine new ways of evaluating quality in search results: coverage and independence. We give examples on how these new metrics can be calculated and what their values reveal regarding the two major search engines, Google and Yahoo. We have found evidence of low coverage for commercial and medical controversial queries, and high coverage for a political query that is highly contested. Given the fact that search engines are unwilling to tune their search results manually, except in a few cases that have become the source of bad publicity, low coverage and independence reveal the efforts of dedicated groups to manipulate the search results.

  9. Split Bregman multicoil accelerated reconstruction technique: A new framework for rapid reconstruction of cardiac perfusion MRI

    PubMed Central

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward

    2016-01-01

    Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution-based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman-based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern, and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low-rank constraints. Results: Comparisons based on a blur metric and visual inspection showed that SMART images had lower blur and better texture as compared to the GD implementation. On average, the GD-based images had an ∼18% higher blur metric as compared to SMART images. Reconstruction of dynamic contrast-enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach to reconstruct good-quality multicoil images from undersampled DCE cardiac perfusion data rapidly. PMID:27036592

  10. Measuring the impact of pharmacoepidemiologic research using altmetrics: A case study of a CNODES drug-safety article.

    PubMed

    Gamble, J M; Traynor, Robyn L; Gruzd, Anatoliy; Mai, Philip; Dormuth, Colin R; Sketris, Ingrid S

    2018-03-24

    To provide an overview of altmetrics, including their potential benefits and limitations, how they may be obtained, and their role in assessing pharmacoepidemiologic research impact. Our review was informed by compiling relevant literature identified through searching multiple health research databases (PubMed, Embase, and CINAHL) and grey literature sources (websites, blogs, and reports). We demonstrate how pharmacoepidemiologists, in particular, may use altmetrics to understand scholarly impact and knowledge translation by providing a case study of a drug-safety study conducted by the Canadian Network for Observational Drug Effect Studies. A common approach to measuring research impact is the use of citation-based metrics, such as an article's citation count or a journal's impact factor. "Alternative" metrics, or altmetrics, are increasingly supported as a complementary measure of research uptake in the age of social media. Altmetrics are nontraditional indicators that capture a diverse set of traceable, online research-related artifacts including peer-reviewed publications and other research outputs (software, datasets, blogs, videos, posters, policy documents, presentations, social media posts, wiki entries, etc.). Compared with traditional citation-based metrics, altmetrics take a more holistic view of research impact, attempting to capture the activity and engagement of both scholarly and nonscholarly communities. Despite the limited theoretical underpinnings, possible commercial influence, potential for gaming and manipulation, and numerous data quality-related issues, altmetrics are promising as a supplement to more traditional citation-based metrics because they can ingest and process a larger set of data points related to the flow and reach of scholarly communication from an expanded pool of stakeholders. Unlike citation-based metrics, altmetrics are not inherently rooted in the research publication process, which includes peer review; it is unclear to what extent they should be used for research evaluation. © 2018 The Authors. Pharmacoepidemiology and Drug Safety. Published by John Wiley & Sons, Ltd.

  11. SU-E-T-245: Detection of the Photon Target Damage in Varian Linac Based On Periodical Quality Assurance Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, S; Balter, P; Wang, X

    2015-06-15

    Purpose: To determine the best dosimetric metric measured by our routine QA devices for diagnosing photon target failure on a Varian C-series linac. Methods: We have retrospectively reviewed and analyzed the dosimetry data from a Varian linac with a target degradation that was undiagnosed for one year. A failure in the daily QA symmetry tests was the first indication of an issue. The beam was steered back to a symmetric shape and water scans indicated the beam energy had changed but stayed within the manufacturer’s specifications and agreed reasonably with our treatment planning system data. After the problem was identified and the target was replaced, we retrospectively analyzed our QA data including diagonals normalized flatness (F-DN) from the daily device (DQA3), profiles from an ionization chamber array (IC Profiler), as well as profiles and PDDs from a 3D water Scanner (3DS). These metrics were cross-compared to determine which was the best early indicator of target degradation. Results: A 3% change in F-DN measured by the DQA3 was found to be an early indicator of target degradation. It is more sensitive than changes in output, symmetry, flatness or PDD. All beam shape metrics (flatness at dmax and 10 cm depth, and F-DN) indicated an energy increase while the PDD indicated an energy decrease. This disagreement between the beam-shape based energy metrics (F-DN and flatness) and the PDD based energy metric may indicate target failure as opposed to an energy change resulting from changes in the incident electron energy. Conclusion: Photon target degradation has been identified as a failure mode for linacs. The manufacturer’s test for this condition is highly invasive and requires machine down time. We have demonstrated that the condition could be caught early based upon data acquired during routine QA activities, such as the F-DN.
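
    A diagonal-normalized flatness value can be trended against a baseline to flag the roughly 3% change the authors report as an early warning sign. The sketch below assumes a 1D diagonal profile sampled by a daily QA device; the flatness definition over the central 80% of the field and the 3% threshold are illustrative of the approach, not the vendor's exact formula.

```python
import numpy as np

def diagonal_flatness(profile, central_fraction=0.8):
    """Flatness of a diagonal beam profile, in percent.

    profile : 1D array of normalized dose along a diagonal.
    Uses a common (max - min) / (max + min) definition over the central
    portion of the field (an assumed, not vendor-specific, form).
    """
    n = len(profile)
    lo = int(n * (1 - central_fraction) / 2)
    core = profile[lo:n - lo]
    return 100.0 * (core.max() - core.min()) / (core.max() + core.min())

def target_degradation_alert(baseline_fdn, current_fdn, threshold_pct=3.0):
    """Flag a relative F-DN change larger than the reported ~3% indicator."""
    change = 100.0 * abs(current_fdn - baseline_fdn) / baseline_fdn
    return change > threshold_pct, change
```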

  12. A Risk-based Assessment And Management Framework For Multipollutant Air Quality

    PubMed Central

    Frey, H. Christopher; Hubbell, Bryan

    2010-01-01

    The National Research Council recommended both a risk- and performance-based multipollutant approach to air quality management. Specifically, it recommended that management decisions be based on minimizing the exposure to, and risk of adverse effects from, multiple sources of air pollution, and that the success of these decisions be measured by how well they achieve this objective. We briefly describe risk analysis and its application within the current approach to air quality management. Recommendations are made as to how current practice could evolve to support a fully risk- and performance-based multipollutant air quality management system. The ability to implement a risk assessment framework in a credible and policy-relevant manner depends on the availability of component models and data which are scientifically sound and developed with an understanding of their application in integrated assessments. The same can be said about accountability assessments used to evaluate the outcomes of decisions made using such frameworks. The existing risk analysis framework, although typically applied to individual pollutants, is conceptually well suited for analyzing multipollutant management actions. Many elements of this framework, such as emissions and air quality modeling, already exist with multipollutant characteristics. However, the framework needs to be supported with information on exposure and concentration response relationships that result from multipollutant health studies. Because the causal chain that links management actions to emission reductions, air quality improvements, exposure reductions and health outcomes is parallel between prospective risk analyses and retrospective accountability assessments, both types of assessment should be placed within a single framework with common metrics and indicators where possible. Improvements in risk reductions can be obtained by adopting a multipollutant risk analysis framework within the current air quality management system, e.g., one focused on standards for individual pollutants and with separate goals for air toxics and ambient pollutants. However, additional improvements may be possible if goals and actions are defined in terms of risk metrics that are comparable across criteria pollutants and air toxics (hazardous air pollutants), and that encompass both human health and ecological risks. PMID:21209847

  13. On using multiple routing metrics with destination sequenced distance vector protocol for MultiHop wireless ad hoc networks

    NASA Astrophysics Data System (ADS)

    Mehic, M.; Fazio, P.; Voznak, M.; Partila, P.; Komosny, D.; Tovarek, J.; Chmelikova, Z.

    2016-05-01

    A mobile ad hoc network is a collection of mobile nodes which communicate without a fixed backbone or centralized infrastructure. Due to the frequent mobility of nodes, routes connecting two distant nodes may change. Therefore, it is not possible to establish a priori fixed paths for message delivery through the network. Because of its importance, routing is the most studied problem in mobile ad hoc networks. In addition, if Quality of Service (QoS) is demanded, one must guarantee the QoS not only over a single hop but over an entire wireless multi-hop path, which may not be a trivial task. In turn, this requires the propagation of QoS information within the network. The key to the support of QoS reporting is QoS routing, which provides path QoS information at each source. To support QoS for real-time traffic, one needs to know not only the minimum delay on the path to the destination but also the bandwidth available on it. Throughput, end-to-end delay, and routing overhead are traditional performance metrics used to evaluate the performance of a routing protocol. To obtain additional information about the link, most link-quality metrics are based on calculating link loss probabilities from broadcast probe packets. In this paper, we address the problem of including multiple routing metrics in existing routing packets that are broadcast through the network. We evaluate the efficiency of this approach with a modified version of the DSDV routing protocol in the ns-3 simulator.
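
    Probe-based link-quality metrics of the kind described here typically estimate per-link loss probability from the fraction of broadcast probes received in a sliding window, and the resulting values can then be carried alongside the usual hop count in the periodic route updates. The sketch below is a generic illustration of that bookkeeping, not the authors' ns-3 implementation; the class, field names, and cost weights are assumptions.

```python
from collections import deque

class LinkQualityEstimator:
    """Estimate link loss probability from periodically broadcast probes."""

    def __init__(self, window=10):
        self.window = window
        self.received = {}  # neighbor -> deque of 0/1 probe outcomes

    def record_probe(self, neighbor, got_probe):
        hist = self.received.setdefault(neighbor, deque(maxlen=self.window))
        hist.append(1 if got_probe else 0)

    def loss_probability(self, neighbor):
        hist = self.received.get(neighbor)
        if not hist:
            return 1.0  # nothing heard yet: assume the worst
        return 1.0 - sum(hist) / len(hist)

def route_cost(hop_count, loss_probs, delay_ms,
               w_hops=1.0, w_loss=5.0, w_delay=0.01):
    """Combine several routing metrics into a single comparable cost.

    The weighted combination is purely illustrative; the paper embeds the
    extra metrics in DSDV update packets rather than prescribing weights.
    """
    path_delivery = 1.0
    for p in loss_probs:
        path_delivery *= (1.0 - p)
    return w_hops * hop_count + w_loss * (1.0 - path_delivery) + w_delay * delay_ms
```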

  14. ICU director data: using data to assess value, inform local change, and relate to the external world.

    PubMed

    Murphy, David J; Ogbu, Ogbonna C; Coopersmith, Craig M

    2015-04-01

    Improving value within critical care remains a priority because it represents a significant portion of health-care spending, faces high rates of adverse events, and inconsistently delivers evidence-based practices. ICU directors are increasingly required to understand all aspects of the value provided by their units to inform local improvement efforts and relate effectively to external parties. A clear understanding of the overall process of measuring quality and value as well as the strengths, limitations, and potential application of individual metrics is critical to supporting this charge. In this review, we provide a conceptual framework for understanding value metrics, describe an approach to developing a value measurement program, and summarize common metrics to characterize ICU value. We first summarize how ICU value can be represented as a function of outcomes and costs. We expand this equation and relate it to both the classic structure-process-outcome framework for quality assessment and the Institute of Medicine's six aims of health care. We then describe how ICU leaders can develop their own value measurement process by identifying target areas, selecting appropriate measures, acquiring the necessary data, analyzing the data, and disseminating the findings. Within this measurement process, we summarize common metrics that can be used to characterize ICU value. As health care, in general, and critical care, in particular, changes and data become more available, it is increasingly important for ICU leaders to understand how to effectively acquire, evaluate, and apply data to improve the value of care provided to patients.

  15. Effect of Pupil Size on Wavefront Refraction during Orthokeratology.

    PubMed

    Faria-Ribeiro, Miguel; Navarro, Rafael; González-Méijome, José Manuel

    2016-11-01

    It has been hypothesized that central and peripheral refraction, in eyes treated with myopic overnight orthokeratology, might vary with changes in pupil diameter. The aim of this work was to evaluate the axial and peripheral refraction and optical quality after orthokeratology, using ray tracing software for different pupil sizes. Zemax-EE was used to generate a series of 29 semi-customized model eyes based on the corneal topography changes from 29 patients who had undergone myopic orthokeratology. Wavefront refraction in the central 80 degrees of the visual field was calculated using three different quality metric criteria: paraxial curvature matching, minimum root mean square error (minRMS), and the Through Focus Visual Strehl of the Modulation Transfer Function (VSMTF), for 3- and 6-mm pupil diameters. The three metrics predicted significantly different values for foveal and peripheral refractions. Compared with the paraxial criterion, the other two metrics predicted more myopic refractions on- and off-axis. Interestingly, the VSMTF predicts only a marginal myopic shift in the axial refraction as the pupil changes from 3 to 6 mm. For peripheral refraction, the minRMS and VSMTF metric criteria predicted a higher exposure to peripheral defocus as the pupil increases from 3 to 6 mm. The results suggest that the supposed effect of myopic control produced by ortho-k treatments might be dependent on pupil size. Although the foveal refractive error does not seem to change appreciably with the increase in pupil diameter (VSMTF criteria), the high levels of positive spherical aberration will lead to a degradation of lower spatial frequencies, which is more significant under low illumination levels.

  16. 46 CFR 298.11 - Vessel requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... with accepted commercial experience and practice. (g) Metric Usage. Our preferred system of measurement and weights for Vessels and Shipyard Projects is the metric system. ...), classification societies to be ISO 9000 series registered or Quality Systems Certificate Scheme qualified IACS...

  17. 46 CFR 298.11 - Vessel requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... with accepted commercial experience and practice. (g) Metric Usage. Our preferred system of measurement and weights for Vessels and Shipyard Projects is the metric system. ...), classification societies to be ISO 9000 series registered or Quality Systems Certificate Scheme qualified IACS...

  18. 46 CFR 298.11 - Vessel requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... with accepted commercial experience and practice. (g) Metric Usage. Our preferred system of measurement and weights for Vessels and Shipyard Projects is the metric system. ...), classification societies to be ISO 9000 series registered or Quality Systems Certificate Scheme qualified IACS...

  19. 46 CFR 298.11 - Vessel requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... with accepted commercial experience and practice. (g) Metric Usage. Our preferred system of measurement and weights for Vessels and Shipyard Projects is the metric system. ...), classification societies to be ISO 9000 series registered or Quality Systems Certificate Scheme qualified IACS...

  20. 46 CFR 298.11 - Vessel requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... with accepted commercial experience and practice. (g) Metric Usage. Our preferred system of measurement and weights for Vessels and Shipyard Projects is the metric system. ...), classification societies to be ISO 9000 series registered or Quality Systems Certificate Scheme qualified IACS...

  1. Neuropsychologic assessment of a population-based sample of Gulf War veterans.

    PubMed

    Wallin, Mitchell T; Wilken, Jeffrey; Alfaro, Mercedes H; Rogers, Catherine; Mahan, Clare; Chapman, Julie C; Fratto, Timothy; Sullivan, Cynthia; Kang, Han; Kane, Robert

    2009-09-01

    The objective of this project was to compare neuropsychologic performance and quality of life in a population-based sample of deployed Gulf War (GW) veterans with and without multisymptom complaints. The study participants were obtained from the 30,000-member population-based National Health Survey of GW-era veterans conducted in 1995. Cases (N=25) were deployed to the 1990-1991 GW and met Centers for Disease Control and Prevention (CDC) criteria for multisymptom GW illness (GWI). Controls (N=16) were deployed to the 1990-1991 GW but did not meet CDC criteria for GWI. There were no significant differences in composite scores on the traditional and computerized neuropsychologic battery (automated neuropsychologic assessment metrics) between GW cases and controls using bivariate techniques. Multiple linear regression analyses controlling for demographic and clinical variables revealed that composite automated neuropsychologic assessment metrics scores were associated with age (b=-7.8; P=0.084) and education (b=22.9; P=0.0012), but not GW case or control status (b=-63.9; P=0.22). Compared with controls, GW cases had significantly more impairment on the Personality Assessment Inventory and the Short Form-36 (SF-36). Compared with GW controls, GW cases meeting criteria for GWI had preserved cognitive function but had significant psychiatric symptoms and lower quality of life.

  2. Possible causes of data model discrepancy in the temperature history of the last Millennium.

    PubMed

    Neukom, Raphael; Schurer, Andrew P; Steiger, Nathan J; Hegerl, Gabriele C

    2018-05-15

    Model simulations and proxy-based reconstructions are the main tools for quantifying pre-instrumental climate variations. For some metrics such as Northern Hemisphere mean temperatures, there is remarkable agreement between models and reconstructions. For other diagnostics, such as the regional response to volcanic eruptions, or hemispheric temperature differences, substantial disagreements between data and models have been reported. Here, we assess the potential sources of these discrepancies by comparing 1000-year hemispheric temperature reconstructions based on real-world paleoclimate proxies with climate-model-based pseudoproxies. These pseudoproxy experiments (PPE) indicate that noise inherent in proxy records and the unequal spatial distribution of proxy data are the key factors in explaining the data-model differences. For example, lower inter-hemispheric correlations in reconstructions can be fully accounted for by these factors in the PPE. Noise and data sampling also partly explain the reduced amplitude of the response to external forcing in reconstructions compared to models. For other metrics, such as inter-hemispheric differences, some, although reduced, discrepancy remains. Our results suggest that improving proxy data quality and spatial coverage is the key factor to increase the quality of future climate reconstructions, while the total number of proxy records and reconstruction methodology play a smaller role.

  3. Influence of exposure assessment and parameterization on exposure response. Aspects of epidemiologic cohort analysis using the Libby Amphibole asbestos worker cohort.

    PubMed

    Bateson, Thomas F; Kopylev, Leonid

    2015-01-01

    Recent meta-analyses of occupational epidemiology studies identified two important exposure data quality factors in predicting summary effect measures for asbestos-associated lung cancer mortality risk: sufficiency of job history data and percent coverage of work history by measured exposures. The objective was to evaluate different exposure parameterizations suggested in the asbestos literature using the Libby, MT asbestos worker cohort and to evaluate influences of exposure measurement error caused by historically estimated exposure data on lung cancer risks. Focusing on workers hired after 1959, when job histories were well-known and occupational exposures were predominantly based on measured exposures (85% coverage), we found that cumulative exposure alone, and with allowance of exponential decay, fit lung cancer mortality data similarly. Residence-time-weighted metrics did not fit well. Compared with previous analyses based on the whole cohort of Libby workers hired after 1935, when job histories were less well-known and exposures less frequently measured (47% coverage), our analyses based on higher quality exposure data yielded an effect size as much as 3.6 times higher. Future occupational cohort studies should continue to refine retrospective exposure assessment methods, consider multiple exposure metrics, and explore new methods of maintaining statistical power while minimizing exposure measurement error.

  4. [QUALITY MEASURES IN MEDICINE-- A PLEA FOR NEW, VALUE BASED THINKING].

    PubMed

    Fisher, Menachem; Wagner, Oded; Keinarl, Talia; Solt, Ido

    2015-09-01

    Quality is an important and basic conduct of complex systems in general and health systems in particular. Quality is a cornerstone of medicine, necessary in the eyes of the community of consumers, caregivers, and the systems that manage both. In Israel, the Ministry of Health has set the quality issue on the agenda of healthcare organizations in all existing frameworks. In this article we seek to offer an alternative perspective for examining the quality of public health care. We suggest highlighting the ethical aspect of medical care, while reducing the quantitative monitoring component of existing quality metrics. Relying solely on indices has negative effects that might cause damage. The proposed alternative focuses on the personal responsibility of health care providers, using values and moral reasoning.

  5. Naturalness preservation image contrast enhancement via histogram modification

    NASA Astrophysics Data System (ADS)

    Tian, Qi-Chong; Cohen, Laurent D.

    2018-04-01

    Contrast enhancement is a technique for enhancing image contrast to obtain better visual quality. Since many existing contrast enhancement algorithms tend to produce over-enhanced results, naturalness preservation needs to be considered in the framework of image contrast enhancement. This paper proposes a naturalness-preserving contrast enhancement method, which adopts histogram matching to improve contrast and uses image quality assessment to automatically select the optimal target histogram. Contrast improvement and naturalness preservation are both considered in the target histogram, so this method can avoid the over-enhancement problem. In the proposed method, the optimal target histogram is a weighted sum of the original histogram, the uniform histogram, and the Gaussian-shaped histogram. Then the structural metric and the statistical naturalness metric are used to determine the weights of the corresponding histograms. Finally, the contrast-enhanced image is obtained by matching the optimal target histogram. The experiments demonstrate that the proposed method outperforms the compared histogram-based contrast enhancement algorithms.
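
    The core operation is histogram matching against a target histogram built as a weighted sum of the original, uniform, and Gaussian-shaped histograms. A minimal grayscale sketch is given below; in the paper the weights are chosen by the image-quality metrics described in the abstract, whereas the fixed weights `w` and the Gaussian width used here are placeholder assumptions.

```python
import numpy as np

def weighted_target_histogram(image, w=(0.4, 0.3, 0.3), sigma=40.0):
    """Target histogram = weighted sum of original, uniform, Gaussian-shaped."""
    orig, _ = np.histogram(image, bins=256, range=(0, 256))
    orig = orig / orig.sum()
    uniform = np.full(256, 1.0 / 256)
    levels = np.arange(256)
    gauss = np.exp(-(levels - 128.0) ** 2 / (2 * sigma ** 2))
    gauss /= gauss.sum()
    return w[0] * orig + w[1] * uniform + w[2] * gauss

def match_histogram(image, target_hist):
    """Map gray levels so the image histogram follows target_hist (256 bins)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    src_cdf = np.cumsum(hist) / hist.sum()
    tgt_cdf = np.cumsum(target_hist) / target_hist.sum()
    # Monotone gray-level mapping via inverse-CDF lookup
    lut = np.clip(np.searchsorted(tgt_cdf, src_cdf), 0, 255)
    return lut[image.astype(np.uint8)].astype(np.uint8)
```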

  6. qDIET: toward an automated, self-sustaining knowledge base to facilitate linking point-of-sale grocery items to nutritional content

    PubMed Central

    Chidambaram, Valliammai; Brewster, Philip J.; Jordan, Kristine C.; Hurdle, John F.

    2013-01-01

    The United States, indeed the world, struggles with a serious obesity epidemic. The costs of this epidemic in terms of healthcare dollar expenditures and human morbidity/mortality are staggering. Surprisingly, clinicians are ill-equipped in general to advise patients on effective, longitudinal weight loss strategies. We argue that one factor hindering clinicians and patients in effective shared decision-making about weight loss is the absence of a metric that can be reasoned about and monitored over time, as clinicians do routinely with, say, serum lipid levels or HbA1c. We propose that a dietary quality measure championed by the USDA and NCI, the HEI-2005/2010, is an ideal metric for this purpose. We describe a new tool, the quality Dietary Information Extraction Tool (qDIET), which is a step toward an automated, self-sustaining process that can link retail grocery purchase data to the appropriate USDA databases to permit the calculation of the HEI-2005/2010. PMID:24551333

  7. Investigation of iterative image reconstruction in low-dose breast CT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Yang, Kai; Boone, John M.; Han, Xiao; Sidky, Emil Y.; Pan, Xiaochuan

    2014-06-01

    There is interest in developing computed tomography (CT) dedicated to breast-cancer imaging. Because breast tissues are radiation-sensitive, the total radiation exposure in a breast-CT scan is kept low, often comparable to a typical two-view mammography exam, thus resulting in a challenging low-dose-data-reconstruction problem. In recent years, evidence has been found suggesting that iterative reconstruction may yield images of improved quality from low-dose data. In this work, based upon the constrained image total-variation minimization program and its numerical solver, i.e., adaptive steepest descent-projection onto convex sets (ASD-POCS), we investigate and evaluate iterative image reconstructions from low-dose breast-CT data of patients, with a focus on identifying and determining key reconstruction parameters, devising surrogate utility metrics for characterizing reconstruction quality, and tailoring the program and ASD-POCS to the specific reconstruction task under consideration. The ASD-POCS reconstructions appear to outperform the corresponding clinical FDK reconstructions, in terms of subjective visualization and surrogate utility metrics.
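
    ASD-POCS alternates a projection step that pulls the image toward consistency with the measured data with steepest-descent steps that reduce the image total variation. The sketch below shows one schematic iteration for a generic linear system model `A`; the step sizes, the callables `A`/`AT`, and the TV gradient smoothing are placeholders, not the authors' breast-CT implementation.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of a smoothed isotropic total variation of a 2D image."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
    px, py = gx / norm, gy / norm
    div = (np.diff(px, axis=1, prepend=px[:, :1])
           + np.diff(py, axis=0, prepend=py[:1, :]))
    return -div

def asd_pocs_iteration(img, A, AT, data, beta=1.0, n_tv_steps=10, tv_step=0.2):
    """One schematic ASD-POCS iteration: data-consistency step, positivity,
    then several steepest-descent steps on total variation.

    A, AT : callables implementing the forward projector and its adjoint.
    """
    # POCS-style update toward agreement with the measured projection data
    data_update = beta * AT(A(img) - data)
    img = np.maximum(img - data_update, 0.0)   # enforce non-negativity

    # Steepest descent on TV, scaled relative to the size of the data update
    scale = tv_step * np.linalg.norm(data_update)
    for _ in range(n_tv_steps):
        g = tv_gradient(img)
        gnorm = np.linalg.norm(g)
        if gnorm > 0:
            img = img - scale * g / gnorm
    return img
```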

  8. Method for quantifying optical properties of the human lens

    DOEpatents

    Loree, deceased, Thomas R.; Bigio, Irving J.; Zuclich, Joseph A.; Shimada, Tsutomu; Strobl, Karlheinz

    1999-01-01

    Method for quantifying optical properties of the human lens. The present invention includes the application of fiberoptic, OMA-based instrumentation as an in vivo diagnostic tool for the human ocular lens. Rapid, noninvasive and comprehensive assessment of the optical characteristics of a lens using very modest levels of exciting light are described. Typically, the backscatter and fluorescence spectra (from about 300- to 900-nm) elicited by each of several exciting wavelengths (from about 300- to 600-nm) are collected within a few seconds. The resulting optical signature of individual lenses is then used to assess the overall optical quality of the lens by comparing the results with a database of similar measurements obtained from a reference set of normal human lenses having various ages. Several metrics have been identified which gauge the optical quality of a given lens relative to the norm for the subject's chronological age. These metrics may also serve to document accelerated optical aging and/or as early indicators of cataract or other disease processes.

  9. Method for quantifying optical properties of the human lens

    DOEpatents

    Loree, T.R.; Bigio, I.J.; Zuclich, J.A.; Shimada, Tsutomu; Strobl, K.

    1999-04-13

    A method is disclosed for quantifying optical properties of the human lens. The present invention includes the application of fiberoptic, OMA-based instrumentation as an in vivo diagnostic tool for the human ocular lens. Rapid, noninvasive and comprehensive assessment of the optical characteristics of a lens using very modest levels of exciting light are described. Typically, the backscatter and fluorescence spectra (from about 300- to 900-nm) elicited by each of several exciting wavelengths (from about 300- to 600-nm) are collected within a few seconds. The resulting optical signature of individual lenses is then used to assess the overall optical quality of the lens by comparing the results with a database of similar measurements obtained from a reference set of normal human lenses having various ages. Several metrics have been identified which gauge the optical quality of a given lens relative to the norm for the subject's chronological age. These metrics may also serve to document accelerated optical aging and/or as early indicators of cataract or other disease processes. 8 figs.

  10. Perspectives of Patients, Clinicians, and Health System Leaders on Changes Needed to Improve the Health Care and Outcomes of Older Adults With Multiple Chronic Conditions.

    PubMed

    Ferris, Rosie; Blaum, Caroline; Kiwak, Eliza; Austin, Janet; Esterson, Jessica; Harkless, Gene; Oftedahl, Gary; Parchman, Michael; Van Ness, Peter H; Tinetti, Mary E

    2018-06-01

    To ascertain perspectives of multiple stakeholders on contributors to inappropriate care for older adults with multiple chronic conditions. Perspectives of 36 purposively sampled patients, clinicians, health systems, and payers were elicited. Data analysis followed a constant comparative method. Structural factors triggering burden and fragmentation include disease-based quality metrics and need to interact with multiple clinicians. The key cultural barrier identified is the assumption that "physicians know best." Inappropriate decision making may result from inattention to trade-offs and adherence to multiple disease guidelines. Stakeholders recommended changes in culture, structure, and decision making. Care options and quality metrics should reflect a focus on patients' priorities. Clinician-patient partnerships should reflect patients knowing their health goals and clinicians knowing how to achieve them. Access to specialty expertise should not require visits. Stakeholders' recommendations suggest health care redesigns that incorporate patients' health priorities into care decisions and realign relationships across patients and clinicians.

  11. Characterization of Signal Quality Monitoring Techniques for Multipath Detection in GNSS Applications.

    PubMed

    Pirsiavash, Ali; Broumandan, Ali; Lachapelle, Gérard

    2017-07-05

    The performance of Signal Quality Monitoring (SQM) techniques under different multipath scenarios is analyzed. First, SQM variation profiles are investigated as critical requirements in evaluating the theoretical performance of SQM metrics. The sensitivity and effectiveness of SQM approaches for multipath detection and mitigation are then defined and analyzed by comparing SQM profiles and multipath error envelopes for different discriminators. The analytical discussion includes two discriminator strategies, namely narrow and high-resolution correlator techniques, for BPSK(1) and BOC(1,1) signaling schemes. Data analysis is also carried out for static and kinematic scenarios to validate the SQM profiles and examine SQM performance in actual multipath environments. Results show that although SQM is sensitive to medium and long-delay multipath, its effectiveness in mitigating these ranges of multipath errors varies based on tracking strategy and signaling scheme. For short-delay multipath scenarios, the multipath effect on pseudorange measurements remains mostly undetected due to the low sensitivity of SQM metrics.

  12. qDIET: toward an automated, self-sustaining knowledge base to facilitate linking point-of-sale grocery items to nutritional content.

    PubMed

    Chidambaram, Valliammai; Brewster, Philip J; Jordan, Kristine C; Hurdle, John F

    2013-01-01

    The United States, indeed the world, struggles with a serious obesity epidemic. The costs of this epidemic in terms of healthcare dollar expenditures and human morbidity/mortality are staggering. Surprisingly, clinicians are ill-equipped in general to advise patients on effective, longitudinal weight loss strategies. We argue that one factor hindering clinicians and patients in effective shared decision-making about weight loss is the absence of a metric that can be reasoned about and monitored over time, as clinicians do routinely with, say, serum lipid levels or HbA1c. We propose that a dietary quality measure championed by the USDA and NCI, the HEI-2005/2010, is an ideal metric for this purpose. We describe a new tool, the quality Dietary Information Extraction Tool (qDIET), which is a step toward an automated, self-sustaining process that can link retail grocery purchase data to the appropriate USDA databases to permit the calculation of the HEI-2005/2010.

  13. Multi-intelligence critical rating assessment of fusion techniques (MiCRAFT)

    NASA Astrophysics Data System (ADS)

    Blasch, Erik

    2015-06-01

    Assessment of multi-intelligence fusion techniques includes credibility of algorithm performance, quality of results against mission needs, and usability in a work-domain context. Situation awareness (SAW) brings together low-level information fusion (tracking and identification), high-level information fusion (threat and scenario-based assessment), and information fusion level 5 user refinement (physical, cognitive, and information tasks). To measure SAW, we discuss the SAGAT (Situational Awareness Global Assessment Technique) technique for a multi-intelligence fusion (MIF) system assessment that focuses on the advantages of MIF against single intelligence sources. Building on the NASA TLX (Task Load Index), SAGAT probes, SART (Situational Awareness Rating Technique) questionnaires, and CDM (Critical Decision Method) decision points; we highlight these tools for use in a Multi-Intelligence Critical Rating Assessment of Fusion Techniques (MiCRAFT). The focus is to measure user refinement of a situation over the information fusion quality of service (QoS) metrics: timeliness, accuracy, confidence, workload (cost), and attention (throughput). A key component of any user analysis includes correlation, association, and summarization of data; so we also seek measures of product quality and QuEST of information. Building a notion of product quality from multi-intelligence tools is typically subjective which needs to be aligned with objective machine metrics.

  14. Next-generation audit and feedback for inpatient quality improvement using electronic health record data: a cluster randomised controlled trial.

    PubMed

    Patel, Sajan; Rajkomar, Alvin; Harrison, James D; Prasad, Priya A; Valencia, Victoria; Ranji, Sumant R; Mourad, Michelle

    2018-03-05

    Audit and feedback improves clinical care by highlighting the gap between current and ideal practice. We combined best practices of audit and feedback with continuously generated electronic health record data to improve performance on quality metrics in an inpatient setting. We conducted a cluster randomised controlled trial comparing intensive audit and feedback with usual audit and feedback from February 2016 to June 2016. The study subjects were internal medicine teams on the teaching service at an urban tertiary care hospital. Teams in the intensive feedback arm received access to a daily-updated team-based data dashboard as well as weekly in-person review of performance data ('STAT rounds'). The usual feedback arm received ongoing twice-monthly emails with graphical depictions of team performance on selected quality metrics. The primary outcome was performance on a composite discharge metric (Discharge Mix Index, 'DMI'). A washout period occurred at the end of the trial (from May through June 2016) during which STAT rounds were removed from the intensive feedback arm. A total of 40 medicine teams participated in the trial. During the intervention period, the primary outcome of completion of the DMI was achieved on 79.3% (426/537) of patients in the intervention group compared with 63.2% (326/516) in the control group (P<0.0001). During the washout period, there was no significant difference in performance between the intensive and usual feedback groups. Intensive audit and feedback using timely data and STAT rounds significantly increased performance on a composite discharge metric compared with usual feedback. With the cessation of STAT rounds, performance between the intensive and usual feedback groups did not differ significantly, highlighting the importance of feedback delivery on effecting change. The trial was registered with ClinicalTrials.gov (NCT02593253). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  15. Development of a perceptually calibrated objective metric of noise

    NASA Astrophysics Data System (ADS)

    Keelan, Brian W.; Jin, Elaine W.; Prokushkin, Sergey

    2011-01-01

    A system simulation model was used to create scene-dependent noise masks that reflect current performance of mobile phone cameras. Stimuli with different overall magnitudes of noise and with varying mixtures of red, green, blue, and luminance noises were included in the study. Eleven treatments in each of ten pictorial scenes were evaluated by twenty observers using the softcopy ruler method. In addition to determining the quality loss function in just noticeable differences (JNDs) for the average observer and scene, transformations for different combinations of observer sensitivity and scene susceptibility were derived. The psychophysical results were used to optimize an objective metric of isotropic noise based on system noise power spectra (NPS), which were integrated over a visual frequency weighting function to yield perceptually relevant variances and covariances in CIE L*a*b* space. Because the frequency weighting function is expressed in terms of cycles per degree at the retina, it accounts for display pixel size and viewing distance effects, so application-specific predictions can be made. Excellent results were obtained using only L* and a* variances and L*a* covariance, with relative weights of 100, 5, and 12, respectively. The positive a* weight suggests that the luminance (photopic) weighting is slightly narrow on the long wavelength side for predicting perceived noisiness. The L*a* covariance term, which is normally negative, reflects masking between L* and a* noise, as confirmed in informal evaluations. Test targets in linear sRGB and rendered L*a*b* spaces for each treatment are available at http://www.aptina.com/ImArch/ to enable other researchers to test metrics of their own design and calibrate them to JNDs of quality loss without performing additional observer experiments. Such JND-calibrated noise metrics are particularly valuable for comparing the impact of noise and other attributes, and for computing overall image quality.
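
    After the visual-frequency weighting has been applied, the metric reduces to a weighted combination of variances and a covariance in CIE L*a*b*, with the reported relative weights of 100, 5, and 12. A sketch of that pooling step is shown below; the upstream NPS integration and frequency weighting are assumed to have produced the band-limited noise fields already. Note that the L*a* covariance is normally negative, so its positive weight reflects masking between L* and a* noise, as discussed in the abstract.

```python
import numpy as np

def objective_noise_metric(L_noise, a_noise, weights=(100.0, 5.0, 12.0)):
    """Combine L* and a* noise fields into a single objective noise value.

    L_noise, a_noise : 2D arrays of visually band-limited noise in L* and a*.
    weights : relative weights for var(L*), var(a*), and cov(L*, a*), taken
              from the values reported in the abstract (100, 5, 12).
    """
    var_L = np.var(L_noise)
    var_a = np.var(a_noise)
    cov_La = np.mean((L_noise - L_noise.mean()) * (a_noise - a_noise.mean()))
    wL, wa, wLa = weights
    return wL * var_L + wa * var_a + wLa * cov_La
```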

  16. Perceived crosstalk assessment on patterned retarder 3D display

    NASA Astrophysics Data System (ADS)

    Zou, Bochao; Liu, Yue; Huang, Yi; Wang, Yongtian

    2014-03-01

    CONTEXT: Nowadays, almost all stereoscopic displays suffer from crosstalk, which is one of the most dominant degradation factors of image quality and visual comfort for 3D display devices. To deal with such problems, it is worthwhile to quantify the amount of perceived crosstalk. OBJECTIVE: Crosstalk measurements are usually based on certain test patterns, but scene content effects are ignored. To evaluate the perceived crosstalk level for various scenes, a subjective test may provide a more correct evaluation. However, it is a time-consuming approach and is unsuitable for real-time applications. Therefore, an objective metric that can reliably predict the perceived crosstalk is needed. A correct objective assessment of crosstalk for different scene contents would be beneficial to the development of crosstalk minimization and cancellation algorithms which could be used to bring a good quality of experience to viewers. METHOD: A patterned retarder 3D display is used to present 3D images in our experiment. By considering the mechanism of this kind of device, an appropriate simulation of crosstalk is realized by image processing techniques to assign different levels of crosstalk between image pairs. It can be seen from the literature that the structures of scenes have a significant impact on the perceived crosstalk, so we first extract the differences of the structural information between original and distorted image pairs through the Structural SIMilarity (SSIM) algorithm, which can directly evaluate the structural changes between two complex-structured signals. Then the structural changes of the left view and right view are computed respectively and combined into an overall distortion map. Under 3D viewing conditions, because of the added value of depth, the crosstalk of pop-out objects may be more perceptible. To model this effect, the depth map of a stereo pair is generated and the depth information is filtered by the distortion map. Moreover, human attention is one of the important factors for crosstalk assessment, because when viewing 3D content, perceptually salient regions are highly likely to be a major contributor to the quality of experience. To take this into account, perceptually significant regions are extracted, and a spatial pooling technique is used to combine the structural distortion map, depth map, and visual salience map together to predict the perceived crosstalk more precisely. To verify the performance of the proposed crosstalk assessment metric, subjective experiments are conducted with 24 participants viewing and rating 60 stimuli (5 scenes × 4 crosstalk levels × 3 camera distances). After outlier removal and statistical processing, the correlation with the subjective test is examined using the Pearson and Spearman rank-order correlation coefficients. Furthermore, the proposed method is also compared with two traditional 2D metrics, PSNR and SSIM. The objective score is mapped to the subjective scale using a nonlinear fitting function to directly evaluate the performance of the metric. RESULTS: After the above-mentioned processes, the evaluation results demonstrate that the proposed metric is highly correlated with the subjective score when compared with the existing approaches. Because the Pearson coefficient of the proposed metric is 90.3%, it is promising for objective evaluation of the perceived crosstalk. NOVELTY: The main goal of our paper is to introduce an objective metric for stereo crosstalk assessment. The novelty contributions are twofold. First, an appropriate simulation of crosstalk by considering the characteristics of patterned retarder 3D displays is developed. Second, an objective crosstalk metric based on a visual attention model is introduced.
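
    The metric combines an SSIM-based structural-distortion map with depth and visual-saliency maps through spatial pooling. A schematic of that pooling, assuming the three maps have already been computed and normalized to [0, 1], is shown below; the specific weighting is illustrative rather than the authors' calibrated formulation.

```python
import numpy as np

def perceived_crosstalk_score(ssim_left, ssim_right, depth_map, saliency_map):
    """Pool structural distortion, depth, and saliency into a crosstalk score.

    ssim_left, ssim_right : per-pixel SSIM maps between the original and
        crosstalk-distorted views (1 = no distortion)
    depth_map    : normalized disparity/depth, larger for pop-out content
    saliency_map : normalized visual-attention map
    """
    # Structural change caused by crosstalk, averaged over the two views
    distortion = 1.0 - 0.5 * (ssim_left + ssim_right)

    # Emphasize distortions on pop-out (near) objects and salient regions
    weighted = distortion * (1.0 + depth_map) * saliency_map

    # Saliency-weighted spatial pooling
    return float(weighted.sum() / (saliency_map.sum() + 1e-12))
```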

  17. WE-G-204-09: Medical Physics 2.0 in Practice: Automated QC Assessment of Clinical Chest Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willis, C; Willis, C; Nishino, T

    2015-06-15

    Purpose: To determine whether a proposed suite of objective image quality metrics for digital chest radiographs is useful for monitoring image quality in our clinical operation. Methods: Seventeen gridless AP Chest radiographs from a GE Optima portable digital radiography (DR) unit (Group 1), seventeen (routine) PA Chest radiographs from a GE Discovery DR unit (Group 2), and sixteen gridless (non-routine) PA Chest radiographs from the same Discovery DR unit (Group 3) were chosen for analysis. Groups were selected to represent “sub-standard” (Group 1), “standard-of-care” (Group 2), and images with a gross technical error (Group 3). Group 1 images were acquired with lower kVp (90 vs. 125), shorter source-to-image distance (127cm vs 183cm) and were expected to have lower quality than images in Group 2. Group 3 was expected to have degraded contrast versus Group 2. This evaluation was approved by the institutional Quality Improvement Assurance Board (QIAB). Images were anonymized and securely transferred to the Duke University Clinical Imaging Physics Group for analysis using software previously described [1] and validated [2]. Image quality for individual images was reported in terms of lung grey level (Lgl); lung noise (Ln); rib-lung contrast (RLc); rib sharpness (Rs); mediastinum detail (Md), noise (Mn), and alignment (Ma); subdiaphragm-lung contrast (SLc); and subdiaphragm area (Sa). Metrics were compared across groups. Results: Metrics agreed with published Quality Consistency Ranges with three exceptions: higher Lgl, and lower RLc and SLc. Higher bit depth (16 vs 12) accounted for higher Lgl values in our images. Values were most internally consistent for Group 2. The most sensitive metric for distinguishing between groups was Mn, followed closely by Ln. The least sensitive metrics were Md and RLc. Conclusion: The software appears promising for objectively and automatically identifying substandard images in our operation. The results can be used to establish local quality consistency ranges and action limits per facility preferences.

  18. QualityML: a dictionary for quality metadata encoding

    NASA Astrophysics Data System (ADS)

    Ninyerola, Miquel; Sevillano, Eva; Serral, Ivette; Pons, Xavier; Zabala, Alaitz; Bastin, Lucy; Masó, Joan

    2014-05-01

    The scenario of rapidly growing geodata catalogues requires tools that help users choose products. Having quality fields populated in metadata allows users to rank and then select the best fit-for-purpose products. In this direction, we have developed the QualityML (http://qualityml.geoviqua.org), a dictionary that contains hierarchically structured concepts to precisely define and relate quality levels: from quality classes to quality measurements. Generically, a quality element is the path that goes from the highest level (quality class) to the lowest levels (statistics or quality metrics). This path is used to encode quality of datasets in the corresponding metadata schemas. The benefits of having encoded quality, in the case of data producers, are related to improvements in product discovery and better communication of product characteristics. In the case of data users, particularly decision-makers, they would find quality and uncertainty measures to make better decisions as well as to perform dataset intercomparison. It also allows other components (such as visualization, discovery, or comparison tools) to be quality-aware and interoperable. On one hand, the QualityML is a profile of the ISO geospatial metadata standards providing a set of rules for precisely documenting quality indicator parameters, structured in 6 levels. On the other hand, QualityML includes semantics and vocabularies for the quality concepts. Whenever possible, it uses statistical expressions from the UncertML dictionary (http://www.uncertml.org) encoding. However, it also extends UncertML to provide a list of alternative metrics that are commonly used to quantify quality. A specific example, based on a temperature dataset, is shown below. The annual mean temperature map has been validated with independent in-situ measurements to obtain a global error of 0.5 °C. Level 0: Quality class (e.g., Thematic accuracy) Level 1: Quality indicator (e.g., Quantitative attribute correctness) Level 2: Measurement field (e.g., DifferentialErrors1D) Level 3: Statistic or Metric (e.g., Half-lengthConfidenceInterval) Level 4: Units (e.g., Celsius degrees) Level 5: Value (e.g., 0.5) Level 6: Specifications. Additional information on how the measurement took place, citation of the reference data, the traceability of the process and a publication describing the validation process, encoded using new ISO 19157 elements or the GeoViQua (http://www.geoviqua.org) Quality Model (PQM-UQM) extensions to the ISO models. Finally, keep in mind that QualityML is not just suitable for encoding quality at the dataset level but also considers pixel- and object-level uncertainties. This is done by linking the metadata quality descriptions with layers representing not just the data but also the uncertainty values associated with each geospatial element.
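
    The temperature example walks the full hierarchy from quality class down to value and specifications. Represented as a simple data structure it reads as follows; this is a plain Python rendering for illustration only, not the XML/ISO metadata encoding that QualityML actually prescribes.

```python
# Illustrative rendering of the QualityML hierarchy for the temperature example;
# the real encoding is an ISO 19157 / GeoViQua metadata document, not Python.
quality_element = {
    "quality_class": "Thematic accuracy",                        # Level 0
    "quality_indicator": "Quantitative attribute correctness",   # Level 1
    "measurement_field": "DifferentialErrors1D",                 # Level 2
    "statistic_or_metric": "Half-lengthConfidenceInterval",      # Level 3
    "units": "Celsius degrees",                                   # Level 4
    "value": 0.5,                                                 # Level 5
    "specifications": {                                           # Level 6
        "method": "validation against independent in-situ measurements",
        "reference": "citation of the reference data and validation publication",
    },
}
```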

  19. What Does It Mean to Be Ranked a "High" or "Low" Value-Added Teacher? Observing Differences in Instructional Quality across Districts

    ERIC Educational Resources Information Center

    Blazar, David; Litke, Erica; Barmore, Johanna

    2016-01-01

    Education agencies are evaluating teachers using student achievement data. However, very little is known about the comparability of test-based or "value-added" metrics across districts and the extent to which they capture variability in classroom practices. Drawing on data from four urban districts, we found that teachers were…

  20. HOPE: An On-Line Piloted Handling Qualities Experiment Data Book

    NASA Technical Reports Server (NTRS)

    Jackson, E. B.; Proffitt, Melissa S.

    2010-01-01

    A novel on-line database for capturing most of the information obtained during piloted handling qualities experiments (either flight or simulated) is described. The Hyperlinked Overview of Piloted Evaluations (HOPE) web application is based on an open-source object-oriented Web-based front end (Ruby-on-Rails) that can be used with a variety of back-end relational database engines. The hyperlinked, on-line data book approach allows an easily-traversed way of looking at a variety of collected data, including pilot ratings, pilot information, vehicle and configuration characteristics, test maneuvers, and individual flight test cards and repeat runs. It allows for on-line retrieval of pilot comments, both audio and transcribed, as well as time history data retrieval and video playback. Pilot questionnaires are recorded as are pilot biographies. Simple statistics are calculated for each selected group of pilot ratings, allowing multiple ways to aggregate the data set (by pilot, by task, or by vehicle configuration, for example). Any number of per-run or per-task metrics can be captured in the database. The entire run metrics dataset can be downloaded in comma-separated text for further analysis off-line. It is expected that this tool will be made available upon request

  1. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
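
    The paper's metrics quantify the information lost when fine-scale properties are aggregated into subbasins or HRUs, before any model is run. As a heavily simplified illustration of that idea (explicitly not the authors' subbasin or HRU error formulas), one can measure how far cell-level attribute values depart from their HRU-aggregated value:

```python
import numpy as np

def aggregation_information_loss(cell_values, cell_areas, hru_labels):
    """Area-weighted mean absolute deviation of a cell attribute from its
    HRU-aggregated (area-weighted mean) value.

    This is an illustrative proxy for discretization-induced information
    loss, not the error metrics defined in the paper.
    """
    cell_values = np.asarray(cell_values, dtype=float)
    cell_areas = np.asarray(cell_areas, dtype=float)
    hru_labels = np.asarray(hru_labels)

    loss = 0.0
    for hru in np.unique(hru_labels):
        idx = hru_labels == hru
        hru_mean = np.average(cell_values[idx], weights=cell_areas[idx])
        loss += np.sum(cell_areas[idx] * np.abs(cell_values[idx] - hru_mean))
    return loss / cell_areas.sum()
```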

  2. Hounsfield Unit inaccuracy in computed tomography lesion size and density, diagnostic quality vs attenuation correction

    NASA Astrophysics Data System (ADS)

    Szczepura, Katy; Thompson, John; Manning, David

    2017-03-01

    In computed tomography the Hounsfield Units (HU) are used as an indicator of the tissue type based on the linear attenuation coefficients of the tissue. HU accuracy is essential when this metric is used in any form to support diagnosis. In hybrid imaging, such as SPECT/CT and PET/CT, the information is used for attenuation correction (AC) of the emission images. This work investigates the HU accuracy of nodules of known size and HU, comparing diagnostic quality (DQ) images with images used for AC.
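
    The Hounsfield scale is defined from linear attenuation coefficients relative to water, which is why reconstruction or correction errors propagate directly into the reported HU. A one-line implementation of the standard definition:

```python
def hounsfield_units(mu, mu_water, mu_air=0.0):
    """CT number in HU from linear attenuation coefficients.

    HU = 1000 * (mu - mu_water) / (mu_water - mu_air); with mu_air ~ 0 this
    reduces to the familiar 1000 * (mu - mu_water) / mu_water.
    """
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

# Example: water maps to 0 HU, air (mu = 0) to -1000 HU
assert hounsfield_units(0.19, 0.19) == 0.0
assert hounsfield_units(0.0, 0.19) == -1000.0
```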

  3. Sigma Metrics Across the Total Testing Process.

    PubMed

    Charuruks, Navapun

    2017-03-01

    Laboratory quality control has been developed over several decades to ensure patients' safety, moving from a statistical quality control focus on the analytical phase to total laboratory processes. The sigma concept provides a convenient way to quantify the number of errors in extra-analytical and analytical phases through the defects-per-million measure and the sigma metric equation. Participation in a sigma verification program can be a convenient way to monitor analytical performance for continuous quality improvement. Improvement of sigma-scale performance has been shown in our data. New tools and techniques for integration are needed. Copyright © 2016 Elsevier Inc. All rights reserved.
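
    The sigma metric referred to here is conventionally computed from the allowable total error, the observed bias, and the imprecision of the assay. A minimal sketch follows; the example numbers are placeholders, and the acceptance thresholds would come from a laboratory's own quality requirements rather than from this abstract.

```python
def sigma_metric(tea_percent, bias_percent, cv_percent):
    """Sigma metric for an analytical method on the percent scale.

    sigma = (TEa - |bias|) / CV, where TEa is the allowable total error,
    bias is the systematic error, and CV is the imprecision.
    """
    return (tea_percent - abs(bias_percent)) / cv_percent

def defects_per_million(observed_defects, opportunities):
    """Defects per million opportunities (DPMO) used on the sigma scale."""
    return 1_000_000 * observed_defects / opportunities

# Example: TEa 10%, bias 1.5%, CV 2% -> sigma of about 4.25
print(sigma_metric(10.0, 1.5, 2.0))
```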

  4. Automating Software Design Metrics.

    DTIC Science & Technology

    1984-02-01

    High quality software is of interest to both the software engineering community and its users. ... contributions of many other software engineering efforts, most notably [MCC 77] and [Boe 83b], which have defined and refined a framework for quantifying ... Software metrics can be useful within the context of an integrated software engineering environment. The purpose of this ...

  5. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.

    PubMed

    Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit

    2017-07-01

    Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.

  6. Mineral resources of the Cranberry Wilderness Study Area, Webster and Pocahontas Counties, West Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meissner, C.R. Jr.; Windolph, J.F. Jr.; Mory, P.C.

    1981-01-01

    The Cranberry Wilderness Study Area comprises 14,702 ha in the Monongahela National Forest, Webster and Pocahontas Counties, east-central West Virginia. The area is in the Yew Mountains of the Appalachian Plateaus and is at the eastern edge of the central Appalachian coal fields. Cranberry Glades, a peatland of botanical interest, lies at the southern end of the study area. All surface rights in the area are held by the US Forest Service; nearly 90% of the mineral rights are privately owned or subordinate to the surface rights. Bituminous coal of coking quality is the most economically important mineral resource in the Cranberry Wilderness Study Area. Estimated resources in beds 35 cm thick or more are about 100 million metric tons in nine coal beds. Most measured-indicated coal, 70 cm thick or more (reserve base), is in a 7-km-wide east-west trending belt extending across the center of the study area. The estimated reserve base is 34,179 thousand metric tons. Estimated reserves in seven of the coal beds total 16,830 thousand metric tons and are recoverable by underground mining methods. Other mineral resources, all of which have a low potential for development in the study area, include peat, shale, and clay suitable for building brick and lightweight aggregate, sandstone for low-quality glass sand, and sandstone suitable for construction material. Evidence derived from drilling indicates little possibility for oil and gas in the study area. No evidence of economic metallic deposits was found during this investigation.

  7. Image and Video Quality Assessment Using LCD: Comparisons with CRT Conditions

    NASA Astrophysics Data System (ADS)

    Tourancheau, Sylvain; Callet, Patrick Le; Barba, Dominique

    In this paper, the impact of display on quality assessment is addressed. Subjective quality assessment experiments have been performed on both LCD and CRT displays. Two sets of still images and two sets of moving pictures have been assessed using either an ACR or a SAMVIQ protocol. Altogether, eight experiments have been conducted. Results are presented and discussed, and some differences are pointed out. Concerning moving pictures, these differences seem to be mainly due to LCD motion artefacts such as motion blur. LCD motion blur has been measured objectively and with psychophysical experiments. A motion-blur metric based on the temporal characteristics of the LCD can be defined. A prediction model has then been designed which predicts the differences in perceived quality between CRT and LCD. This motion-blur-based model enables the estimation of perceived quality on LCD with respect to the perceived quality on CRT. Technical solutions to LCD motion blur can thus be evaluated on natural contents by this means.

  8. Simulation of devices mobility to estimate wireless channel quality metrics in 5G networks

    NASA Astrophysics Data System (ADS)

    Orlov, Yu.; Fedorov, S.; Samuylov, A.; Gaidamaka, Yu.; Molchanov, D.

    2017-07-01

    The problem of channel quality estimation for devices in a wireless 5G network is formulated. As a performance metrics of interest we choose the signal-to-interference-plus-noise ratio, which depends essentially on the distance between the communicating devices. A model with a plurality of moving devices in a bounded three-dimensional space and a simulation algorithm to determine the distances between the devices for a given motion model are devised.
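
    Because SINR is driven largely by the distances between the tagged link and the interfering devices, a simulated mobility trace can be converted into a channel-quality estimate with a simple path-loss model. The sketch below uses a generic log-distance model; the exponent, transmit powers, and noise level are placeholder assumptions, not the paper's parameterization.

```python
import numpy as np

def received_power_mw(tx_power_mw, distance_m, path_loss_exponent=3.5, d0=1.0):
    """Log-distance path-loss model with a reference distance d0 (meters)."""
    d = max(distance_m, d0)
    return tx_power_mw * (d0 / d) ** path_loss_exponent

def sinr_db(tx_rx_distance_m, interferer_distances_m,
            tx_power_mw=100.0, noise_mw=1e-9):
    """SINR of a link given the distances to all interfering transmitters."""
    signal = received_power_mw(tx_power_mw, tx_rx_distance_m)
    interference = sum(received_power_mw(tx_power_mw, d)
                       for d in interferer_distances_m)
    return 10.0 * np.log10(signal / (interference + noise_mw))

# Example: a 10 m link with interferers at 50 m and 80 m
print(sinr_db(10.0, [50.0, 80.0]))
```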

  9. Natural Language Processing As an Alternative to Manual Reporting of Colonoscopy Quality Metrics

    PubMed Central

    RAJU, GOTTUMUKKALA S.; LUM, PHILLIP J.; SLACK, REBECCA; THIRUMURTHI, SELVI; LYNCH, PATRICK M.; MILLER, ETHAN; WESTON, BRIAN R.; DAVILA, MARTA L.; BHUTANI, MANOOP S.; SHAFI, MEHNAZ A.; BRESALIER, ROBERT S.; DEKOVICH, ALEXANDER A.; LEE, JEFFREY H.; GUHA, SUSHOVAN; PANDE, MALA; BLECHACZ, BORIS; RASHID, ASIF; ROUTBORT, MARK; SHUTTLESWORTH, GLADIS; MISHRA, LOPA; STROEHLEIN, JOHN R.; ROSS, WILLIAM A.

    2015-01-01

    BACKGROUND & AIMS The adenoma detection rate (ADR) is a quality metric tied to interval colon cancer occurrence. However, manual extraction of data to calculate and track the ADR in clinical practice is labor-intensive. To overcome this difficulty, we developed a natural language processing (NLP) method to identify patients who underwent their first screening colonoscopy and to identify adenomas and sessile serrated adenomas (SSAs). We compared the NLP-generated results with those of manual data extraction to test the accuracy of NLP, and report on colonoscopy quality metrics using NLP. METHODS Identification of screening colonoscopies using NLP was compared with that using the manual method for 12,748 patients who underwent colonoscopies from July 2010 to February 2013. Also, identification of adenomas and SSAs using NLP was compared with that using the manual method for 2259 matched patient records. Colonoscopy ADRs using these methods were generated for each physician. RESULTS NLP correctly identified 91.3% of the screening examinations, whereas the manual method identified 87.8% of them. Both the manual method and NLP identified examinations of patients with adenomas and SSAs in the matched records almost perfectly. Both NLP and the manual method produced comparable ADR values for each endoscopist as well as for the group as a whole. CONCLUSIONS NLP can correctly identify screening colonoscopies, accurately identify adenomas and SSAs in a pathology database, and provide real-time quality metrics for colonoscopy. PMID:25910665
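
    The ADR itself is a simple proportion (screening colonoscopies in which at least one adenoma was detected, divided by all screening colonoscopies); a minimal per-endoscopist calculation over hypothetical record tuples, with field names invented for the example, might look like this.

```python
from collections import defaultdict

def adenoma_detection_rates(records):
    """Compute the ADR per endoscopist.

    records : iterable of (endoscopist_id, is_screening_exam, adenoma_found) tuples;
              the fields are hypothetical and not taken from the study's database.
    Returns {endoscopist_id: ADR} computed over screening examinations only.
    """
    screened = defaultdict(int)
    with_adenoma = defaultdict(int)
    for endoscopist, is_screening, adenoma_found in records:
        if not is_screening:
            continue
        screened[endoscopist] += 1
        if adenoma_found:
            with_adenoma[endoscopist] += 1
    return {e: with_adenoma[e] / n for e, n in screened.items() if n > 0}

# Illustrative use with three made-up examinations.
print(adenoma_detection_rates([("dr_a", True, True), ("dr_a", True, False), ("dr_b", False, True)]))
```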

  10. An investigation of the impact of variations of DVH calculation algorithms on DVH dependant radiation therapy plan evaluation metrics

    NASA Astrophysics Data System (ADS)

    Kennedy, A. M.; Lane, J.; Ebert, M. A.

    2014-03-01

    Plan review systems often allow dose volume histogram (DVH) recalculation as part of a quality assurance process for trials. A review of the algorithms provided by a number of systems indicated that they are often very similar. One notable point of variation between implementations is in the location and frequency of dose sampling. This study explored the impact such variations can have on DVH-based plan evaluation metrics (Normal Tissue Complication Probability (NTCP); minimum, mean, and maximum dose) for a plan with small structures placed over areas of high dose gradient. The dose grids considered were exported from the original planning system at a range of resolutions. We found that for the CT-based resolutions used in all but one plan review system (CT, and CT with a guaranteed minimum number of sampling voxels in the x and y directions), results were very similar and changed in a similar manner with changes in the dose grid resolution despite the extreme conditions. Differences became noticeable, however, when resolution was increased in the axial (z) direction. Evaluation metrics also varied differently with changing dose grid for CT-based resolutions compared with dose-grid-based resolutions. This suggests that if DVHs are being compared between systems that use a different basis for selecting sampling resolution, it may become important to confirm that a similar resolution was used during calculation.
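
    To illustrate why sampling choices matter, the sketch below builds a cumulative DVH from a set of dose samples and reports minimum, mean, and maximum dose; the dose values, bin width, and sample counts are invented for the example and are not taken from the study.

```python
import numpy as np

def cumulative_dvh(doses_in_structure, bin_width_gy=0.1):
    """Cumulative DVH from dose samples falling inside a structure.

    doses_in_structure : 1-D array of dose values (Gy) at the sampled points;
                         how these points are chosen is exactly the sampling
                         question the study investigates.
    Returns (dose_bin_edges, volume_fraction_receiving_at_least_that_dose).
    """
    doses = np.asarray(doses_in_structure, dtype=float)
    edges = np.arange(0.0, doses.max() + bin_width_gy, bin_width_gy)
    frac = np.array([(doses >= d).mean() for d in edges])
    return edges, frac

# Illustrative use: coarse vs. dense sampling of the same (made-up) dose field.
rng = np.random.default_rng(0)
for label, n_samples in (("coarse", 50), ("fine", 5000)):
    doses = rng.normal(50.0, 5.0, size=n_samples)   # stand-in samples from a high-gradient region
    edges, frac = cumulative_dvh(doses)
    v50 = frac[np.searchsorted(edges, 50.0)]        # volume fraction receiving at least 50 Gy
    print(label, round(doses.min(), 2), round(doses.mean(), 2), round(doses.max(), 2), round(float(v50), 3))
```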

  11. A comparison of evaluation metrics for biomedical journals, articles, and websites in terms of sensitivity to topic.

    PubMed

    Fu, Lawrence D; Aphinyanaphongs, Yindalon; Wang, Lily; Aliferis, Constantin F

    2011-08-01

    Evaluating the biomedical literature and health-related websites for quality is a challenging information retrieval task. Current commonly used methods include the impact factor for journals, PubMed's clinical query filters and machine learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering the topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods for a variety of topics. Impact factor, clinical query filters, and PageRank vary widely across different topics, while a topic-specific impact factor and machine learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average but struggle when used on a number of narrower topics. Topic-adjusted metrics and other topic-robust methods have an advantage in such situations. Users of traditional topic-sensitive metrics should be aware of their limitations. Copyright © 2011 Elsevier Inc. All rights reserved.
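
    As a rough sketch of what a topic-adjusted metric can look like, the snippet below computes a topic-specific impact-factor analogue (citations in one year to a journal's topic articles from the two preceding years, divided by the number of those articles); the record fields and the exact definition are illustrative assumptions, not the paper's formulation.

```python
from collections import defaultdict

def topic_specific_impact_factor(articles, topic, year):
    """Topic-specific impact factor per journal (illustrative definition).

    articles : iterable of dicts with hypothetical keys 'journal', 'topic',
               'year', and 'citations_in_year' (citations received during `year`).
    Mirrors the usual impact-factor ratio, restricted to one topic.
    """
    cites = defaultdict(int)
    counts = defaultdict(int)
    for a in articles:
        if a["topic"] == topic and a["year"] in (year - 1, year - 2):
            cites[a["journal"]] += a["citations_in_year"]
            counts[a["journal"]] += 1
    return {j: cites[j] / counts[j] for j in counts}

# Illustrative use with two made-up articles from one journal.
print(topic_specific_impact_factor(
    [{"journal": "J1", "topic": "oncology", "year": 2009, "citations_in_year": 12},
     {"journal": "J1", "topic": "oncology", "year": 2010, "citations_in_year": 4}],
    topic="oncology", year=2011))
```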

  12. Methodology, Methods, and Metrics for Testing and Evaluating Augmented Cognition Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greitzer, Frank L.

    The augmented cognition research community seeks cognitive neuroscience-based solutions to improve warfighter performance by applying and managing mitigation strategies to reduce workload and improve the throughput and quality of decisions. The focus of augmented cognition mitigation research is to define, demonstrate, and exploit neuroscience and behavioral measures that support inferences about the warfighter's cognitive state that prescribe the nature and timing of mitigation. A research challenge is to develop valid evaluation methodologies, metrics and measures to assess the impact of augmented cognition mitigations. Two considerations are external validity, which is the extent to which the results apply to operational contexts; and internal validity, which reflects the reliability of performance measures and the conclusions based on analysis of results. The scientific rigor of the research methodology employed in conducting empirical investigations largely affects the validity of the findings. External validity requirements also compel us to demonstrate operational significance of mitigations. Thus it is important to demonstrate effectiveness of mitigations under specific conditions. This chapter reviews some cognitive science and methodological considerations in designing augmented cognition research studies and associated human performance metrics and analysis methods to assess the impact of augmented cognition mitigations.

  13. Metrics for comparing climate impacts of short- and long-lived climate forcing agents

    NASA Astrophysics Data System (ADS)

    Fuglestvedt, J.; Berntsen, T.

    2013-12-01

    Human activities emit a wide variety of gases and aerosols, with different characteristics that influence both air quality and climate. The emissions affect climate both directly and indirectly and operate on both short and long timescales. Tools that allow these emissions to be placed on a common scale in terms of climate impact, i.e. metrics, have a number of applications (e.g. agreements and emission trading schemes, when considering potential trade-offs between changes in emissions). The Kyoto Protocol compares greenhouse gas (GHG) emissions using the Global Warming Potential (GWP) over a 100-year time horizon. The IPCC First Assessment Report states that the GWP was presented to illustrate the difficulties in comparing GHGs. There have been many critiques of the GWP and several alternative emission metrics have been proposed, but there has been little focus on understanding the linkages between, and interpretations of, different emission metrics. Furthermore, the capability to compare components with very different lifetimes and temporal behaviour needs consideration. The temperature-based metrics (e.g. the Global Temperature change Potential (GTP)) require a model for the temperature response, and additional uncertainty is thus introduced. Short-lived forcers may also give more spatially heterogeneous responses, and the possibility of capturing these spatial variations by using indicators other than global mean RF or temperature change in metrics will be discussed. The ultimate choice of emission metric(s) and time horizon(s) should, however, depend on the objectives of climate policy. Alternatives to the current 'multi-gas and single-basket' approach will also be explored and discussed (e.g. how a two-target approach may be implemented using a two-basket approach). One example is measures to reduce the near-term rate of warming and to achieve long-term stabilization, which can be implemented through two separate targets and two baskets with a separate set of metrics for each target, while still keeping all components in both baskets.
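
    For reference, the GWP and GTP compared in this line of work are conventionally defined as ratios of a pulse emission's effect to that of CO2; the sketch below gives the standard textbook forms rather than formulas taken from this abstract.

```latex
% GWP of species x over time horizon H, relative to CO2 (AGWP = absolute,
% time-integrated radiative forcing of a 1 kg pulse emission); GTP is the
% analogous ratio of global mean temperature responses evaluated at time H.
\mathrm{GWP}_x(H) = \frac{\mathrm{AGWP}_x(H)}{\mathrm{AGWP}_{\mathrm{CO_2}}(H)}
                  = \frac{\int_0^{H} \mathrm{RF}_x(t)\,\mathrm{d}t}
                         {\int_0^{H} \mathrm{RF}_{\mathrm{CO_2}}(t)\,\mathrm{d}t},
\qquad
\mathrm{GTP}_x(H) = \frac{\Delta T_x(H)}{\Delta T_{\mathrm{CO_2}}(H)}
```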

  14. Improving Climate Projections Using "Intelligent" Ensembles

    NASA Technical Reports Server (NTRS)

    Baker, Noel C.; Taylor, Patrick C.

    2015-01-01

    Recent changes in the climate system have led to growing concern, especially in communities which are highly vulnerable to resource shortages and weather extremes. There is an urgent need for better climate information to develop solutions and strategies for adapting to a changing climate. Climate models provide excellent tools for studying the current state of climate and making future projections. However, these models are subject to biases created by structural uncertainties. Performance metrics, the systematic determination of model biases, succinctly quantify aspects of climate model behavior. Efforts to standardize climate model experiments and collect simulation data, such as the Coupled Model Intercomparison Project (CMIP), provide the means to directly compare and assess model performance. Performance metrics have been used to show that some models reproduce present-day climate better than others. Simulation data from multiple models are often used to add value to projections by creating a consensus projection from the model ensemble, in which each model is given an equal weight. It has been shown that the ensemble mean generally outperforms any single model. It is possible to use unequal weights to produce ensemble means, in which models are weighted based on performance (called "intelligent" ensembles). Can performance metrics be used to improve climate projections? Previous work introduced a framework for comparing the utility of model performance metrics, showing that the best metrics are related to the variance of top-of-atmosphere outgoing longwave radiation. These metrics improve present-day climate simulations of Earth's energy budget using the "intelligent" ensemble method. The current project identifies several approaches for testing whether performance metrics can be applied to future simulations to create "intelligent" ensemble-mean climate projections. It is shown that certain performance metrics test key climate processes in the models, and that these metrics can be used to evaluate model quality in both current and future climate states. This information will be used to produce new consensus projections and provide communities with improved climate projections for urgent decision-making.
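
    A minimal sketch of a performance-weighted ("intelligent") ensemble mean is shown below; the inverse-error weighting is one common choice and is an assumption here, not necessarily the weighting scheme used in this project.

```python
import numpy as np

def weighted_ensemble_mean(projections, errors):
    """Performance-weighted ensemble mean of model projections.

    projections : (n_models, ...) array of projected fields or scalars
    errors      : (n_models,) array of each model's present-day metric error
                  (lower is better); the inverse-error weighting below is one
                  common choice, not necessarily the study's scheme.
    """
    errors = np.asarray(errors, dtype=float)
    weights = 1.0 / errors
    weights /= weights.sum()
    return np.tensordot(weights, np.asarray(projections, dtype=float), axes=1)

# Illustrative use: three models projecting one scalar (e.g. warming in K),
# with made-up present-day performance errors.
proj = np.array([2.9, 3.4, 4.1])
err = np.array([0.8, 0.5, 1.6])
print(weighted_ensemble_mean(proj, err), proj.mean())   # weighted vs. equal-weight mean
```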

  15. Automation Improves Schedule Quality and Increases Scheduling Efficiency for Residents.

    PubMed

    Perelstein, Elizabeth; Rose, Ariella; Hong, Young-Chae; Cohn, Amy; Long, Micah T

    2016-02-01

    Medical resident scheduling is difficult due to multiple rules, competing educational goals, and ever-evolving graduate medical education requirements. Despite this, schedules are typically created manually, consuming hours of work, producing schedules of varying quality, and yielding negative consequences for resident morale and learning. To determine whether computerized decision support can improve the construction of residency schedules, saving time and improving schedule quality. The Optimized Residency Scheduling Assistant was designed by a team from the University of Michigan Department of Industrial and Operations Engineering. It was implemented in the C.S. Mott Children's Hospital Pediatric Emergency Department in the 2012-2013 academic year. The 4 metrics of schedule quality that were compared between the 2010-2011 and 2012-2013 academic years were the incidence of challenging shift transitions, the incidence of shifts following continuity clinics, the total shift inequity, and the night shift inequity. All scheduling rules were successfully incorporated. Average schedule creation time fell from 22-28 hours per month to 4-6 hours per month, and 3 of the 4 metrics of schedule quality significantly improved. For the implementation year, the incidence of challenging shift transitions decreased from 83 to 14 (P < .01); the incidence of postclinic shifts decreased from 72 to 32 (P < .01); and the SD of night shifts dropped by 55.6% (P < .01). This automated shift scheduling system improves on the current manual scheduling process, reducing time spent and improving schedule quality. Embracing such automated tools can benefit residency programs with shift-based scheduling needs.
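
    As one of the four quality metrics, night shift inequity is reported as the SD of night shifts; a minimal calculation of that quantity from a hypothetical schedule structure, with shift labels invented for the example, might look like this.

```python
import numpy as np

def night_shift_inequity(schedule):
    """Night-shift inequity as the standard deviation of night shifts per resident.

    schedule : dict mapping resident -> list of shift labels; the labels and the
               SD-based definition are assumptions made for illustration, since
               the abstract reports the metric only as the "SD of night shifts".
    """
    counts = [sum(1 for s in shifts if s == "night") for shifts in schedule.values()]
    return float(np.std(counts))

# Illustrative use with a made-up three-resident schedule.
print(night_shift_inequity({"r1": ["night", "day", "night"], "r2": ["day"], "r3": ["night"]}))
```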

  16. Adding A Spending Metric To Medicare's Value-Based Purchasing Program Rewarded Low-Quality Hospitals.

    PubMed

    Das, Anup; Norton, Edward C; Miller, David C; Ryan, Andrew M; Birkmeyer, John D; Chen, Lena M

    2016-05-01

    In fiscal year 2015 the Centers for Medicare and Medicaid Services expanded its Hospital Value-Based Purchasing program by rewarding or penalizing hospitals for their performance on both spending and quality. This represented a sharp departure from the program's original efforts to incentivize hospitals for quality alone. How this change redistributed hospital bonuses and penalties was unknown. Using data from 2,679 US hospitals that participated in the program in fiscal years 2014 and 2015, we found that the new emphasis on spending rewarded not only low-spending hospitals but some low-quality hospitals as well. Thirty-eight percent of low-spending hospitals received bonuses in fiscal year 2014, compared to 100 percent in fiscal year 2015. However, low-quality hospitals also began to receive bonuses (0 percent in fiscal year 2014 compared to 17 percent in 2015). All high-quality hospitals received bonuses in both years. The Centers for Medicare and Medicaid Services should consider incorporating a minimum quality threshold into the Hospital Value-Based Purchasing program to avoid rewarding low-quality, low-spending hospitals. Project HOPE—The People-to-People Health Foundation, Inc.

  17. Automated map sharpening by maximization of detail and connectivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.

    An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
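
    A rough sketch of scoring a map by detail and connectivity at a fixed-volume iso-contour is given below, using scikit-image's marching cubes and SciPy's connected-component labelling as stand-ins; the way the two terms are combined here is an assumption for illustration and not the published adjusted-surface-area formula.

```python
import numpy as np
from scipy import ndimage
from skimage import measure

def adjusted_surface_area_proxy(density_map, volume_fraction=0.2, region_weight=1.0):
    """Score a map by detail (iso-surface area) and connectivity (few regions).

    The iso-contour level is chosen so that `volume_fraction` of the voxels lie
    above it, mirroring the fixed-volume contour described in the abstract.
    Combining the two terms as area minus a penalty per connected region is an
    illustrative assumption, not the published formula.
    """
    level = np.quantile(density_map, 1.0 - volume_fraction)
    verts, faces, _, _ = measure.marching_cubes(density_map, level=level)
    area = measure.mesh_surface_area(verts, faces)            # detail term
    _, n_regions = ndimage.label(density_map >= level)        # connectivity term
    return area - region_weight * n_regions

# Illustrative use: compare two synthetic candidate maps with different detail.
rng = np.random.default_rng(1)
raw = rng.normal(size=(32, 32, 32))
for sigma in (1.0, 3.0):
    candidate = ndimage.gaussian_filter(raw, sigma=sigma)     # less smoothing = more detail
    print(sigma, round(adjusted_surface_area_proxy(candidate), 1))
```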

  18. Robust MR-based approaches to quantifying white matter structure and structure/function alterations in Huntington's disease

    PubMed Central

    Steventon, Jessica J.; Trueman, Rebecca C.; Rosser, Anne E.; Jones, Derek K.

    2016-01-01

    Background Huge advances have been made in understanding and addressing confounds in diffusion MRI data to quantify white matter microstructure. However, there has been a lag in applying these advances in clinical research. Some confounds are more pronounced in HD, which impedes data quality and the interpretability of patient-control differences. This study presents an optimised analysis pipeline and addresses specific confounds in an HD patient cohort. Method 15 HD gene-positive and 13 matched control participants were scanned on a 3T MRI system with two diffusion MRI sequences. An optimised post-processing pipeline included motion, eddy current and EPI correction, rotation of the B matrix, free water elimination (FWE) and tractography analysis using an algorithm capable of reconstructing crossing fibres. The corpus callosum was examined using both a region-of-interest and a deterministic tractography approach, using both conventional diffusion tensor imaging (DTI)-based and spherical deconvolution analyses. Results Correcting for CSF contamination significantly altered microstructural metrics and the detection of group differences. Reconstructing the corpus callosum using spherical deconvolution produced a more complete reconstruction with greater sensitivity to group differences, compared to DTI-based tractography. Tissue volume fraction (TVF) was reduced in HD participants and was more sensitive to disease burden compared to DTI metrics. Conclusion Addressing confounds in diffusion MR data results in more valid, anatomically faithful white matter tract reconstructions with reduced within-group variance. TVF is recommended as a complementary metric, providing insight into the relationship with clinical symptoms in HD not fully captured by conventional DTI metrics. PMID:26335798

  19. Automated map sharpening by maximization of detail and connectivity

    DOE PAGES

    Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.; ...

    2018-05-18

    An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.

  20. Universal health coverage in Rwanda: dream or reality

    PubMed Central

    Nyandekwe, Médard; Nzayirambaho, Manassé; Baptiste Kakoma, Jean

    2014-01-01

    Introduction Universal Health Coverage (UHC) has long been a global concern, and even more so nowadays. While a number of publications are almost unanimous that Rwanda is not far from UHC, very few have focused on its financial sustainability and on its extreme external financial dependency. The objectives of this study are: (i) to assess Rwanda's UHC, based mainly on Community-Based Health Insurance (CBHI), from 2000 to 2012; (ii) to inform policy makers about observed gaps for a better way forward. Methods A retrospective (2000-2012) SWOT analysis was applied to six metrics as key indicators of UHC achievement related to the WHO definition: (i) health insurance and access to care, (ii) equity, (iii) package of services, (iv) rights-based approach, (v) quality of health care, and (vi) financial-risk protection; a seventh metric, (vii) CBHI self-financing capacity (SFC), was added by the authors. Results With the first metric at 96.15% overall health insurance coverage and 1.07 visits per capita per year versus the 1 visit recommended by WHO, the second at 24.8% of indigent people subsidized versus 24.1% living in extreme poverty, the third, fourth, and fifth metrics performing excellently, the sixth at 10.80% versus the ≤40% acceptable limit for catastrophic health spending, and lastly the CBHI SFC (proper cost recovery) estimated at 82.55% in 2011/2012, Rwanda's UHC achievements are objectively convincing. Conclusion Rwanda's UHC is not a dream but a reality if we consider the convincing results derived from the seven metrics. PMID:25170376
